Chapter 17. Learning and Memory
By Ann Robinson

Neuroscience research got a huge boost last week with news of Professor John O’Keefe’s Nobel prize for work on the “brain’s internal GPS system”. It is an exciting new part of the giant jigsaw puzzle of our brain and how it functions. But how does cutting-edge neuroscience research translate into practical advice about how to pass exams, remember names, tot up household bills and find where the hell you left the car in a crowded car park? O’Keefe’s prize was awarded jointly with the Norwegian husband-and-wife team Edvard and May-Britt Moser for their discovery of “place and grid cells” that allow rats to chart where they are. When rats run through a new environment, these cells show increased activity. The same activity happens much faster while the rats are asleep, as they replay the new route. We already knew that the part of the brain known as the hippocampus was involved in spatial awareness in birds and mammals, and this latest work on place cells sheds more light on how we know where we are and where we’re going. In 2000, researchers at University College London led by Dr Eleanor Maguire showed that London taxi drivers develop a pumped-up hippocampus after years of doing the Knowledge and navigating the backstreets of the city. MRI scans showed that cabbies have bigger hippocampuses than average, and that the area gets bigger the longer they do the job. As driver David Cohen said at the time to BBC News: “I never noticed part of my brain growing – it makes you wonder what happened to the rest of it!” © 2014 Guardian News and Media Limited
By Laura Starecheski

From the self-affirmations of Stuart Smalley on Saturday Night Live to countless videos on YouTube, saying nice things to your reflection in the mirror is a self-help trope that's been around for decades, and seems most often aimed at women. The practice, we're told, can help us like ourselves and our bodies more, and even make us more successful — allow us to chase our dreams! Impressed, but skeptical, I took this self-talk idea to one of the country's leading researchers on body image to see if it's actually part of clinical practice. David Sarwer is a psychologist and clinical director at the Center for Weight and Eating Disorders at the University of Pennsylvania. He says that, in fact, a mirror is one of the first tools he uses with some new patients. He stands them in front of a mirror and coaches them to use gentler, more neutral language as they evaluate their bodies. "Instead of saying, 'My abdomen is disgusting and grotesque,' " Sarwer explains, he'll prompt a patient to say, " 'My abdomen is round, my abdomen is big; it's bigger than I'd like it to be.' " The goal, he says, is to remove "negative and pejorative terms" from the patient's self-talk. The underlying notion is that it's not enough for a patient to lose physical weight — or gain it, as some women need to — if she doesn't also change the way her body looks in her mind's eye. This may sound weird. You're either a size 4 or a size 8, right? Not mentally, apparently. In a 2013 study from the Netherlands, scientists watched women with anorexia walk through doorways in a lab. The women, they noticed, turned their shoulders and squeezed sideways, even when they had plenty of room. © 2014 NPR
By Tori Rodriguez

Imagining your tennis serve or mentally running through an upcoming speech might help you perform better, studies have shown, but the reasons why have been unclear. A common theory is that mental imagery activates some of the same neural pathways involved in the actual experience, and a recent study in Psychological Science lends support to that idea. Scientists at the University of Oslo conducted five experiments investigating whether eye pupils adjust to imagined light as they do to real light, in an attempt to see whether mental imagery can trigger automatic neural processes such as pupil dilation. Using infrared eye-tracking technology, they measured the diameter of participants' pupils as they viewed shapes of varying brightness and as they imagined the shapes they viewed or visualized a sunny sky or a dark room. In response to imagined light, pupils constricted 87 percent as much as they did during actual viewing, on average; in response to imagined darkness, pupils dilated 56 percent as much as they did during real perception. Two other experiments ruled out the possibility that participants were able to adjust their pupil size at will or that pupils were changing in response to mental effort, which can cause dilation. The finding helps to explain why imagined rehearsals can improve your game. The mental picture activates and strengthens the very neural circuits—even subconscious ones that control automated processes like pupil dilation—that you will need to recruit when it is time to perform. © 2014 Scientific American
Keyword: Learning & Memory
Link ID: 20176 - Posted: 10.08.2014
By Lawrence K. Altman

A British-American scientist and a pair of Norwegian researchers were awarded this year’s Nobel Prize in Physiology or Medicine on Monday for discovering “an inner GPS in the brain” that enables virtually all creatures to navigate their surroundings. John O’Keefe, 75, will receive half of the $1.1 million prize; May-Britt Moser, 51, and Edvard I. Moser, 52, only the second married couple to win a Nobel in medicine, will share the other half. The three scientists’ discoveries “have solved a problem that has occupied philosophers and scientists for centuries — how does the brain create a map of the space surrounding us and how can we navigate our way through a complex environment?” said the Karolinska Institute in Sweden, which chooses the laureates. The positioning system they discovered helps us know where we are, find our way from place to place and store the information for the next time, said Goran K. Hansson, secretary of the Karolinska’s Nobel Committee. The researchers documented that certain cells are responsible for the higher cognitive function that steers the navigational system. Dr. O’Keefe began using neurophysiological methods in the late 1960s to study how the brain controls behavior and sense of direction. In 1971, he discovered the first component of the inner navigational system in rats. He identified nerve cells in the hippocampus region of the brain that were always activated when a rat was at a certain location. © 2014 The New York Times Company
Keyword: Learning & Memory
Link ID: 20169 - Posted: 10.07.2014
By Gretchen Vogel

Research on how the brain knows where it is has bagged the 2014 Nobel Prize in Physiology or Medicine, the Nobel Committee has announced from Stockholm. One half of the prize goes to John O'Keefe, director of the Sainsbury Wellcome Centre for Neural Circuits and Behaviour at University College London. The other half goes to a husband-and-wife team: May-Britt Moser, who is director of the Centre for Neural Computation in Trondheim, and Edvard Moser, director of the Kavli Institute for Systems Neuroscience in Trondheim. "In 1971, John O’Keefe discovered the first component of this positioning system," the Nobel Committee says in a statement that was just released. "He found that a type of nerve cell in an area of the brain called the hippocampus was always activated when a rat was at a certain place in a room. Other nerve cells were activated when the rat was at other places. O’Keefe concluded that these “place cells” formed a map of the room." "More than three decades later, in 2005, May‐Britt and Edvard Moser discovered another key component of the brain’s positioning system," the statement goes on to explain. "They identified another type of nerve cell, which they called “grid cells”, that generate a coordinate system and allow for precise positioning and pathfinding. Their subsequent research showed how place and grid cells make it possible to determine position and to navigate." © 2014 American Association for the Advancement of Science
Keyword: Learning & Memory
Link ID: 20163 - Posted: 10.06.2014
By Alison Abbott

The fact that Edvard and May-Britt Moser have collaborated for 30 years — and been married for 28 — has done nothing to dull their passion for the brain. They talk about it at breakfast. They discuss its finer points at their morning lab meeting. And at a local restaurant on a recent summer evening, they are still deep into a back-and-forth about how their own brains know where they are and will guide them home. “Just to walk there, we have to understand where we are now, where we want to go, when to turn and when to stop,” says May-Britt. “It's incredible that we are not permanently lost.” If anyone knows how we navigate home, it is the Mosers. They shot to fame in 2005 with their discovery of grid cells deep in the brains of rats. These intriguing cells, which are also present in humans, work much like the Global Positioning System, allowing animals to understand their location. The Mosers have since carved out a niche studying how grid cells interact with other specialized neurons to form what may be a complete navigation system that tells animals where they are going and where they have been. Studies of grid cells could help to explain how memories are formed, and why recalling events so often involves re-envisioning a place, such as a room, street or landscape. While pursuing their studies, the two scientists have become a phenomenon. Tall and good-looking, they operate like a single brain in two athletic bodies in their generously funded lab in Trondheim, Norway — a remote corner of northern Europe just 350 kilometres south of the Arctic Circle. They publish together and receive prizes as a single unit — most recently, the Nobel Prize in Physiology or Medicine, which they won this week with their former supervisor, neuroscientist John O’Keefe at University College London.
In 2007, while still only in their mid-40s, they won a competition run by the Kavli Foundation of Oxnard, California, to build and direct one of only 17 Kavli Institutes around the world. The Mosers are now minor celebrities in their home country, and their institute has become a magnet for other big thinkers in neuroscience. “It is definitely intellectually stimulating to be around them,” says neurobiologist Nachum Ulanovsky from the Weizmann Institute of Science in Rehovot, Israel, who visited the Trondheim institute for the first time in September. © 2014 Nature Publishing Group
Keyword: Learning & Memory
Link ID: 20162 - Posted: 10.06.2014
By Alina Tugend

Many workers now feel as if they’re doing the job of three people. They are on call 24 hours a day. They rush their children from tests to tournaments to tutoring. The stress is draining, both mentally and physically. At least that is the standard story about stress. It turns out, though, that many of the common beliefs about stress don’t necessarily give the complete picture. Misconception No. 1: Stress is usually caused by having too much work. While being overworked can be overwhelming, research increasingly shows that being underworked can be just as challenging. In essence, boredom is stressful. “We tend to think of stress in the original engineering way, that too much pressure or too much weight on a bridge causes it to collapse,” said Paul E. Spector, a professor of psychology at the University of South Florida. “It’s more complicated than that.” Professor Spector and others say too little to do — or underload, as he calls it — can cause many of the physical discomforts we associate with being overloaded, like muscle tension, stomachaches and headaches. A study published this year in the journal Experimental Brain Research found that measurements of people’s heart rates, hormonal levels and other factors while watching a boring movie — men hanging laundry — showed greater signs of stress than those watching a sad movie. “We tend to think of boredom as someone lazy, as a couch potato,” said James Danckert, a professor of neuroscience at the University of Waterloo in Ontario, Canada, and a co-author of the paper. “It’s actually when someone is motivated to engage with their environment and all attempts to do so fail. It’s aggressively dissatisfying.” © 2014 The New York Times Company
By Daisy Yuhas

Do we live in a holographic universe? How green is your coffee? And could drinking too much water actually kill you? Before you click those links you might consider how your knowledge-hungry brain is preparing for the answers. A new study from the University of California, Davis, suggests that when our curiosity is piqued, changes in the brain ready us to learn not only about the subject at hand, but incidental information, too. Neuroscientist Charan Ranganath and his fellow researchers asked 19 participants to review more than 100 questions, rating each in terms of how curious they were about the answer. Next, each subject revisited 112 of the questions—half of which strongly intrigued them whereas the rest they found uninteresting—while the researchers scanned their brain activity using functional magnetic resonance imaging (fMRI). During the scanning session participants would view a question, then wait 14 seconds and view a photograph of a face totally unrelated to the trivia before seeing the answer. Afterward the researchers tested participants to see how well they could recall and retain both the trivia answers and the faces they had seen. Ranganath and his colleagues discovered that greater interest in a question would predict not only better memory for the answer but also for the unrelated face that had preceded it. A follow-up test one day later found the same results—people could better remember a face if it had been preceded by an intriguing question. Somehow curiosity could prepare the brain for learning and long-term memory more broadly. The findings are somewhat reminiscent of the work of U.C. Irvine neuroscientist James McGaugh, who has found that emotional arousal can bolster certain memories. But, as the researchers reveal in the October 2 Neuron, curiosity involves very different pathways. © 2014 Scientific American
By John Bohannon

The victim peers across the courtroom, points at a man sitting next to a defense lawyer, and confidently says, "That's him!" Such moments have a powerful sway on jurors who decide the fate of thousands of people every day in criminal cases. But how reliable is eyewitness testimony? A new report concludes that the use of eyewitness accounts needs tighter control, and among its recommendations is a call for a more scientific approach to how eyewitnesses identify suspects during the classic police lineup. For decades, researchers have been trying to nail down what influences eyewitness testimony and how much confidence to place in it. After a year of sifting through the scientific evidence, a committee of psychologists and criminologists organized by the U.S. National Research Council (NRC) has now gingerly weighed in. "This is a serious issue with major implications for our justice system," says committee member Elizabeth Phelps, a psychologist at New York University in New York City. Their 2 October report, Identifying the Culprit: Assessing Eyewitness Identification, is likely to change the way that criminal cases are prosecuted, says Elizabeth Loftus, a psychologist at the University of California, Irvine, who was an external reviewer of the report. As Loftus puts it, "just because someone says something confidently doesn't mean it's true." Jurors can't help but find an eyewitness’s confidence compelling, even though experiments have shown that a person's confidence in their own memory is sometimes undiminished even in the face of evidence that their memory of an event is false. © 2014 American Association for the Advancement of Science.
Keyword: Learning & Memory
Link ID: 20157 - Posted: 10.04.2014
By Helen Thomson

You'll have heard of Pavlov's dogs, conditioned to expect food at the sound of a bell. You might not have heard that a scarier experiment – arguably one of psychology's most unethical – was once performed on a baby. In it, a 9-month-old, at first unfazed by the presence of animals, was conditioned to feel fear at the sight of a rat. The infant was presented with the animal as someone struck a metal pole with a hammer above his head. This was repeated until he cried at merely the sight of any furry object – animate or inanimate. The "Little Albert" experiment, performed in 1919 by John Watson of Johns Hopkins University Hospital in Baltimore, Maryland, was the first to show that a human could be classically conditioned. The fate of Albert B has intrigued researchers ever since. Hall Beck at the Appalachian State University in Boone, North Carolina, has been one of the most tenacious researchers on the case. Watson's papers stated that Albert B was the son of a wet nurse who worked at the hospital. Beck spent seven years exploring potential candidates and used facial analysis to conclude in 2009 that Little Albert was Douglas Merritte, son of hospital employee Arvilla. Douglas was born on the same day as Albert and several other points tallied with Watson's notes. Tragically, medical records showed that Douglas had severe neurological problems and died at an early age of hydrocephalus, or water on the brain. According to his records, this seems to have resulted in vision problems, so much so that at times he was considered blind. © Copyright Reed Business Information Ltd.
Wild marmosets in the Brazilian forest can learn quite successfully from video demonstrations featuring other marmosets, Austrian scientists have reported, showing not only that marmosets are even better learners than previously known, but that video can be used successfully in experiments in the wild. Tina Gunhold, a cognitive biologist at the University of Vienna, had worked with a population of marmoset monkeys in a bit of Brazilian forest before this particular experiment. The forest is not wilderness. It lies near some apartment complexes, and the marmosets are somewhat used to human beings. But the monkeys are wild, and each extended family group has its own foraging territory. Dr. Gunhold and her colleagues reported in the journal Biology Letters this month that they had tested 12 family groups, setting up a series of video monitors, each with a kind of complicated box that they called an “artificial fruit.” All the boxes contained food. Six of the monitors showed just an unchanging image of a marmoset near a similar box. Three of them showed a marmoset opening the box by pulling a drawer, and three others a marmoset lifting a lid to get at the food. Marmosets are very territorial and would not tolerate a strange individual on their turf, but the image of a strange marmoset on video didn’t seem to bother them. Individual marmosets “differed in their reactions to the video,” Dr. Gunhold said. “Some were more shy, some more bold. The younger ones were more attracted to the video, perhaps because of greater curiosity.” © 2014 The New York Times Company
By David Z. Hambrick, Fernanda Ferreira, and John M. Henderson

A decade ago, Magnus Carlsen, who at the time was only 13 years old, created a sensation in the chess world when he defeated former world champion Anatoly Karpov at a chess tournament in Reykjavik, Iceland, and the next day played then-top-rated Garry Kasparov—who is widely regarded as the best chess player of all time—to a draw. Carlsen’s subsequent rise to chess stardom was meteoric: grandmaster status later in 2004; a share of first place in the Norwegian Chess Championship in 2006; youngest player ever to reach World No. 1 in 2010; and highest-rated player in history in 2012. What explains this sort of spectacular success? What makes someone rise to the top in music, games, sports, business, or science? This question is the subject of one of psychology’s oldest debates. In the late 1800s, Francis Galton—founder of the scientific study of intelligence and a cousin of Charles Darwin—analyzed the genealogical records of hundreds of scholars, artists, musicians, and other professionals and found that greatness tends to run in families. For example, he counted more than 20 eminent musicians in the Bach family. (Johann Sebastian was just the most famous.) Galton concluded that experts are “born.” Nearly half a century later, the behaviorist John Watson countered that experts are “made” when he famously guaranteed that he could take any infant at random and “train him to become any type of specialist [he] might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents.” One player needed 22 times more deliberate practice than another player to become a master. © 2014 The Slate Group LLC.
Keyword: Learning & Memory
Link ID: 20136 - Posted: 09.30.2014
By Elijah Wolfson

The class was the most difficult of the fall 2013 semester, and J.D. Leadam had missed all but one lecture. His grandfather’s health had worsened, and he left San Jose State, where he was studying for a degree in business, to return home to Los Angeles to help out. Before he knew it, midterm exams had almost arrived. At this point, Leadam had, for a while, been playing around with transcranial direct-current stimulation, or tDCS, an experimental treatment for all sorts of health issues that, at its most basic, involves running a very weak electric current through the brain. When he first came across tDCS, Leadam was immediately intrigued but thought, “There’s no way I’m gonna put electrodes on my head. It’s just not going to happen.” After extensive research, though, he changed his mind. He looked into buying a device online, but there wasn’t much available — just one extremely expensive machine and then a bare-bones $40 device that didn’t even have a switch. So he dug around online and figured he could build one himself. He bought all the pieces he needed and put it together. He tried it a few times, but didn’t notice much, so he put it aside. But now, with the test looming, he picked it back up. The professor had written a book, and Leadam knew all the information he’d be tested on was written in its pages. “But I’m an auditory learner,” he said, “so I knew it wouldn’t work to just read it.” He strapped on the device, turned it on and read the chapters. “Nothing,” he thought. But when he got to the classroom and put pen to paper, he had a revelation. “I could remember concepts down to the exact paragraphs in the textbook,” Leadam said. “I actually ended up getting an A on the test. I couldn’t believe it.”
Keyword: Learning & Memory
Link ID: 20130 - Posted: 09.29.2014
By Dick Miller, CBC News

Dan Campbell felt the bullets whiz past his head. The tracer rounds zipped between his legs. It was his first firefight as a Canadian soldier in Afghanistan. “I was completely frightened and scared like I’d never been before in my life,” he says. As the attack continued, the sights, sounds and smells started to form memories inside his brain. The fear he felt released the hormone norepinephrine, and in the complex chemistry of the brain, the memories of the battle became associated with the fear. “I think one day, hopefully in the not-too-distant future, we will be able to delete a memory,” says Dr. Sheena Josselyn, a senior scientist at the Hospital for Sick Children Research Institute. Six years later, a sight or sound such as a firecracker or car backfiring can remind Campbell of that night in 2008. The fear comes back and he relives rather than remembers the moments. “It can be hard. Physically, you know, there’s the tapping foot, my heart beating,” he says. Like so many soldiers and victims of assault or people who have experienced horrific accidents, Campbell was diagnosed with post-traumatic stress disorder. Now a newspaper reporter in Yellowknife, Campbell thinks one day he may get therapy. But for now he is working on his own to control the fear and anger the memories bring. © CBC 2014
By Melissa Dahl

Recently, I was visiting my family in Seattle, and we were doing that thing families do: retelling old stories. As we talked, a common theme emerged. My brother hardly remembered anything from our childhood, even the stories in which he was the star player. (That time he fell down the basement steps and needed stitches in the ER? Nope. That panicky afternoon when we all thought he’d disappeared, only to discover he’d been hiding in his room, and then fell asleep? Nothing.) “Boys never remember anything,” my mom huffed. She’s right. Researchers are finding some preliminary evidence that women are indeed better at recalling memories, especially autobiographical ones. Girls and women tend to recall these memories faster and with more specific details, and some studies have demonstrated that these memories tend to be more accurate, too, when compared to those of boys and men. And there’s an explanation for this: It could come down to the way parents talk to their daughters, as compared to their sons, when the children are developing memory skills. To understand this apparent gender divide in recalling memories, it helps to start with early childhood—specifically, ages 2 to 6. Whether you knew it or not, during these years, you learned how to form memories, and researchers believe this happens mostly through conversations with others, primarily our parents. These conversations teach us how to tell our own stories, essentially; when a mother asks her child for more details about something that happened that day in school, for example, she is implicitly communicating that these extra details are essential parts to the story. © 2014 The Slate Group LLC
By Maria Konnikova

At the turn of the twentieth century, Ivan Pavlov conducted the experiments that turned his last name into an adjective. By playing a sound just before he presented dogs with a snack, he taught them to salivate upon hearing the tone alone, even when no food was offered. That type of learning is now called classical—or Pavlovian—conditioning. Less well known is an experiment that Pavlov was conducting at around the same time: when some unfortunate canines heard the same sound, they were given acid. Just as their luckier counterparts had learned to salivate at the noise, these animals would respond by doing everything in their power to get the imagined acid out of their mouths, each “shaking its head violently, opening its mouth and making movements with its tongue.” For many years, Pavlov’s classical conditioning findings overshadowed the darker version of the same discovery, but, in the nineteen-eighties, the New York University neuroscientist Joseph LeDoux revived the technique to study the fear reflex in rats. LeDoux first taught the rats to associate a certain tone with an electric shock so that they froze upon hearing the tone alone. In essence, the rat had formed a new memory—that the tone signifies pain. He then blunted that memory by playing the tone repeatedly without following it with a shock. After multiple shock-less tones, the animals ceased to be afraid. Now a new generation of researchers is trying to figure out the next logical step: re-creating the same effects within the brain, without deploying a single tone or shock. Is memory formation now understood well enough that memories can be implanted and then removed absent the environmental stimulus?
By Filipa Ioannou

Per the Associated Press, the Food and Drug Administration is considering a ban on electric-shock devices that are used to punish unwanted behavior by patients with autism and other developmental disabilities. If it comes as a surprise to you that any involuntary electric shocks are administered to autism patients in the United States, that's because the devices are only used at one facility in the country—the Judge Rotenberg Educational Center in Canton, Mass. The school has been a target of media attention in the past; in 2012, video leaked of 18-year-old patient Andre McCollins being restrained face-down and shocked 31 times. McCollins’ mother sued the center, and the lawsuit was settled outside of court. Rotenberg must get a court’s approval to begin administering skin shocks to a student. The center uses a graduated electronic decelerator, or GED, that is attached to the arms or legs. If the student acts aggressively – head-banging, throwing furniture, attacking someone – then a center worker can press a button to activate the electrode, delivering a two-second shock to the skin. The amount of pain generated by the device is a contentious subject. The Rotenberg Center's Glenda Crookes compared the sensation to “a bee sting” in comments to CBS News, and some Rotenberg parents are strong proponents of the device. But a U.N. official in 2010 said the shocks constituted “torture.” An FDA report also addresses the widely held belief that autistic individuals have a high pain threshold, pointing out the possibility that “not all children with ASD express their pain in the same way as a ‘neurotypical child’ would (e.g., cry, moan, seek comfort, etc.), which may lead to misinterpretation by caregivers and medical professionals that patients are insensitive or to an incorrect belief that the child is not in pain.” © 2014 The Slate Group LLC.
By Elizabeth Pennisi

"What's for dinner?" The words roll off the tongue without even thinking about it—for adults, at least. But how do humans learn to speak as children? Now, a new study in mice shows how a gene called FOXP2, implicated in a language disorder, may have changed between humans and chimps to make learning to speak possible—or at least a little easier. As a uniquely human trait, language has long baffled evolutionary biologists. Not until FOXP2 was linked to a genetic disorder that caused problems in forming words could they even begin to study language’s roots in our genes. Soon after that discovery, a team at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, discovered that just two bases, the letters that make up DNA, distinguished the human and chimp versions of FOXP2. To try to determine how those changes influenced the gene's function, that group put the human version of the gene in mice. In 2009, they observed that these "humanized" mice produced more frequent and complex alarm calls, suggesting the human mutations may have been involved in the evolution of more complex speech. Another study showed that humanized mice have different activity in the part of the brain called the striatum, which is involved in learning, among other tasks. But the details of how the human FOXP2 mutations might affect real-world learning remained murky. To solve the mystery, the Max Planck researchers sent graduate student Christiane Schreiweis to work with Ann Graybiel, a neuroscientist at the Massachusetts Institute of Technology in Cambridge. She's an expert in testing mouse smarts by seeing how quickly they can learn to find rewards in mazes. © 2014 American Association for the Advancement of Science
By Michael Slezak

It's one of the biggest mysteries of Alzheimer's. The disease is associated with the formation of protein plaques in the brain, but why is it that some people with plaques seem not to have the disease? Research suggests that some people's brains are able to reorganise during the early stages of Alzheimer's, delaying the appearance of initial symptoms. The plaques in question are small mounds of a protein called beta-amyloid, and are found in the brains of people with Alzheimer's disease. Whether these plaques are a cause of the disease has been hotly debated. One reason for doubt is the appearance of plaques in many older people who have no symptoms of dementia at all. Using fMRI to measure changes in blood flow around the brain, William Jagust from the University of California, Berkeley, and colleagues compared brain function in three groups of people without symptoms of dementia: 22 young people, 16 older people with beta-amyloid plaques and 33 older people without the plaques. He asked each of them to memorise a photographed scene while inside the machine. Jagust found that older people with plaques had increased blood flow – which means stronger activation of that brain area – in the regions of the brain that are usually activated during memory formation, compared with the older people who did not have plaques. The team then analysed whether this extra brain activation might be helping to compensate for the plaques. © Copyright Reed Business Information Ltd.
By Simon Makin

Talking in your sleep might be annoying, but listening may yet prove useful. Researchers have shown that sleeping brains not only recognise words, but can also categorise them and respond in a previously defined way. This could one day help us learn more efficiently. Sleep appears to render most of us dead to the world, our senses temporarily suspended, but sleep researchers know this is a misleading impression. For instance, a study published in 2012 showed that sleeping people can learn to associate specific sounds and smells. Other work has demonstrated that presenting sounds or smells during sleep boosts performance on memory tasks – providing the sensory cues were also present during the initial learning. Now it seems the capabilities of sleeping brains stretch even further. A team led by Sid Kouider from the École Normale Supérieure in Paris trained 18 volunteers to classify spoken words as either animal or object by pressing buttons with their right or left hand. Brain activity was recorded using EEG, allowing the researchers to measure the telltale spikes in activity that indicate the volunteers were preparing to move one of their hands. Since each hand is controlled by the motor cortex on the opposite side of the brain, these brainwaves can be matched to the intended hand just by looking at which side of the motor cortex is active. © Copyright Reed Business Information Ltd.