Chapter 17. Learning and Memory
Ian Sample, Science editor

For Jules Verne it was the friend who keeps us waiting. For Edgar Allan Poe, so many little slices of death. But though the reason we spend a third of our lives asleep has so far resisted scientific explanation, research into the impact of sleepless nights on brain function has shed fresh light on the mystery – and also offered intriguing clues to potential treatments for depression.

In a study published on Tuesday, researchers show for the first time that sleep resets the steady build-up of connectivity in the human brain that takes place in our waking hours. The process appears to be crucial for our brains to remember and learn so we can adapt to the world around us. The loss of a single night’s sleep was enough to block the brain’s natural reset mechanism, the scientists found. Deprived of rest, the brain’s neurons seemingly became over-connected and so muddled with electrical activity that new memories could not be properly laid down.

But Christoph Nissen, a psychiatrist who led the study at the University of Freiburg, is also excited about the potential for helping people with mental health disorders. One radical treatment for major depression is therapeutic sleep deprivation, which Nissen believes works by changing the patient’s brain connectivity. The new research offers a deeper understanding of the phenomenon, which could be adapted to produce more practical treatments. © 2016 Guardian News and Media Limited
By Emily Underwood

In 2010, neurobiologist Beth Stevens had completed a remarkable rise from laboratory technician to star researcher. Then 40, she was in her second year as a principal investigator at Boston Children’s Hospital with a joint faculty position at Harvard Medical School. She had a sleek, newly built lab and a team of eager postdoctoral investigators. Her credentials were impeccable, with high-profile collaborators and her name on an impressive number of papers in well-respected journals. But like many young researchers, Stevens feared she was on the brink of scientific failure. Rather than choosing a small, manageable project, she had set her sights on tackling an ambitious, unifying hypothesis linking the brain and the immune system to explain both normal brain development and disease. Although the preliminary data she’d gathered as a postdoc at Stanford University in Palo Alto, California, were promising, their implications were still murky. “I thought, ‘What if my model is just a model, and I let all these people down?’” she says. Stevens, along with her mentor at Stanford, Ben Barres, had proposed that brain cells called microglia prune neuronal connections during embryonic and later development in response to a signal from a branch of the immune system known as the classical complement pathway. If a glitch in the complement system causes microglia to prune too many or too few connections, called synapses, they’d hypothesized, it could lead to both developmental and degenerative disorders. © 2016 American Association for the Advancement of Science.
By Jessica Hamzelou

Feel like you’ve read this before? Most of us have experienced the eerie familiarity of déjà vu, and now the first brain scans of this phenomenon have revealed why – it’s a sign of our brain checking its memory. Déjà vu was thought to be caused by the brain making false memories, but research by Akira O’Connor at the University of St Andrews, UK, and his team now suggests this is wrong. Exactly how déjà vu works has long been a mystery, partly because its fleeting and unpredictable nature makes it difficult to study. To get around this, O’Connor and his colleagues developed a way to trigger the sensation of déjà vu in the lab. The team’s technique uses a standard method to trigger false memories. It involves telling a person a list of related words – such as bed, pillow, night, dream – but not the key word linking them together, in this case, sleep. When the person is later quizzed on the words they’ve heard, they tend to believe they have also heard “sleep” – a false memory. To create the feeling of déjà vu, O’Connor’s team first asked people if they had heard any words beginning with the letter “s”. The volunteers replied that they hadn’t. This meant that when they were later asked if they had heard the word sleep, they were able to remember that they couldn’t have, but at the same time, the word felt familiar. “They report having this strange experience of déjà vu,” says O’Connor. © Copyright Reed Business Information Ltd.
By Anna Azvolinsky

Sets of neurons in the brain that behave together—firing synchronously in response to sensory or motor stimuli—are thought to be functionally and physiologically connected. These naturally occurring ensembles of neurons are one of the ways memories may be programmed in the brain. Now, in a paper published today (August 11) in Science, researchers at Columbia University and their colleagues show that it is possible to stimulate visual cortex neurons in living, awake mice and induce a new ensemble of neurons that behave as a group and maintain their concerted firing for several days. “This work takes the concept of correlated [neuronal] firing patterns in a new and important causal direction,” David Kleinfeld, a neurophysicist at the University of California, San Diego, who was not involved in the work, told The Scientist. “In a sense, [the researchers] created a memory for a visual feature that does not exist in the physical world as a proof of principle of how real visual memories are formed.” “Researchers have previously related optogenetic stimulation to behavior [in animals], but this study breaks new ground by investigating the dynamics of neural activity in relation to the ensemble to which these neurons belong,” said Sebastian Seung, a computational neuroscientist at the Princeton Neuroscience Institute in New Jersey who also was not involved in the study. Columbia’s Rafael Yuste and colleagues stimulated randomly selected sets of individual neurons in the visual cortices of living mice using two-photon stimulation while the animals ran on a treadmill. © 1986-2016 The Scientist
Ed Yong

At the age of seven, Henry Gustav Molaison was involved in an accident that left him with severe epilepsy. Twenty years later, a surgeon named William Scoville tried to cure him by removing parts of his brain. It worked, but the procedure left Molaison unable to make new long-term memories. Everyone he met, every conversation he had, everything that happened to him would just evaporate from his mind. These problems revolutionized our understanding of how memory works, and transformed Molaison into “Patient H.M.”—arguably the most famous and studied patient in the history of neuroscience. That’s the familiar version of the story, but the one presented in Luke Dittrich’s new book Patient H.M.: A Story of Memory, Madness, and Family Secrets is deeper and darker. As revealed through Dittrich’s extensive reporting and poetic prose, Molaison’s tale is one of ethical dilemmas that not only influenced his famous surgery but persisted well beyond his death in 2008. It’s a story about more than just the life of one man or the root of memory; it’s also about how far people are willing to go for scientific advancement, and the human cost of that progress. And Dittrich is uniquely placed to consider these issues. Scoville was his grandfather. Suzanne Corkin, the scientist who worked with Molaison most extensively after his surgery, was an old friend of his mother’s. I spoke to him about the book and the challenges of reporting a story that he was so deeply entwined in. Most of this interview was conducted on July 19th. Following a New York Times excerpt published on August 7th, and the book’s release two weeks later, many neuroscientists have expressed “outrage” at Dittrich’s portrayal of Corkin. The controversy culminated in a statement from MIT, where Corkin was based, rebutting three allegations in the book. Dittrich has himself responded to the rebuttals, and at the end of this interview, I talk to him about the debate. © 2016 by The Atlantic Monthly Group.
Keyword: Learning & Memory
Link ID: 22552 - Posted: 08.13.2016
Like many students of neuroscience, I first learned of patient HM in a college lecture. His case was so strange yet so illuminating, and I was immediately transfixed. HM was unable to form new memories, my professor explained, because a surgeon had removed a specific part of his brain. The surgery froze him in time. HM—or Henry Molaison, as his name was revealed to be after his death in 2008—might be the most famous patient in the history of brain research. He is now the subject of the new book, Patient HM: A Story of Memory, Madness, and Family Secrets. An excerpt from the book in the New York Times Magazine, which details MIT neuroscientist Sue Corkin’s custody fight over HM’s brain after his death, has since sparked a backlash. Should you wish to go down that particular rabbit hole, you can read MIT’s response, the book author’s response to the response, and summaries of the back and forth. Why HM’s brain was worth fighting over should be obvious; he was probably the most studied individual in neuroscience while alive. But in the seven years since scientists sectioned HM’s brain into 2,401 slices, it has yielded surprisingly little research. Only two papers examining his brain have come out, and so far, physical examinations have led to no major insights. HM’s scientific potential remains unfulfilled—thanks to delays from the custody fight and the limitations of current neuroscience itself. Corkin, who made her career studying HM, confronted her complicated emotions about his death in her own 2013 book. She describes being “ecstatic to see his brain removed expertly from his skull.” Corkin passed away earlier this year.
Keyword: Learning & Memory
Link ID: 22551 - Posted: 08.13.2016
Tim Radford

Eight paraplegics – some of them paralysed for more than a decade by severe spinal cord injury – have been able to move their legs and feel sensation, after help from an artificial exoskeleton, sessions using virtual reality (VR) technology and a non-invasive system that links the brain with a computer. In effect, after just 10 months of what their Brazilian medical team call “brain training” they have been able to make a conscious decision to move and then get a response from muscles that have not been used for a decade. Of the octet, one has been able to leave her house and drive a car. Another has conceived and delivered a child, feeling the contractions as she did so. The extent of the improvements was unexpected. The scientists had intended to exploit advanced computing and robotic technology to help paraplegics recover a sense of control in their lives. But their patients recovered some feeling and direct command as well. The implication is that even apparently complete spinal cord injury might leave some connected nerve tissue that could be reawakened after years of inaction. The patients responded unevenly, but all have reported partial restoration of muscle movement or skin sensation. Some have even recovered visceral function and are now able to tell when they need the lavatory. And although none of them can walk unaided, one woman has been able to make walking movements with her legs, while suspended in a harness, and generate enough force to make a robot exoskeleton move. © 2016 Guardian News and Media Limited
By Sharon Begley

The Massachusetts Institute of Technology brain sciences department and, separately, a group of some 200 neuroscientists from around the world have written letters to The New York Times claiming that a book excerpt in the newspaper’s Sunday magazine this week contains important errors, misinterpretations of scientific disputes, and unfair characterizations of an MIT neuroscientist who did groundbreaking research on human memory. In particular, the excerpt contains a 36-volley verbatim exchange between author Luke Dittrich and MIT’s Suzanne Corkin in which she says that key documents from historic experiments were “shredded.” “Most of it has gone, is in the trash, was shredded,” Corkin is quoted as telling Dittrich before she died in May, explaining, “there’s no place to preserve it.” Destroying files related to historic scientific research would raise eyebrows, but Corkin’s colleagues say it never happened. “We believe that no records were destroyed and, to the contrary, that professor Corkin worked in her final days to organize and preserve all records,” said the letter that Dr. James DiCarlo, head of the MIT Department of Brain and Cognitive Sciences, sent to the Times late Tuesday. Even as Corkin fought advanced liver cancer, he wrote, “she instructed her assistant to continue to organize, label, and maintain all records” related to the research, and “the records currently remain within our department.” © 2016 Scientific American
Keyword: Learning & Memory
Link ID: 22546 - Posted: 08.11.2016
By Virginia Morell

Fourteen years ago, a bird named Betty stunned scientists with her humanlike ability to invent and use tools. Captured from the wild and shown a tiny basket of meat trapped in a plastic tube, the New Caledonian crow bent a straight piece of wire into a hook and retrieved the food. Researchers hailed the observation as evidence that these crows could invent new tools on the fly—a sign of complex, abstract thought that became regarded as one of the best demonstrations of this ability in an animal other than a human. But a new study casts doubt on at least some of Betty’s supposed intuition. Scientists have long agreed that New Caledonian crows (Corvus moneduloides), which are found only on the South Pacific island of the same name, are accomplished toolmakers. At the time of Betty’s feat, researchers knew that in the wild these crows could shape either stiff or flexible twigs into tools with a tiny, barblike hook at one end, which they used to lever grubs from rotting logs. They also make rakelike tools from the leaves of the screw pine (Pandanus) tree. But Betty appeared to take things to the next level. Not only did she fashion a hook from a material she’d never previously encountered—a behavior not observed in the wild—she seemed to know she needed this specific shape to solve her particular puzzle. © 2016 American Association for the Advancement of Science.
BENEDICT CAREY

As a boy growing up in Massachusetts, Luke Dittrich revered his grandfather, a brain surgeon whose home was full of exotic instruments. Later, he learned that he was not only a prominent doctor but had played a significant role in modern medical history. In 1953, at Hartford Hospital, Dr. William Scoville had removed two slivers of tissue from the brain of a 27-year-old man with severe epilepsy. The operation relieved his seizures but left the patient — Henry Molaison, a motor repairman — unable to form new memories. Known as H. M. to protect his privacy, Mr. Molaison went on to become the most famous patient in the history of neuroscience, participating in hundreds of experiments that have helped researchers understand how the brain registers and stores new experiences. By the time Mr. Dittrich was out of college — and after a year and a half in Egypt, teaching English — he had become fascinated with H. M., brain science and his grandfather’s work. He set out to write a book about the famous case but discovered something unexpected along the way. His grandfather was one of a cadre of top surgeons who had performed lobotomies and other “psycho-surgeries” on thousands of people with mental problems. This was not a story about a single operation that went wrong; it was far larger. The resulting book — “Patient H. M.: A Story of Memory, Madness, and Family Secrets,” to be published Tuesday — describes a dark era of American medicine through a historical, and deeply personal, lens.

Why should scientists and the public know this particular story in more detail?

The textbook story of Patient H. M. — the story I grew up with — presents the operation my grandfather performed on Henry as a sort of one-off mistake. It was not. Instead, it was the culmination of a long period of human experimentation that my grandfather and other leading doctors and researchers had been conducting in hospitals and asylums around the country. © 2016 The New York Times Company
Keyword: Learning & Memory
Link ID: 22531 - Posted: 08.09.2016
By Julia Shaw

Every memory you have ever had is chock-full of errors. I would even go as far as saying that memory is largely an illusion. This is because our perception of the world is deeply imperfect, our brains only bother to remember a tiny piece of what we actually experience, and every time we remember something we have the potential to change the memory we are accessing. I often write about the ways in which our memory leads us astray, with a particular focus on ‘false memories.’ False memories are recollections that feel real but are not based on actual experience. For this particular article I invited a few top memory researchers to comment on what they wish everyone knew about their field. First up, we have Elizabeth Loftus from the University of California, Irvine, who is one of the founders of the area of false memory research, and is considered one of the most ‘eminent psychologists of the 20th century.’ Elizabeth Loftus says you need independent evidence to corroborate your memories. According to Loftus: “The one take home message that I have tried to convey in my writings, and classes, and in my TED talk is this: Just because someone tells you something with a lot of confidence and detail and emotion, it doesn't mean it actually happened. You need independent corroboration to know whether you're dealing with an authentic memory, or something that is a product of some other process.” Next up, we have memory scientist Annelies Vredeveldt from the Vrije Universiteit Amsterdam, who has done fascinating work on how well we remember when we recall things with other people. © 2016 Scientific American
Keyword: Learning & Memory
Link ID: 22530 - Posted: 08.09.2016
Pete Etchells

Mind gamers: How good do you reckon your memory is? We might forget things from time to time, but the stuff we do remember is pretty accurate, right? The trouble is, our memory isn’t as infallible as we might want to believe, and you can test this for yourself using the simple experiment below. All done? Great. Now we’re going to do a simple recognition test – below is another list of words for you to look at. Without looking back, note down which of them appeared in the three lists you just scanned. No cheating! If you said that top, seat and yawn were in the lists, you’re spot on. Likewise, if you think that slow, sweet and strong didn’t appear anywhere, you’re also right. What about chair, mountain and sleep though? They sound like they should have been in the lists, but they never made an appearance. Some of you may have spotted this, but a lot of people tend to say, with a fair amount of certainty, that the words were present. This experiment comes from a classic 1995 study by Henry L. Roediger and Kathleen McDermott at Rice University in Texas. Based on earlier work by James Deese (hence the name Deese-Roediger-McDermott, or DRM, paradigm), participants heard a series of word lists, which they then had to recall from memory. After a brief conversation with the researcher, the participants were then given a new list of words. Critically, this new list contained some words that were associated with every single item on each of the initial lists – for example, while sleep doesn’t appear on list 3 above, it’s related to each word that does appear (bed, rest, tired, and so on). © 2016 Guardian News and Media Limited
Keyword: Learning & Memory
Link ID: 22526 - Posted: 08.08.2016
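The scoring logic of a DRM-style recognition test, as described in the articles above, can be sketched in a few lines of Python. The study list, the critical lure, and the sample responses below are illustrative stand-ins, not the normed materials Roediger and McDermott actually used.

```python
# Sketch of scoring a DRM recognition test. A "false memory" is an
# "old" response to the critical lure: a word related to every studied
# item but never actually presented.

STUDY_LIST = ["bed", "rest", "tired", "dream", "pillow", "night"]
CRITICAL_LURE = "sleep"  # related to all studied words, never shown

def score_response(word, said_old):
    """Classify a single old/new recognition judgment."""
    if word in STUDY_LIST:
        return "hit" if said_old else "miss"
    if word == CRITICAL_LURE:
        return "false memory" if said_old else "correct rejection"
    return "false alarm" if said_old else "correct rejection"

# A typical participant endorses studied words AND the unstudied lure.
responses = {"bed": True, "night": True, "sleep": True,
             "chair": False, "mountain": False}

for word, said_old in responses.items():
    print(f"{word:10s} -> {score_response(word, said_old)}")
```

The point of the paradigm is that "old" responses to the lure pattern like hits, even though, as with "sleep" here, the word was never presented.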
By LUKE DITTRICH

“Can you tell me who the president of the United States is at the moment?” A man and a woman sat in an office in the Clinical Research Center at the Massachusetts Institute of Technology. It was 1986, and the man, Henry Molaison, was about to turn 60. He was wearing sweatpants and a checkered shirt and had thick glasses and thick hair. He pondered the question for a moment. “No,” he said. “I can’t.” The woman, Jenni Ogden, was a visiting postdoctoral research fellow from the University of Auckland, in New Zealand. One of the greatest thrills of her time at M.I.T. was the chance to have sit-down sessions with Henry. In her field — neuropsychology — he was a legendary figure, something between a rock star and a saint. “Who’s the last president you remember?” “I don’t. ... ” He paused for a second, mulling over the question. He had a soft, tentative voice, a warm New England accent. “Ike,” he said finally. Dwight D. Eisenhower’s inauguration took place in 1953. Our world had spun around the sun more than 30 times since, though Henry’s world had stayed still, frozen in orbit. This is because 1953 was the year he received an experimental operation, one that destroyed most of several deep-seated structures in his brain, including his hippocampus, his amygdala and his entorhinal cortex. The operation, performed on both sides of his brain and intended to treat Henry’s epilepsy, rendered him profoundly amnesiac, unable to hold on to the present moment for more than 30 seconds or so. That outcome, devastating to Henry, was a boon to science: By 1986, Patient H.M. — as he was called in countless journal articles and textbooks — had become arguably the most important human research subject of all time, revolutionizing our understanding of how memory works. © 2016 The New York Times Company
Keyword: Learning & Memory
Link ID: 22519 - Posted: 08.04.2016
by Helen Thompson

Pinky and The Brain's smarts might not be so far-fetched. Some mice are quicker on the uptake than others. While it might not lead to world domination, wits have their upside: a better shot at staying alive. Biologists Audrey Maille and Carsten Schradin of the University of Strasbourg in France tested reaction time and spatial memory in 90 African striped mice (Rhabdomys pumilio) over the course of a summer. For this particular wild rodent, surviving harsh summer droughts means making it to mating season in the early fall. The team saw some overall trends: Females were more likely to survive if they had quick reflexes, and males were more likely to survive if they had good spatial memory. Cognitive traits like reacting quickly and remembering the best places to hide are key to eluding predators during these tough times but may come with trade-offs for males and females. The results show that an individual mouse’s cognitive strengths are linked to its survival odds, suggesting that the pressure to survive can shape basic cognition, Maille and Schradin write August 3 in Biology Letters. © Society for Science & the Public 2000 - 2016
Meghan Rosen

Exercise may not erase old memories, as some studies in animals have previously suggested. Running on an exercise wheel doesn’t make rats forget previous trips through an underwater maze, Ashok Shetty and colleagues report August 2 in the Journal of Neuroscience. Exercise or not, four weeks after learning how to find a hidden platform, rats seem to remember the location just fine, the team found. The results conflict with two earlier papers that show that running triggers memory loss in some rodents by boosting the birth of new brain cells. Making new brain cells rejiggers memory circuits, and that can make it hard for animals to remember what they’ve learned, says Paul Frankland, a neuroscientist at the Hospital for Sick Children in Toronto. He has reported this phenomenon in mice, guinea pigs and degus (SN: 6/14/14, p. 7). Maybe rats are the exception, he says, “but I’m not convinced.” In 2014, Frankland and colleagues reported that brain cell genesis clears out fearful memories in three different kinds of rodents. Two years later, Frankland’s team found similar results with spatial memories. After exercising, mice had trouble remembering the location of a hidden platform in a water maze, the team reported in February in Nature Communications. Again, Frankland and colleagues pinned the memory wipeout on brain cell creation — like a chalkboard eraser that brushes away old information. The wipe seemed to clear the way for new memories to form. Shetty, a neuroscientist at Texas A&M Health Science Center in Temple, wondered if the results held true in rats, too. “Rats are quite different from mice,” he says. “Their biology is similar to humans.” © Society for Science & the Public 2000 - 2016. All rights reserved.
Keyword: Learning & Memory
Link ID: 22510 - Posted: 08.03.2016
By Bahar Gholipour

After reflexively reaching out to grab a hot pan falling from the stove, you may be able to withdraw your hand at the very last moment to avoid getting burned. That is because the brain's executive control can step in to break a chain of automatic commands. Several new lines of evidence suggest that the same may be true when it comes to the reflex of recollection—and that the brain can halt the spontaneous retrieval of potentially painful memories. Within the brain, memories sit in a web of interconnected information. As a result, one memory can trigger another, making it bubble up to the surface without any conscious effort. “When you get a reminder, the mind's automatic response is to do you a favor by trying to deliver the thing that's associated with it,” says Michael Anderson, a neuroscientist at the University of Cambridge. “But sometimes we are reminded of things we would rather not think about.” Humans are not helpless against this process, however. Previous imaging studies suggest that the brain's frontal areas can dampen the activity of the hippocampus, a crucial structure for memory, and therefore suppress retrieval. In an effort to learn more, Anderson and his colleagues recently investigated what happens after the hippocampus is suppressed. They asked 381 college students to learn pairs of loosely related words. Later, the students were shown one word and asked to recall the other—or to do the opposite and to actively not think about the other word. Sometimes between these tasks they were shown unusual images, such as a peacock standing in a parking lot. © 2016 Scientific American
Keyword: Learning & Memory
Link ID: 22500 - Posted: 08.01.2016
By Richard Kemeny

Sleep is essential for memory. Mounting evidence continues to support the notion that the nocturnal brain replays, stabilizes, reorganizes, and strengthens memories while the body is at rest. Recently, one particular facet of this process has piqued the interest of a growing group of neuroscientists: sleep spindles. For years these brief bursts of brain activity have been largely ignored. Now it seems that examining these neuronal pulses could help researchers better understand—perhaps even treat—cognitive impairments. Sleep spindles are a defining characteristic of stage 2 non-rapid eye movement (NREM) sleep. These electrical bursts of 10–16 Hz last only around a second, and are known to occur in the human brain thousands of times per night. Generated by a thin net of neurons enveloping the thalamus, spindles appear across several regions of the brain, and are thought to perform various functions, including maintaining sleep in the face of disturbances in the environment. It appears they are also a fundamental part of the process by which the human brain consolidates memories during sleep. A memory formed during the day is stored temporarily in the hippocampus, before being spontaneously replayed during the night. Information about the memory is distributed out and integrated into the neocortex through an orchestra of slow waves, spindles, and rapid hippocampal ripples. Spindles, it seems, could be a guiding force—providing the plasticity and coordination needed for this delicate, interregional transfer of information. © 1986-2016 The Scientist
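As a rough illustration of what "spindle-band activity" means in signal terms, the sketch below uses a Fourier transform to measure how much of a trace's power falls in the 10–16 Hz band. The sampling rate and the synthetic one-second signals are assumptions for illustration only; real spindle detectors are considerably more careful than this.

```python
import numpy as np

FS = 250                              # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / FS)         # one second of samples

quiet = 0.5 * np.sin(2 * np.pi * 1.0 * t)       # slow-wave background only
spindle = quiet + np.sin(2 * np.pi * 13.0 * t)  # plus a 13 Hz spindle-like burst

def band_fraction(signal, fs, lo=10.0, hi=16.0):
    """Fraction of total spectral power inside the spindle (10-16 Hz) band."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= lo) & (freqs <= hi)
    return power[in_band].sum() / power.sum()

print(round(band_fraction(quiet, FS), 2))    # ~0.0: no spindle-band power
print(round(band_fraction(spindle, FS), 2))  # ~0.8: the burst dominates
```

A practical detector would slide such a window along an overnight EEG recording and flag the ~1 second stretches where the spindle band briefly dominates.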
By Gretchen Reynolds

Learning requires more than the acquisition of unfamiliar knowledge; that new information or know-how, if it’s to be more than ephemeral, must be consolidated and securely stored in long-term memory. Mental repetition is one way to do that, of course. But mounting scientific evidence suggests that what we do physically also plays an important role in this process. Sleep, for instance, reinforces memory. And recent experiments show that when mice and rats jog on running wheels after acquiring a new skill, they learn much better than sedentary rodents do. Exercise seems to increase the production of biochemicals in the body and brain related to mental function. Researchers at the Donders Institute for Brain, Cognition and Behavior at Radboud University in the Netherlands and the University of Edinburgh have begun to explore this connection. For a study published this month in Current Biology, 72 healthy adult men and women spent about 40 minutes undergoing a standard test of visual and spatial learning. They observed pictures on a computer screen and then were asked to remember their locations. Afterward, the subjects all watched nature documentaries. Two-thirds of them also exercised: Half were first put through interval training on exercise bicycles for 35 minutes immediately after completing the test; the others did the same workout four hours after the test. Two days later, everyone returned to the lab and repeated the original computerized test while an M.R.I. machine scanned their brain activity. Those who exercised four hours after the test recognized and recreated the picture locations most accurately. Their brain activity was subtly different, too, showing a more consistent pattern of neural activity. The study’s authors suggest that their brains might have been functioning more efficiently because they had learned the patterns so fully.
But why delaying exercise for four hours was more effective than an immediate workout remains mysterious. By contrast, rodents do better in many experiments if they work out right after learning. © 2016 The New York Times Company
Keyword: Learning & Memory
Link ID: 22486 - Posted: 07.28.2016
By Sharon Begley, STAT

For the first time ever, researchers have managed to reduce people’s risk for dementia — not through a medicine, special diet, or exercise, but by having healthy older adults play a computer-based brain-training game. The training nearly halved the incidence of Alzheimer’s disease and other devastating forms of cognitive and memory loss in older adults a decade after they completed it, scientists reported on Sunday. If the surprising finding holds up, the intervention would be the first of any kind — including drugs, diet, and exercise — to do that. “I think these results are highly, highly promising,” said George Rebok of the Johns Hopkins Bloomberg School of Public Health, an expert on cognitive aging who was not involved in the study. “It’s exciting that this intervention pays dividends so far down the line.” The results, presented at the Alzheimer’s Association International Conference in Toronto, come from the government-funded ACTIVE (Advanced Cognitive Training for Independent and Vital Elderly) study. Starting in 1998, ACTIVE’s 2,832 healthy older adults (average age at the start: 74) received one of three forms of cognitive training, or none, and were evaluated periodically in the years after. In actual numbers, 14 percent of ACTIVE participants who received no training had dementia 10 years later, said psychologist Jerri Edwards of the University of South Florida, who led the study. Among those who completed up to 10 60-to-75-minute sessions of computer-based training in speed-of-processing — basically, how quickly and accurately they can pay attention to, process, and remember brief images on a computer screen — 12.1 percent developed dementia. Of those who completed all 10 initial training sessions plus four booster sessions a few years later, 8.2 percent developed dementia. © 2016 Scientific American
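The "nearly halved" claim can be checked directly from the incidence figures quoted above (14 percent with no training, 12.1 percent with initial training, 8.2 percent with boosters). A minimal sketch; since the article gives no per-group sample sizes, only relative changes are computed:

```python
# Relative reduction in 10-year dementia incidence, using the ACTIVE
# study percentages reported above.
untrained = 0.140  # no training
initial = 0.121    # up to 10 speed-of-processing sessions
boosted = 0.082    # all 10 sessions plus 4 booster sessions

def relative_reduction(control, treated):
    """Fractional drop in incidence versus the control group."""
    return (control - treated) / control

print(f"initial training: {relative_reduction(untrained, initial):.0%} lower")
print(f"with boosters:    {relative_reduction(untrained, boosted):.0%} lower")
# The boosted group's roughly 41% relative reduction is the basis of
# the "nearly halved" description.
```

Note that these are relative risks; in absolute terms the booster group's incidence was 5.8 percentage points lower than the untrained group's.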
By Tim Page

When I returned to California, I brought my diaries into the back yard every afternoon and read them through sequentially, with the hope of learning more about the years before my brain injury. I remembered much of what I’d done professionally, and whatever additional information I needed could usually be found on my constantly vandalized Wikipedia page. Here was the story of an awkward, imperious child prodigy who made his own films and became famous much too early; a music explainer who won a Pulitzer Prize; a driven and obsessive loner whose fascinations led to collaborations with Glenn Gould, Philip Glass and Thomas Pynchon. In 2000, at age 45, I was diagnosed with Asperger’s syndrome. In retrospect, the only surprise is that it took so long. But the diaries offered a more intimate view. Reading them was slow going, and I felt as though my nose was pressed up against the windowpane of my own life. The shaggy-dog accretion of material — phone numbers, long-ago concert dates, coded references to secret loves — all seemed to belong to somebody else. My last clear memory was of a muggy, quiet Sunday morning in July, three months earlier, as I waited for a train in New London, Conn. It was 11:13 a.m., and the train was due to arrive two minutes later. I was contented, proud of my punctuality and expecting an easy ride to New York in the designated “quiet car,” with just enough time to finish whatever book I was carrying. There would be dinner in Midtown with a magical friend, followed by overnight family visits in Baltimore and Washington, and then a flight back to Los Angeles and the University of Southern California, at which point a sabbatical semester would be at an end.