Chapter 18. Attention and Higher Cognition
By Lawrence Berger A cognitive scientist and a German philosopher walk into the woods and come upon a tree in bloom: What does each one see? And why does it matter? While that may sound like the set-up to a joke making the rounds at a philosophy conference, I pose it here sincerely, as a way to explore the implications of two distinct strains of thought — that of cognitive science and that of phenomenology, in particular, the thought of Martin Heidegger, who offers a most compelling vision of the ultimate significance of our being here, and what it means to be fully human. When we feel that someone is really listening to us, we feel more alive, we feel our true selves coming to the surface — this is the sense in which worldly presence matters. It can be argued that cognitive scientists tend to ignore the importance of what many consider to be essential features of human existence, preferring to see us as information processors rather than full-blooded human beings immersed in worlds of significance. In general, their intent is to explain human activity and life as we experience it on the basis of physical and physiological processes, the implicit assumption being that this is the domain of what is ultimately real. Since virtually everything that matters to us as human beings can be traced back to life as it is experienced, such thinking is bound to be unsettling. For instance, an article in The Times last year by Michael S. A. Graziano, a professor of psychology and neuroscience at Princeton, about whether we humans are “really conscious,” argued, among other things, that “we don’t actually have inner feelings in the way most of us think we do.” © 2015 The New York Times Company
by Michael Slezak What were we talking about? Oh yes, brain-training programmes may be useful for helping inattentive people focus on tasks in their daily life. At least, that's the implication of an analysis looking at one particular programme. It's the latest salvo in a field that has seen the battle lines drawn between those who believe there is no compelling scientific evidence that training the brain to do a specific task better can offer wider cognitive improvements, and those who think it can work in some cases. The party line is that brain training improves only that which it exercises, says Jared Horvath from the University of Melbourne in Australia. "What this means is, if the training programme uses a working memory game, you get better at working memory games and little else." But an analysis by Megan Spencer-Smith of Monash University in Melbourne, Australia, and Torkel Klingberg at the Karolinska Institute in Stockholm, Sweden, claims to show that there are benefits for daily life – at least for people with attention deficit hyperactivity disorder or other problems related to attentiveness. They focused on a programme called Cogmed, which Klingberg has helped develop, and combined the results of several smaller studies. Cogmed is designed to improve how much verbal or visual information you can temporarily remember and work with. © Copyright Reed Business Information Ltd.
By Nicholas Weiler Where did the thief go? You might get a more accurate answer if you ask the question in German. How did she get away? Now you might want to switch to English. Speakers of the two languages put different emphasis on actions and their consequences, influencing the way they think about the world, according to a new study. The work also finds that bilinguals may get the best of both worldviews, as their thinking can be more flexible. Cognitive scientists have debated whether your native language shapes how you think since the 1940s. The idea has seen a revival in recent decades, as a growing number of studies suggested that language can prompt speakers to pay attention to certain features of the world. Russian speakers are faster to distinguish shades of blue than English speakers, for example. And Japanese speakers tend to group objects by material rather than shape, whereas Koreans focus on how tightly objects fit together. Still, skeptics argue that such results are laboratory artifacts, or at best reflect cultural differences between speakers that are unrelated to language. In the new study, researchers turned to people who speak multiple languages. By studying bilinguals, “we’re taking that classic debate and turning it on its head,” says psycholinguist Panos Athanasopoulos of Lancaster University in the United Kingdom. Rather than ask whether speakers of different languages have different minds, he says, “we ask, ‘Can two different minds exist within one person?’ ” Athanasopoulos and colleagues were interested in a particular difference in how English and German speakers treat events. © 2015 American Association for the Advancement of Science
Brian Owens Our choice between two moral options might be swayed by tracking our gaze, and asking for a decision at the right moment. People asked to choose between two written moral statements tend to glance more often towards the option they favour, experimental psychologists say. More surprisingly, the scientists also claim it’s possible to influence a moral choice: asking for an immediate decision as soon as someone happens to gaze at one statement primes them to choose that option. It’s well known that people tend to look more towards the option they are going to choose when they are choosing food from a menu, says Philip Pärnamets, a cognitive scientist from Lund University in Sweden. He wanted to see if that applied to moral reasoning as well. “Moral decisions have long been considered separately from general decision-making,” he says. “I wanted to integrate them.” In a paper published today in the Proceedings of the National Academy of Sciences, Pärnamets and his colleagues explain how they presented volunteers with a series of moral statements, such as 'murder is sometimes justified,' 'masturbating with the aid of a willing animal is acceptable' and 'paying taxes is a good thing.' Then the psychologists tracked the volunteers’ gaze as two options appeared on a screen. Once the tracker had determined that a person had spent at least 750 milliseconds looking at one answer and 250 milliseconds at the other, the screen changed to prompt them to make a decision. Almost 60% of the time, they chose the most viewed option — indicating, says Pärnamets, that eye gaze tracks an unfolding moral decision. © 2015 Nature Publishing Group
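The trigger rule described above (prompt for a decision once one option has accumulated roughly 750 milliseconds of gaze and the other 250 milliseconds) can be sketched in a few lines. This is an illustrative simplification of the logic as reported, not the authors' code, and the function and variable names are invented:

```python
def should_prompt(gaze_ms_a, gaze_ms_b, long_ms=750, short_ms=250):
    """True once one option has drawn at least `long_ms` of viewing
    and the other at least `short_ms`: the condition the eye tracker
    waited for before the screen prompted an immediate choice."""
    return (gaze_ms_a >= long_ms and gaze_ms_b >= short_ms) or \
           (gaze_ms_b >= long_ms and gaze_ms_a >= short_ms)

# A trial accumulates gaze time per option from tracker samples and
# prompts the moment the condition is met:
assert should_prompt(780, 260)      # option A well viewed, B past minimum
assert not should_prompt(780, 120)  # option B barely viewed: keep waiting
```

Because the prompt tends to arrive while the participant is fixating the longer-viewed option, choices skew toward that option (about 60% of the time in the study).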
By Esther Landhuis As we age, we seem to get worse at ignoring irrelevant stimuli. It's what makes restaurant conversations challenging—having to converse while also shutting out surrounding chatter. New research bears out the aging brain's distractibility but also suggests that training may help us tune out interference. Scientists at Brown University recruited seniors and twentysomethings for a visual experiment. Presented with a sequence of letters and numbers, participants were asked to report back only the numbers—all the while disregarding a series of meaningless dots. Sometimes the dots moved randomly, but other times they traveled in a clear direction, making them harder to ignore. Older participants ended up accidentally learning the dots' patterns, based on the accuracy of their answers when asked which way the dots were moving, whereas young adults seemed able to suppress that information and focus on the numbers, the researchers reported last November in Current Biology. In a separate study published in Neuron, scientists at the University of California, San Francisco, showed they could train aging brains to become less distractible. Their regimen helped aging rats as well as older people. The researchers played three different sounds and rewarded trainees for identifying a target tone while ignoring distracter frequencies. As the subjects improved, the task grew more challenging—the distracting tone became harder to discriminate from the target. © 2015 Scientific American
Robin Tricoles The first time it happened, I was 8. I was tucked in bed reading my favorite book when my tongue swelled up to the size of a cow’s, like the giant tongues I had seen in the glass display case at the neighborhood deli. At the same time, the far wall of my bedroom began to recede, becoming a tiny white rectangle floating somewhere in the distance. In the book I was holding, the typeface grew vast on the page. I was intrigued, I remember, but not afraid. Over the next six years, the same thing happened to me dozens of times. Forty years later, while working as a science writer, I stumbled on a scientific paper describing almost exactly what I had experienced. The paper attributed those otherworldly sensations to something called Alice in Wonderland syndrome, or its close cousin, Alice in Wonderland-like syndrome. People with Alice in Wonderland syndrome (AWS) perceive parts of their body to be changing size. For example, their feet may suddenly appear smaller and more distant, or their hands larger than they had been moments before. Those with the closely related Alice in Wonderland-like syndrome (AWLS) misperceive the size and distance of objects, seeing them as startlingly larger, smaller, fatter, or thinner than their natural state. People who experience both sensations, like I did, are classified as having AWLS. The syndrome’s name is commonly attributed to English psychiatrist John Todd, who in 1955 described his adult patients’ illusions of corporal and objective distortions in a paper in the Canadian Medical Association Journal. © 2015 by The Atlantic Monthly Group.
By TIMOTHY WILLIAMS In January 1972, Cecil Clayton was cutting wood at his family’s sawmill in southeastern Missouri when a piece of lumber flew off the circular saw blade and struck him in the forehead. The impact caved in part of Mr. Clayton’s skull, driving bone fragments into his brain. Doctors saved his life, but in doing so had to remove 20 percent of his frontal lobe, which psychiatrists say led Mr. Clayton to be tormented for years by violent impulses, schizophrenia and extreme paranoia. In 1996, his lawyers say, those impulses drove Mr. Clayton to kill a law enforcement officer. Today, as Mr. Clayton, 74, sits on death row, his lawyers have returned to that 1972 sawmill accident in a last-ditch effort to save his life, arguing that Missouri’s death penalty law prohibits the execution of severely brain-damaged people. Lawyers for Mr. Clayton, who has an I.Q. of 71, say he should be spared because his injury has made it impossible for him to grasp the significance of his death sentence, scheduled for March 17. “There was a profound change in him that he doesn’t understand, and neither did his family,” said Elizabeth Unger Carlyle, one of Mr. Clayton’s lawyers. While several rulings by the United States Supreme Court in recent years have narrowed the criteria for executing people who have a mental illness, states continue to hold wide sway in establishing who is mentally ill. The debate surrounding Mr. Clayton involves just how profoundly his impairment has affected his ability to understand what is happening to him. Mr. Clayton is missing about 7.7 percent of his brain. © 2015 The New York Times Company
By Christof Koch In the Dutch countryside, a tall, older man, dressed in a maroon sports coat, his back slightly stooped, stands out because of his height and a pair of extraordinarily bushy eyebrows. His words, inflected by a British accent, are directed at a middle-aged man with long, curly brown hair, penetrating eyes and a dark, scholarly gown, who talks in only a halting English that reveals his native French origins. Their strangely clashing styles of speaking and mismatched clothes do not seem to matter to them as they press forward, with Eyebrows peering down intently at the Scholar. There is something distinctly odd about the entire meeting—a crossing of time, place and disciplines.

Eyebrows: So I finally meet the man who doubts everything.

The Scholar: (not missing a beat) At this time, I admit nothing that is not necessarily true. I'm famous for that!

Eyebrows: Is there anything that you are certain of? (sotto voce) Besides your own fame?

The Scholar: (evading the sarcastic jibe) I can't be certain of my fame. Indeed, I can't even be certain that there is a world out there, for I could be dreaming or hallucinating it. I can't be certain about the existence of my own body, its shape and extension, its corporality, for again I might be fooling myself. But now what am I, when I suppose that there is some supremely powerful and, if I may be permitted to say so, malicious deceiver who deliberately tries to fool me in any way he can? Given this evil spirit, how do I know that my sensations about the outside world—that is, it looks, feels and smells in a particular way—are not illusions, conjured up by Him to deceive me? It seems to me that therefore I can never know anything truly about the world. Nothing, rien du tout. I have to doubt everything. © 2015 Scientific American
By Neuroskeptic In an interesting short paper just published in Trends in Cognitive Sciences, Caltech neuroscientist Ralph Adolphs offers his thoughts on The Unsolved Problems of Neuroscience. Here’s Adolphs’ list of the top 23 questions (including 3 “meta” issues), which, he says, was inspired by Hilbert’s famous set of 23 mathematical problems:

Problems that are solved, or soon will be:
I. How do single neurons compute?
II. What is the connectome of a small nervous system, like that of Caenorhabditis elegans (300 neurons)?
III. How can we image a live brain of 100,000 neurons at cellular and millisecond resolution?
IV. How does sensory transduction work?

Problems that we should be able to solve in the next 50 years:
V. How do circuits of neurons compute?
VI. What is the complete connectome of the mouse brain (70,000,000 neurons)?
VII. How can we image a live mouse brain at cellular and millisecond resolution?
VIII. What causes psychiatric and neurological illness?
IX. How do learning and memory work?
X. Why do we sleep and dream?
XI. How do we make decisions?
XII. How does the brain represent abstract ideas?

Problems that we should be able to solve, but who knows when:
XIII. How does the mouse brain compute?
XIV. What is the complete connectome of the human brain (80,000,000,000 neurons)?
XV. How can we image a live human brain at cellular and millisecond resolution?
XVI. How could we cure psychiatric and neurological diseases?
XVII. How could we make everybody’s brain function best?

Problems we may never solve:
XVIII. How does the human brain compute?
XIX. How can cognition be so flexible and generative?
XX. How and why does conscious experience arise?

Meta-questions:
XXI. What counts as an explanation of how the brain works? (and which disciplines would be needed to provide it?)
XXII. How could we build a brain? (how do evolution and development do it?)
XXIII. What are the different ways of understanding the brain? (what is function, algorithm, implementation?)
Adolphs R (2015). The unsolved problems of neuroscience. Trends in Cognitive Sciences. PMID: 25703689
By Adam Rogers The fact that a single image could polarize the entire Internet into two aggressive camps is, let’s face it, just another Thursday. But for the past half-day, people across social media have been arguing about whether a picture depicts a perfectly nice bodycon dress as blue with black lace fringe or white with gold lace fringe. And neither side will budge. This fight is about more than just social media—it’s about primal biology and the way human eyes and brains have evolved to see color in a sunlit world. Light enters the eye through the lens—different wavelengths corresponding to different colors. The light hits the retina in the back of the eye where pigments fire up neural connections to the visual cortex, the part of the brain that processes those signals into an image. Critically, though, that first burst of light is made of whatever wavelengths are illuminating the world, reflecting off whatever you’re looking at. Without you having to worry about it, your brain figures out what color light is bouncing off the thing your eyes are looking at, and essentially subtracts that color from the “real” color of the object. “Our visual system is supposed to throw away information about the illuminant and extract information about the actual reflectance,” says Jay Neitz, a neuroscientist at the University of Washington. “But I’ve studied individual differences in color vision for 30 years, and this is one of the biggest individual differences I’ve ever seen.” (Neitz sees white-and-gold.) Usually that system works just fine. This image, though, hits some kind of perceptual boundary. That might be because of how people are wired. Human beings evolved to see in daylight, but daylight changes color. WIRED.com © 2015 Condé Nast
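The "subtraction" Neitz describes can be illustrated with a toy von Kries-style correction, in which each color channel is divided by the visual system's assumed illuminant. This is a hedged sketch of the general idea only, not a model of the visual system, and the pixel and illuminant values below are invented:

```python
def discount_illuminant(pixel_rgb, illuminant_rgb):
    """Approximate color constancy: divide each channel by the
    assumed illuminant, 'removing' the light source from the pixel."""
    return tuple(min(1.0, p / i) for p, i in zip(pixel_rgb, illuminant_rgb))

# One ambiguous bluish pixel, two different assumed illuminants:
ambiguous = (0.55, 0.55, 0.70)
print(discount_illuminant(ambiguous, (0.80, 0.80, 1.00)))  # assume bluish daylight: result near-neutral ("white")
print(discount_illuminant(ambiguous, (1.00, 1.00, 0.85)))  # assume warm light: blue channel dominates ("blue")
```

Two viewers whose visual systems settle on different illuminant estimates can thus report different colors for identical pixels, which is one reading of the dress dispute.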
by Helen Thomson We meet in a pub, we have a few drinks, some dinner and then you lean in for a kiss. You predict, based on our previous interactions, that the kiss will be reciprocated – rather than landing you with a slap in the face. All our social interactions require us to anticipate another person's undecided intentions and actions. Now, researchers have discovered specific brain cells that allow monkeys to do this. It is likely that the cells do the same job in humans. Keren Haroush and Ziv Williams at Harvard Medical School trained monkeys to play a version of the prisoner's dilemma, a game used to study cooperation. The monkeys sat next to each other and decided whether or not they wanted to cooperate with their companion, by moving a joystick to pick either option. Moving the joystick towards an orange circle meant cooperate, a blue triangle meant "not this time". Neither monkey could see the other's face, or receive any clues about their planned action. If the monkeys cooperated, both received four drops of juice. If one cooperated and the other decided not to, the one who cooperated received one drop, and the other received six drops of juice. If both declined to work together they both received two drops of juice. Once both had made their selections, they could see what the other monkey had chosen and hear the amount of juice their companion was enjoying. © Copyright Reed Business Information Ltd.
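The juice payoffs described above form a textbook prisoner's dilemma matrix. A minimal transcription (the drop counts come from the article; the code itself is only an illustration):

```python
# Payoffs in drops of juice, keyed by (my_choice, partner_choice);
# True = cooperate, False = decline. Each tuple is (my drops, partner's drops).
PAYOFFS = {
    (True, True):   (4, 4),  # both cooperate
    (True, False):  (1, 6),  # I cooperate, partner declines
    (False, True):  (6, 1),  # I decline, partner cooperates
    (False, False): (2, 2),  # both decline
}

def my_juice(i_cooperate, partner_cooperates):
    """Drops of juice I receive for this combination of choices."""
    return PAYOFFS[(i_cooperate, partner_cooperates)][0]
```

Declining pays more whatever the partner does (6 > 4 and 2 > 1), yet mutual cooperation beats mutual declining (4 > 2). That tension is what makes anticipating a partner's still-undecided choice valuable, and what the newly reported neurons appear to compute.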
Elizabeth Gibney DeepMind, the Google-owned artificial-intelligence company, has revealed how it created a single computer algorithm that can learn how to play 49 different arcade games, including the 1970s classics Pong and Space Invaders. In more than half of those games, the computer became skilled enough to beat a professional human player. The algorithm — which has generated a buzz since publication of a preliminary version in 2013 (V. Mnih et al. Preprint at http://arxiv.org/abs/1312.5602; 2013) — is the first artificial-intelligence (AI) system that can learn a variety of tasks from scratch given only the same, minimal starting information. “The fact that you have one system that can learn several games, without any tweaking from game to game, is surprising and pretty impressive,” says Nathan Sprague, a machine-learning scientist at James Madison University in Harrisonburg, Virginia. DeepMind, which is based in London, says that the brain-inspired system could also provide insights into human intelligence. “Neuroscientists are studying intelligence and decision-making, and here’s a very clean test bed for those ideas,” says Demis Hassabis, co-founder of DeepMind. He and his colleagues describe the gaming algorithm in a paper published this week (V. Mnih et al. Nature 518, 529–533; 2015). Games are to AI researchers what fruit flies are to biology — a stripped-back system in which to test theories, says Richard Sutton, a computer scientist who studies reinforcement learning at the University of Alberta in Edmonton, Canada. “Understanding the mind is an incredibly difficult problem, but games allow you to break it down into parts that you can study,” he says. But so far, most human-beating computers — such as IBM’s Deep Blue, which beat chess world champion Garry Kasparov in 1997, and the recently unveiled algorithm that plays Texas Hold ’Em poker essentially perfectly (see Nature http://doi.org/2dw; 2015)—excel at only one game. © 2015 Nature Publishing Group
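DeepMind's system couples a deep neural network with Q-learning, a reinforcement-learning rule that nudges the estimated value of an action toward the reward received plus the value of the best follow-up action. A minimal tabular sketch of that rule (the published agent approximates the table with a learned network; the state and action names here are invented):

```python
import random
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.99):
    """One Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """Mostly pick the best-known action, occasionally explore at random."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

ACTIONS = ["left", "right", "fire"]
Q = defaultdict(float)  # all action values start at zero
q_update(Q, "s0", "fire", reward=1.0, next_state="s1", actions=ACTIONS)
```

Given only pixels as input and the game score as reward, many repetitions of this update, with the table replaced by a deep network, were enough for the agent to reach or beat human-level play on more than half of the 49 games.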
People with attention deficit hyperactivity disorder are about twice as likely to die prematurely as those without the disorder, say researchers. Researchers followed 1.92 million Danes, including 32,000 with ADHD, from birth through to 2013. "In this nationwide prospective cohort study with up to 32-year followup, children, adolescents and adults with ADHD had decreased life expectancy and more than double the risk of death compared with people without ADHD," Soren Dalsgaard, from Aarhus University in Denmark, and his co-authors concluded in Wednesday's online issue of the Lancet. "People diagnosed with ADHD in adulthood had a greater risk of death than did those diagnosed in childhood and adolescence. This finding could be caused by persistent ADHD being a more severe form of the disorder." Of the 107 individuals with ADHD who died, information on cause of death was available for 79. Of those, 25 died from natural causes and 54 from unnatural causes, including 42 from accidents. Being diagnosed with ADHD along with oppositional defiant disorder, conduct disorder and substance use disorder also increased the risk of death, the researchers found. Mortality risk was also higher for females than males, which led Dalsgaard to stress the need for early diagnosis, especially in girls and women, and to treat co-existing disorders. 
Although talk of premature death will worry parents and patients, they can seek solace in knowing the absolute risk of premature death at an individual level is low and can be greatly reduced with treatment, Stephen Faraone, a professor of psychiatry and director of child and adolescent psychiatry research at SUNY Upstate Medical University in New York, said in a journal commentary published with the study. ©2015 CBC/Radio-Canada.
By Matthew Hutson We like to think of our moral judgments as consistent, but they can be as capricious as moods. Research reveals that such judgments are swayed by incidental emotions and perceptions—for instance, people become more moralistic when they feel dirty or sense contamination, such as in the presence of moldy food. Now a series of studies shows that hippies, the obese and “trailer trash” suffer prejudicial treatment because they tend to elicit disgust. Researchers asked volunteers to read short paragraphs about people committing what many consider to be impure acts, such as watching pornography, swearing or being messy. Some of the paragraphs described the individuals as being a hippie, obese or trailer trash—and the volunteers judged these fictional sinners more harshly, according to the paper in the Journal of Experimental Psychology: General. Questionnaires revealed that feelings of disgust toward these groups were driving the volunteers' assessments. A series of follow-up studies solidified the link, finding that these groups also garnered greater praise for purity-related virtues, such as keeping a neat cubicle. If the transgression in question did not involve purity, such as not tipping a waiter, the difference in judgment disappeared. “The assumption people have is that we draw on values that are universal and important,” says social psychologist E. J. Masicampo of Wake Forest University, who led the study, “but something like mentioning that a person is overweight can really push that judgment around. It's triggering these gut-level emotions.” The researchers also looked for real-world effects. © 2015 Scientific American
By Christie Aschwanden Paul Offit likes to tell a story about how his wife, pediatrician Bonnie Offit, was about to give a child a vaccination when the kid was struck by a seizure. Had she given the injection a minute sooner, Paul Offit says, it would surely have appeared as though the vaccine had caused the seizure and probably no study in the world would have convinced the parent otherwise. (The Offits have such studies at the ready — Paul is the director of the Vaccine Education Center at the Children’s Hospital of Philadelphia and author of “Deadly Choices: How the Anti-Vaccine Movement Threatens Us All.”) Indeed, famous anti-vaxxer Jenny McCarthy has said her son’s autism and seizures are linked to “so many shots” because vaccinations preceded his symptoms. But, as Offit’s story suggests, the fact that a child became sick after a vaccine is not strong evidence that the immunization was to blame. Psychologists have a name for the cognitive bias that makes us prone to assigning a causal relationship to two events simply because they happened one after the other: the “illusion of causality.” A study recently published in the British Journal of Psychology investigates how this illusion influences the way we process new information. Its finding: Causal illusions don’t just cement erroneous ideas in the mind; they can also prevent new information from correcting them. Helena Matute, a psychologist at Deusto University in Bilbao, Spain, and her colleagues enlisted 147 college students to take part in a computer-based task in which they each played a doctor who specializes in a fictitious rare disease and assessed whether new medications could cure it. ©2015 ESPN Internet Ventures.
Tom Stafford Trusting your instincts may help you to make better decisions than thinking hard, a study suggests. It is a common misconception that we know our own minds. As I move around the world, walking and talking, I experience myself thinking thoughts. "What shall I have for lunch?", I ask myself. Or I think, "I wonder why she did that?" and try and figure it out. It is natural to assume that this experience of myself is a complete report of my mind. It is natural, but wrong. There's an under-mind, all psychologists agree – an unconscious which does a lot of the heavy lifting in the process of thinking. If I ask myself what is the capital of France the answer just comes to mind – Paris! If I decide to wiggle my fingers, they move back and forth in a complex pattern that I didn't consciously prepare, but which was delivered for my use by the unconscious. The big debate in psychology is exactly what is done by the unconscious, and what requires conscious thought. Or to use the title of a notable paper on the topic, 'Is the unconscious smart or dumb?' One popular view is that the unconscious can prepare simple stimulus-response actions, deliver basic facts, recognise objects and carry out practised movements. Complex cognition involving planning, logical reasoning and combining ideas, on the other hand, requires conscious thought. A recent experiment by a team from Israel scores points against this position. Ran Hassin and colleagues used a neat visual trick called Continuous Flash Suppression to put information into participants’ minds without them becoming consciously aware of it.
Carl Zimmer In 2010, a graduate student named Tamar Gefen got to know a remarkable group of older people. They had volunteered for a study of memory at the Feinberg School of Medicine at Northwestern University. Although they were all over age 80, Ms. Gefen and her colleagues found that they scored as well on memory tests as people in their 50s. Some complained that they remembered too much. She and her colleagues referred to them as SuperAgers. Many were also friends. “A couple tried to set me up with their grandsons,” Ms. Gefen said. She was impressed by their resilience and humor: “It takes wisdom to a whole new level.” Recently, Ms. Gefen’s research has taken a sharp turn. At the outset of the study, the volunteers agreed to donate their brains for medical research. Some of them have died, and it has been Ms. Gefen’s job to look for anatomical clues to their extraordinary minds. “I had this enormous privilege I can’t even begin to describe,” she said. “I knew them and tested them in life and in death. At the end, I was the one looking at them through a microscope.” Ms. Gefen and her colleagues are now starting to publish the results of these post-mortem studies. Last month in The Journal of Neuroscience, the scientists reported that one of the biggest differences involves peculiar, oversize brain cells known as von Economo neurons. SuperAgers have almost five times as many of them as other people. Learning what makes these brains special could help point researchers to treatments for Alzheimer’s disease and other kinds of mental decline. But it is hard to say how an abundance of von Economo neurons actually helps the brain. © 2015 The New York Times Company
By Virginia Morell To prevent their hives from being attacked by invaders, wasps must quickly distinguish friend from foe. They typically do this by sniffing out foreigners, as outsiders tend to have a different scent than the home colony. Now researchers have discovered that, like a few other wasp species, a tiny social wasp (Liostenogaster flavolineata) from Malaysia employs an additional security measure: facial recognition. The wasps’ nests are typically found in large aggregations with as many as 150 built close together, and each colony faces persistent landing attempts by outsiders from these other nests. To find out why and how these wasps employ both vision and scent to determine if an incoming wasp is a comrade, scientists carried out a series of experiments on 50 colonies (see photo above) in the wild. Close to the nests, the researchers dangled lures made of captured and killed wasps. The lures had been given different treatments. For instance, some lures made from nest mates were coated with a foe’s scent, whereas outsiders were painted with the colony’s odor. The wasps, it turns out, pay more attention to facial markings than to scent when faced with a possible intruder, the team reports online today in the Proceedings of the Royal Society B. Indeed, in tests where the wasps could assess both an intruder’s face and scent, they relied solely on facial recognition and immediately attacked those whose faces they didn’t know, ignoring their odor. That’s the safest strategy, the scientists note, because the wasps can recognize another’s face at a distance, but need to actually touch another wasp to detect her scent—not a bad ploy for a tiny-brained insect. © 2015 American Association for the Advancement of Science
By Katherine Ellison Dr. Mark Bertin is no A.D.H.D. pill-pusher. The Pleasantville, N.Y., developmental pediatrician won’t allow drug marketers in his office, and says he doesn’t always prescribe medication for children diagnosed with attention deficit hyperactivity disorder. Yet Dr. Bertin has recently changed the way he talks about medication, offering parents a powerful argument. Recent research, he says, suggests the pills may “normalize” the child’s brain over time, rewiring neural connections so that a child would feel more focused and in control, long after the last pill was taken. “There might be quite a profound neurological benefit,” he said in an interview. A growing number of doctors who treat the estimated 6.4 million American children diagnosed with A.D.H.D. are hearing that stimulant medications not only help treat the disorder but may actually be good for their patients’ brains. In an interview last spring with Psych Congress Network, an Internet news site for mental health professionals (http://www.psychcongress.com/video/are-A.D.H.D.-medications-neurotoxic-or-neuroprotective-16223), Dr. Timothy Wilens, chief of child and adolescent psychiatry at Massachusetts General Hospital, said “we have enough data to say they’re actually neuroprotective.” The pills, he said, help “normalize” the function and structure of brains in children with A.D.H.D., so that, “over years, they turn out to look more like non-A.D.H.D. kids.” Medication is already by far the most common treatment for A.D.H.D., with roughly 4 million American children taking the pills — mostly stimulants, such as amphetamines and methylphenidate. Yet the decision can be anguishing for parents who worry about both short-term and long-term side effects. If the pills can truly produce long-lasting benefits, more parents might be encouraged to start their children on these medications early and continue them for longer. Leading A.D.H.D. experts, however, warn the jury is still out. 
© 2015 The New York Times Company
By Esther Landhuis One in nine Americans aged 65 and older has Alzheimer's disease, a fatal brain disorder with no cure or effective treatment. Therapy could come in the form of new drugs, but some experts suspect drug trials have failed so far because compounds were tested too late in the disease's progression. By the time people show signs of dementia, their brains have lost neurons. No therapy can revive dead cells, and little can be done to create new ones. So researchers running trials now seek participants who still pass as cognitively normal but are on the verge of decline. These “preclinical” Alzheimer's patients may represent a window of opportunity for therapeutic intervention. How to identify such individuals before they have symptoms presents a challenge, however. Today most Alzheimer's patients are diagnosed after a detailed medical workup and extensive tests that gauge mental function. Other tests, such as spinal fluid analyses and positron-emission tomography (PET) scans, can detect signs of approaching disease and help pinpoint the preclinical window but are cumbersome or expensive. “There's no cheap, fast, noninvasive test that can identify people at risk of Alzheimer's,” says Brad Dolin, chief technology officer of Neurotrack in Palo Alto, Calif.—a company developing a computerized visual screening test for Alzheimer's. Unlike other cognitive batteries, the Neurotrack test requires no language or motor skills. Participants view images on a monitor while a camera tracks their eye movements. The test draws on research by co-founder Stuart Zola of Emory University, who studies learning and memory in monkeys. When presented with a pair of images—one novel, the other familiar—primates fixate longer on the novel one. But if the hippocampus is damaged, as it is in people with Alzheimer's, the subject does not show a clear preference for the novel images. © 2015 Scientific American