Chapter 14. Attention and Consciousness
Inner-ear problems could be a cause of hyperactive behaviour, research suggests. A study on mice, published in Science, said such problems caused changes in the brain that led to hyperactivity. It could lead to the development of new targets for behaviour disorder treatments, the US team says. A UK expert said the study's findings were "intriguing" and should be investigated further. Behavioural problems such as ADHD are usually thought to originate in the brain. But scientists have observed that children and teenagers with inner-ear disorders - especially those that affect hearing and balance - often have behavioural problems. However, no causal link has been found. The researchers in this study suggest inner-ear disorders lead to problems in the brain which then also affect behaviour. The team from the Albert Einstein College of Medicine of Yeshiva University in New York noticed some mice in the lab were particularly active - constantly chasing their tails. They were found to be profoundly deaf and have disorders of the inner ear - of both the cochlea, which is responsible for hearing, and the vestibular system, which is responsible for balance. The researchers found a mutation in the Slc12a2 gene, also found in humans. Blocking the gene's activity in the inner ears of healthy mice caused them to become increasingly active. BBC © 2013
by Adam Gopnik Good myths turn on simple pairs— God and Lucifer, Sun and Moon, Jerry and George—and so an author who makes a vital duo is rewarded with a long-lived audience. No one in 1900 would have thought it possible that a century later more people would read Conan Doyle’s Holmes and Watson stories than anything of George Meredith’s, but we do. And so Gene Roddenberry’s “Star Trek,” despite the silly plots and the cardboard-seeming sets, persists in its many versions because it captures a deep and abiding divide. Mr. Spock speaks for the rational, analytic self who assumes that the mind is a mechanism and that everything it does is logical, Captain Kirk for the belief that what governs our life is not only irrational but inexplicable, and the better for being so. The division has had new energy in our time: we care most about a person who is like a thinking machine at a moment when we have begun to have machines that think. Captain Kirk, meanwhile, is not only a Romantic, like so many other heroes, but a Romantic on a starship in a vacuum in deep space. When your entire body is every day dissolved, reënergized, and sent down to a new planet, and you still believe in the ineffable human spirit, you have really earned the right to be a soul man. Writers on the brain and the mind tend to divide into Spocks and Kirks, either embracing the idea that consciousness can be located in a web of brain tissue or debunking it. For the past decade, at least, the Spocks have been running the Enterprise: there are books on your brain and music, books on your brain and storytelling, books that tell you why your brain makes you want to join the Army, and books that explain why you wish that Bar Refaeli were in the barracks with you. The neurological turn has become what the “cultural” turn was a few decades ago: the all-purpose non-explanation explanation of everything. Thirty years ago, you could feel loftily significant by attaching the word “culture” to anything you wanted to inspect: we didn’t live in a violent country, we lived in a “culture of violence”; we didn’t have sharp political differences, we lived in a “culture of complaint”; and so on. In those days, Time, taking up the American pursuit of pleasure, praised Christopher Lasch’s “The Culture of Narcissism”; now Time has a cover story on happiness and asks whether we are “hardwired” to pursue it. © 2013 Condé Nast.
by Bob Holmes It's the cruel cycle of poverty. The many challenges that come with being poor can sap people's ability to think clearly, according to a new study. The findings suggest that governments should think twice before tying up social-assistance programmes in confusing red tape. Sociologists have long known that poor people are less likely to take medications, keep appointments, or be attentive parents. "Poor people make poorer decisions. They do. The question is why," says Timothy Smeeding, director of the Institute for Research on Poverty at the University of Wisconsin-Madison. But does bad decision-making help cause poverty, or does poverty interfere with decision-making? To explore this question, psychologist Eldar Shafir at Princeton University and his colleagues took advantage of a natural experiment. Small-scale sugar-cane farmers in Tamil Nadu in southern India receive most of their year's income all at once, shortly after the annual harvest. As a result, the same farmer can be poor before harvest and relatively rich after. And indeed, Shafir's team found that farmers had more loans, pawned more belongings, and reported more difficulty paying bills before the harvest than after. The researchers visited 464 farmers in 54 villages both before and after harvest. At each visit, they gave the farmers two tests of their cognitive ability: a multiple-choice pattern-matching test, and one in which they had to declare the number of digits shown rather than their value: seeing "5 5 5" but saying "three", for example. © Copyright Reed Business Information Ltd.
Brain scans of people who say they have insomnia have shown differences in brain function compared with people who get a full night's sleep. Researchers at the University of California, San Diego, said the poor sleepers struggled to focus part of their brain in memory tests. Other experts said that the brain's wiring may actually be affecting perceptions of sleep quality. The findings were published in the journal Sleep. People with insomnia struggle to sleep at night, but the condition also has consequences during the day, such as delayed reaction times and memory problems. The study compared 25 people who said they had insomnia with 25 who described themselves as good sleepers. MRI brain scans were carried out while they performed increasingly challenging memory tests. One of the researchers, Prof Sean Drummond, said: "We found that insomnia subjects did not properly turn on brain regions critical to a working memory task and did not turn off 'mind-wandering' brain regions irrelevant to the task. "This data helps us understand that people with insomnia not only have trouble sleeping at night, but their brains are not functioning as efficiently during the day." BBC © 2013
by Colin Barras Familiarity may breed contempt, and it also makes it easier to ignore our nearest and dearest. The human brain has an uncanny ability to focus on one voice in a sea of chatter, at a party for example, but exactly how it does so is still up for debate. "In the past, people have looked at the acoustic characteristics that enable the brain to do this," says Ingrid Johnsrude at Queen's University in Kingston, Ontario, Canada. "Things like differences in voice pitch or its timbre." Johnsrude and her colleagues wondered if the familiarity of the voice also plays a role. Can people focus on one voice in a crowd more effectively if it belongs to a close relation? And is a familiar voice more easily ignored if we want to listen to someone else? To find out, the team recruited 23 married couples. Each couple had been married and living together for at least 18 years. Individuals were played two sentences simultaneously and asked to report back details about one of them, such as the colour and number mentioned. They did this correctly 80 per cent of the time when their spouse spoke the target sentence and a stranger spoke the decoy sentence. If strangers spoke both, the success rate dropped to 65 per cent. © Copyright Reed Business Information Ltd
Elizabeth Norton Brain cells, like Henry Higgins in My Fair Lady, grow accustomed to a familiar face—so much so that repeatedly viewing a distorted face will make the normal face look odd. This process, known as visual adaptation, is enhanced by sleep and may be an essential component of memory, a new study finds. After multiple exposures to a striking visual pattern, neurons in the retina and visual cortex of the brain fire less frequently the next time you see the pattern. By devoting less energy to familiar sights, the brain is free to concentrate on the next new thing that comes along; the original image becomes a routine perception. Scientists think that this allocation of mental resources is crucial to our ability to perceive and interpret our surroundings. Whether visual adaptation is a prelude to memory formation is another question, one that intrigued cognitive neuroscientist Thomas Ditye of University College London. Because sleep strengthens memory, Ditye and colleagues decided to test whether visual adaptation also improves after some shuteye. The researchers asked a group of volunteers to view a computer screen on which distorted images of the faces of actors George Clooney and Angelina Jolie flashed for periods of 0.5 to 6 seconds. The images were “extended”—stretched until they achieved the blown-up look of a fun house mirror. The object of the test was to determine whether the brain would adapt to the images and begin seeing the distorted faces as normal. The volunteers, however, believing their reaction time was being tested, merely pressed a button whenever they saw the image. © 2012 American Association for the Advancement of Science
Piercarlo Valdesolo Public opinion towards science has made headlines over the past several years for a variety of reasons — mostly negative. High profile cases of academic dishonesty and disputes over funding have left many questioning the integrity and societal value of basic science, while accusations of politically motivated research fly from left and right. There is little doubt that science is value-laden. Allegiances to theories and ideologies can skew the kinds of hypotheses tested and the methods used to test them. These, however, are errors in the application of the method, not the method itself. In other words, it’s possible that public opinion towards science more generally might be relatively unaffected by the misdeeds and biases of individual scientists. In fact, given the undeniable benefits that scientific progress has yielded, associations with the process of scientific inquiry may be quite positive. Researchers at the University of California, Santa Barbara set out to test this possibility. They hypothesized that there is a deep-seated perception of science as a moral pursuit — its emphasis on truth-seeking, impartiality and rationality privileges collective well-being above all else. Their new study, published in the journal PLOS ONE, argues that the association between science and morality is so ingrained that merely thinking about it can trigger more moral behavior. The researchers conducted four separate studies to test this. The first sought to establish a simple correlation between the degree to which individuals believed in science and their likelihood of enforcing moral norms when presented with a hypothetical violation. Participants read a vignette of a date-rape and were asked to rate the “wrongness” of the offense before answering a questionnaire measuring their belief in science. Indeed, those reporting greater belief in science condemned the act more harshly. © 2013 Nature Publishing Group
Erika Check Hayden US behavioural researchers have been handed a dubious distinction — they are more likely than their colleagues in other parts of the world to exaggerate findings, according to a study published today. The research highlights the importance of unconscious biases that might affect research integrity, says Brian Martinson, a social scientist at the HealthPartners Institute for Education and Research in Minneapolis, Minnesota, who was not involved with the study. “The take-home here is that the ‘bad guy/good guy’ narrative — the idea that we only need to worry about the monsters out there who are making up data — is naive,” Martinson says. The study, published in Proceedings of the National Academy of Sciences, was conducted by John Ioannidis, a physician at Stanford University in California, and Daniele Fanelli, an evolutionary biologist at the University of Edinburgh, UK. The pair examined 82 meta-analyses in genetics and psychiatry that collectively combined results from 1,174 individual studies. The researchers compared meta-analyses of studies based on non-behavioural parameters, such as physiological measurements, to those based on behavioural parameters, such as progression of dementia or depression. The researchers then determined how well the strength of an observed result or effect reported in a given study agreed with that of the meta-analysis in which the study was included. They found that, worldwide, behavioural studies were more likely than non-behavioural studies to report ‘extreme effects’ — findings that deviated from the overall effects reported by the meta-analyses. And US-based behavioural researchers were more likely than behavioural researchers elsewhere to report extreme effects that deviated in favour of their starting hypotheses. © 2013 Nature Publishing Group
By Laura Sanders Despite the adage, there actually is such a thing as bad publicity, a fact that brain scientists have lately discovered. A couple of high-profile opinion pieces in the New York Times have questioned the usefulness of neuroscience, claiming, as columnist David Brooks did in June, that studying brain activity will never reveal the mind. Or that neuroscience is a pesky distraction from solving real social problems, as scholar Benjamin Fong wrote on August 11. Let’s start with Brooks. Some of his complaints about brain scans, with their colorful blobs lighting up active parts of the brain, are quite legitimate. Functional MRI studies are notoriously difficult to make sense of. In fact, this powerful technology has been used to find brain activity in a dead salmon. Dubious fMRI studies do trickle into the hands of sensationalistic journalists, medical hucksters and marketers, who twist the results into self-serving sound bites. All true. But Brooks’ essay conflates the entire field of neuroscience with some bad seeds. Some studies should never have been done, others mislead people, waste resources and sensationalize their results. But for every one of those studies, countless others tell us something important about how the human brain works. Serious scientists use a huge variety of techniques — yes, even fMRI — responsibly, and interpret their results cautiously. Judging the whole enterprise of neuroscience by its weakest studies is disingenuous. There is bad science, just like there’s bad food, bad music and bad TV. Trashing all brain research because a tiny bit of it stinks is like throwing your new flat screen off a balcony because you accidentally turned on Jersey Shore. © Society for Science & the Public 2000 - 2013
By Susan Gaidos If you’re someone who enjoys being recognized, Julian Lim is your kind of waiter. Lim, who’s working his way through college waiting tables, remembers the face of everyone who walks through the door of the South Bend, Ind., restaurant where he works. His abilities go beyond making his customers feel special. This spring, when he cut his hand on broken glass, he pegged the emergency room nurse as a fellow student from his grade school days. Though they’d never spoken, and the girl had since undergone changes in appearance, Lim recognized her instantly. Carrie Shanafelt is good with faces, too. A professor of literature at Grinnell College in Iowa, Shanafelt can spot her students outside the classroom, whether it’s the first week of class or years later. And Ajay Jansari, an information technology specialist in London, often has to see a face only once to remember it, even those he meets thousands of miles from home. While some people say they never forget a face, these folks have scientific studies to back their claims. Called “super recognizers,” they’re among a small group of individuals being studied by scientists at Dartmouth College and in England to better understand how some people can recognize almost every face they have ever seen. Scientists are now putting super recognizers’ skills to the test to get a handle on how face-processing areas of the brain work to make a few people so adept at recalling faces. Findings from the studies may advance understanding of how most people categorize faces — a subject that is still poorly understood. © Society for Science & the Public 2000 - 2013
Posted by Dr. Sushrut Jangi The child's family and physician were making decisions about how to treat this disease. Many readers voted that starting an ADHD medication and behavioral therapy together might be a good way forward. Her doctor agrees with this approach. "A lot of judgement happens the day I talk about starting medicines for young children," Dr. Chan says. Most parents have already tried numerous other routes, such as behavioral therapy, which is frequently recommended first. But behavioral therapy alone is hard to implement. "It's hard to access and there's not too many families who can actually carry it out," Chan says. "If you're a single parent working multiple jobs, it's really hard to fit the time to take your child regularly. It's a huge time investment." J's parents tried the behavioral therapy route and they worked hard at it. But he wasn't improving. Dr. Chan is more than familiar with the culture of fear that surrounds ADHD medications, but she feels these fears are overinflated. Consequently, children who might benefit from being on medicine get delayed treatments, which can have harmful social effects. "Children in his class already know that he's different, so they react to him differently. Children with ADHD start getting negative feedback from their peers early on." Dr. Chan feels that this is one potential justification for starting medications early. "These medicines can help children get out of cycles of negative feedback. And we're not condemning children to medicine for the rest of their lives. They can be started as a trial, and then stopped down the line." © 2013 NY Times Co.
By Jessica Shugart People who need sugary snacks to stay sharp throughout the day could be prisoners of their own beliefs. The brain works just fine without regular shots of sugar in people who believe their willpower is unlimited, a new study shows. “There's a dominant theory in psychology that willpower is limited, and whenever you exert yourself to do a hard task or to resist a temptation, you deplete this limited resource,” says psychologist Carol Dweck from Stanford University. Previous studies have shown that mental exertion diminishes blood glucose levels and that a person’s willpower can be rejuvenated by ingesting a sugary drink. But Dweck’s earlier work led her to suspect that people’s attitudes about willpower may be responsible for that effect. In the new study, published online August 19 in the Proceedings of the National Academy of Sciences, Dweck, along with colleagues at the University of Zurich in Switzerland, focused on how attitudes about willpower may shape a person’s sugar dependence in the face of a challenge. The scientists also tested whether altering these beliefs might liberate a person from such a calorie-rich requirement. In the first of three experiments, the researchers asked students about their attitudes on willpower, then gave them lemonade sweetened with either sugar or a sugar substitute. Ten minutes after downing the sweet beverage, the students took tests of self-control and mental acuity. The students who subscribed to a self-generating belief about unlimited willpower scored equally well whether their drinks contained sugar or not. But the students who felt willpower was limited needed sugar to perform as well as the other group did. © Society for Science & the Public 2000 - 2013
By Scott Barry Kaufman So yea, you know how the left brain is really realistic, analytical, practical, organized, and logical, and the right brain is so darn creative, passionate, sensual, tasteful, colorful, vivid, and poetic? No. Just no. Stop it. Please. Thoughtful cognitive neuroscientists such as Rex Jung, Darya Zabelina, Andreas Fink, John Kounios, Mark Beeman, Kalina Christoff, Oshin Vartanian, Jeremy Gray, Hikaru Takeuchi and others are on the forefront of investigating what actually happens in the brain during the creative process. And their findings are overturning conventional notions surrounding the neuroscience of creativity. The latest findings from the real neuroscience of creativity suggest that the right brain/left brain distinction is not the right one when it comes to understanding how creativity is implemented in the brain. Creativity does not involve a single brain region or single side of the brain. Instead, the entire creative process – from the initial burst of inspiration to the final polished product – consists of many interacting cognitive processes and emotions. Depending on the stage of the creative process, and what you’re actually attempting to create, different brain regions are recruited to handle the task. Importantly, many of these brain regions work as a team to get the job done, and many recruit structures from both the left and right side of the brain. In recent years, evidence has accumulated suggesting that “cognition results from the dynamic interactions of distributed brain areas operating in large-scale networks.” © 2013 Scientific American
by Sara Reardon It can be nearly impossible to know what is happening in the mind of someone who has experienced a severe brain injury, but two new methods could offer some clues. Together, they provide not only a better indication of consciousness but also a more effective way to communicate with some vegetative people. The way that a seemingly unconscious person behaves does not always reflect their mental state. Someone in a completely vegetative state may still be able to smile simply through reflex, while a perfectly alert person may be left unable to do so if a brain injury has affected their ability to move. So a different way to assess mental state is needed. Marcello Massimini at the University of Milan in Italy and his colleagues have developed a possible solution by stimulating brains with an electromagnetic pulse and then measuring the response. The pulse acts like striking a bell, they say, and neurons across the entire brain continue to "ring" in a specific wave pattern, depending on how active the connections between individual brain cells are. The team used this method to assess 20 people with brain injuries who were either in a vegetative state, in a minimally conscious state, or in the process of emerging from a coma. The team compared the patterns from these people with the patterns recorded from 32 healthy people who were awake, asleep or under anaesthesia. In each of the distinct states of consciousness, the researchers found, the neurons "shook" in a distinctive pattern in response to the electromagnetic pulse. © Copyright Reed Business Information Ltd
Kelly Servick Consciousness isn’t easy to define, but we know it when we experience it. It’s not so simple to decide when someone else is conscious, however, as doctors must sometimes do with patients who have suffered traumatic brain injury. Now, researchers have come up with an approach that uses the brain’s response to magnetic stimulation to judge a person’s awareness, reducing it to a numerical score they call an index of consciousness. “You’re kind of banging on the brain and listening to the echo,” says Anil Seth, a neuroscientist at the Sackler Centre for Consciousness Science at the University of Sussex in the United Kingdom who was not involved in the work. Faced with an unresponsive patient, clinicians do their best to determine whether the person is conscious. Through sound, touch, and other stimuli, they try to provoke verbal responses, slight finger movements, or just a shifting gaze. Yet some conscious patients simply can’t move or speak; an estimated 40% of those initially judged to be completely unaware are later found to have some level of consciousness. Recently, physicians seeking to resolve a patient’s conscious state have gone right to the source, searching for signs of awareness using brain imaging or recording electrical activity of neurons. Most of these approaches define a conscious brain as an integrated brain, where groups of cells in many different regions activate to form a cohesive pattern, explains Marcello Massimini, a neurophysiologist at the University of Milan in Italy. “But that’s not enough,” he says. Sometimes even an unconscious brain looks highly integrated. For example, stimulating the brain of a sleeping person can create a huge wave of activity that “propagates like a ripple in water.” It’s a highly synchronized, widespread pattern, but it’s not consciousness, he says, and so this measure is often unreliable for diagnosis. © 2012 American Association for the Advancement of Science.
By KATIE THOMAS The first test for a new sleep drug is — unsurprisingly — how safely it puts people to sleep. Now comes a second test: how safely it lets people wake up. The Food and Drug Administration is taking heightened interest in the issue, as new evidence suggests what many people have long suspected: the effects of common prescription sleep aids like Ambien can persist well into the next day. Of particular concern is whether people who take the drugs before bed can drive safely the next morning. Consumer advocates have warned for years about possible links between sleep drugs and car accidents. In one prominent example, Kerry Kennedy, the former wife of Gov. Andrew M. Cuomo, was arrested last year after tests showed she had taken a sleep aid before swerving her car into a tractor-trailer. The F.D.A.’s actions are part of a robust national conversation about how to cope with the throngs of drivers who take to roads every day under the influence of prescription drugs. Law enforcement authorities have struggled with how to prosecute those who are impaired, especially when they have a prescription. A government survey in 2007 found that nearly 5 percent of daytime drivers tested positive for prescription or over-the-counter medications. Doctors wrote close to 60 million prescriptions for sleep aids in the United States last year, according to the research firm IMS Health, but experts say testing how these drugs affect driving is not easy. Nonetheless, the F.D.A. has been unusually active. Last month, it rejected an application by Merck to approve a new sleep drug, suvorexant, in part because tests showed that some people had trouble driving the next day. In May, the agency warned patients taking common allergy drugs like Benadryl against driving, noting that the sedating effects can sometimes last into the following day. In January, citing similar concerns, the F.D.A. took the unusual step of requiring that all manufacturers of zolpidem, the generic name of Ambien, cut in half the dosage for women. © 2013 The New York Times Company
By Laura Sanders Seeing people of different races early in life may sculpt the developing brain, a new study suggests. Children who spent infancy in Chinese or Russian orphanages with little contact from outsiders had difficulty perceiving emotions on faces of people of unfamiliar races. These children also showed heightened brain responses to faces of unfamiliar races. “This new study is unique in that it for the first time tells us that early exposure to faces of different races is important,” says psychologist Kang Lee of the University of Toronto. “The lack of such exposure can have long-lasting effects.” Although the results, published in the Aug. 14 Journal of Neuroscience, suggest that race shapes the brain during infancy, the study can’t say what such a brain change might mean, says study coauthor Eva Telzer of the University of Illinois at Urbana-Champaign. “Our findings do not say anything about children’s behavior in their daily life.” Telzer and her colleagues studied one of the few populations that could help reveal these effects: orphans who the researchers believe lived amid a single race of people early in life. Most of these 36 children spent time in Russian or Chinese orphanages and were later adopted by American families of European descent. On average, the kids were adopted when they were 2 to 3 years old and were between 6 and 16 years old at the time of the study. © Society for Science & the Public 2000 - 2013
by Sara Reardon It's a case of hear no object, see no object. Hearing the name of an object appears to influence whether or not we see it, suggesting that hearing and vision might be even more intertwined than previously thought. Studies of how the brain files away concepts suggest that words and images are tightly coupled. What is not clear, says Gary Lupyan of the University of Wisconsin in Madison, is whether language and vision work together to help you interpret what you're seeing, or whether words can actually change what you see. Lupyan and Emily Ward of Yale University used a technique called continuous flash suppression (CFS) on 20 volunteers to test whether a spoken prompt could make them detect an image that they were not consciously aware they were seeing. CFS works by displaying different images to the right and left eyes: one eye might be shown a simple shape or an animal, for example, while the other is shown visual "noise" in the form of bright, randomly flickering shapes. The noise monopolises the brain, leaving so little processing power for the other image that the person does not consciously register it, making it effectively invisible. In a series of CFS experiments, the researchers asked volunteers whether or not they could see a specific object, such as a dog. Sometimes it was displayed, sometimes not. When it was not displayed or when the image was of another animal such as a zebra or kangaroo, the volunteers typically reported seeing nothing. But when a dog was displayed and the question mentioned a dog, the volunteers were significantly more likely to become aware of it. "If you hear a word, that greases the wheels of perception," says Lupyan: the visual system becomes primed for anything to do with dogs. © Copyright Reed Business Information Ltd.
What people experience as death creeps in—after the heart stops and the brain becomes starved of oxygen—seems to lie beyond the reach of science. But the authors of a new study on dying rats make a bold claim: After cardiac arrest, the rodents’ brains enter a state similar to heightened consciousness in humans. The researchers suggest that if the same is true for people, such brain activity could be the source of the visions and other sensations that make up so-called near-death experiences. Estimated to occur in about 20% of patients who survive cardiac arrest, near-death experiences are frequently described as hypervivid or “realer-than-real,” and often include leaving the body and observing oneself from outside, or seeing a bright light. The similarities between these reports are hard to ignore, but the conversation about near-death experiences often bleeds into metaphysics: Are these visions produced solely by the brain, or are they a glimpse at an afterlife outside the body? Neurologist Jimo Borjigin of the University of Michigan, Ann Arbor, got interested in near-death experiences during a different project—measuring the hormone levels in the brains of rodents after a stroke. Some of the animals in her lab died unexpectedly, and her measurements captured a surge in neurochemicals at the moment of their death. Previous research in rodents and humans has shown that electrical activity surges in the brain right after the heart stops, then goes flat after a few seconds. Without any evidence that this final blip contains meaningful brain activity, Borjigin says “it’s perhaps natural for people to assume that [near-death] experiences came from elsewhere, from more supernatural sources.” But after seeing those neurochemical surges in her animals, she wondered about those last few seconds, hypothesizing that even experiences seeming to stretch for days in a person’s memory could originate from a brief “knee-jerk reaction” of the dying brain. © 2012 American Association for the Advancement of Science.
For most people, seeing a picture of a famous face — Oprah Winfrey, the Queen or Einstein, for instance — sparks immediate recognition and brings the name readily to the lips. But for people with a rare form of early-onset dementia called primary progressive aphasia, or PPA, the ability to identify a face or the person's name can be impaired. PPA strikes people aged about 40 to 65, much earlier than is typical for other forms of dementia like Alzheimer's disease. The condition is characterized by a deterioration in language and eventually the ability to communicate, although at least initially cognitive function in other areas remains intact, said Tamar Gefen, a PhD candidate in clinical neuropsychology at Northwestern University in Chicago. "Memory is fine, attention is fine and their planning, their judgment, their personality, their emotions — they're intact," explained Gefen, adding that early symptoms can include being unable to recall the names of familiar people or in some cases everyday objects. "Someone will come in and say: 'I can't remember my co-worker's name. I see her every day and I cannot remember it,"' she said. As the disease progresses, the person has difficulty speaking coherently and eventually stops talking altogether. Since the inability to put a name to a face can be an early sign of Alzheimer's disease, Gefen said it's important to properly diagnose the cause using specific tests that can identify PPA. © CBC 2013