Chapter 14. Attention and Consciousness
By Maggie Koerth-Baker

In 1990, when James Danckert was 18, his older brother Paul crashed his car into a tree. He was pulled from the wreckage with multiple injuries, including head trauma. The recovery proved difficult. Paul had been a drummer, but even after a broken wrist had healed, drumming no longer made him happy. Over and over, Danckert remembers, Paul complained bitterly that he was just — bored. “There was no hint of apathy about it at all,” says Danckert. “It was deeply frustrating and unsatisfying for him to be deeply bored by things he used to love.”

A few years later, when Danckert was training to become a clinical neuropsychologist, he found himself working with about 20 young men who had also suffered traumatic brain injury. Thinking of his brother, he asked them whether they, too, got bored more easily than they had before. “And every single one of them,” he says, “said yes.” Those experiences helped to launch Danckert on his current research path. Now a cognitive neuroscientist at the University of Waterloo in Canada, he is one of a small but growing number of investigators engaged in a serious scientific study of boredom.

There is no universally accepted definition of boredom. But whatever it is, researchers argue, it is not simply another name for depression or apathy. It seems to be a specific mental state that people find unpleasant — a lack of stimulation that leaves them craving relief, with a host of behavioural, medical and social consequences. © 2016 Nature Publishing Group
Don’t blame impulsive people for their poor decisions. It’s not necessarily their fault. Impulsivity could result from not having enough time to veto our own actions. At least that is the implication of a twist on a classic experiment on free will.

In 1983, neuroscientist Benjamin Libet performed an experiment to test whether we have free will. Participants were asked to voluntarily flex a finger while watching a clock face with a rotating dot. They had to note the position of the dot as soon as they became aware of their intention to act. As they were doing so, Libet recorded their brain activity via EEG electrodes attached to the scalp. He found that a spike in brain activity called the readiness potential, which precedes a voluntary action, occurred about 350 milliseconds before the volunteers became consciously aware of their intention to act. The readiness potential is thought to signal the brain preparing for movement.

Libet interpreted his results to mean that free will is an illusion. But we’re not complete slaves to our neurons, he reasoned, as there was a 200-millisecond gap between conscious awareness of our intention and the initiation of movement. Libet argued that this was enough time to consciously veto the action, or exert our “free won’t”.

While Libet’s interpretations have remained controversial, this hasn’t stopped scientists from carrying out variations of his experiment. Among other things, this has revealed that people with Tourette’s syndrome, who have uncontrollable tics, experience a shorter veto window than people without the condition, as do those with schizophrenia. © Copyright Reed Business Information Ltd.
By Melissa Healy

A new study finds that policies on defining brain death vary from hospital to hospital and could result in serious errors. Since 2010, neurologists have had a clear set of standards and procedures to distinguish a brain-dead patient from one who might emerge from an apparent coma. But when profoundly unresponsive patients are rushed to hospitals around the nation, the physicians who make the crucial call are not always steeped in the diagnostic fine points of brain death and the means of identifying it with complete confidence.

State laws governing the diagnosis of brain death vary widely. Some states allow any physician to make the diagnosis, while others dictate the level of specialty a physician making the call must have. Some require that a second physician confirm the diagnosis or that a given period of time elapse. Others make no such demands.

Given this variability, hospital policies can be invaluable guides for physicians, hospital administrators and patients’ families. In the absence of consistent physician expertise or legal requirements, hospital protocols can translate a scientific consensus into a step-by-step checklist. That would help ensure that no one who is not brain-dead is denied further care or considered a potential organ donor and that the deceased and their families would have every opportunity to donate organs.
Link ID: 21749 - Posted: 01.05.2016
By KARL OVE KNAUSGAARD

I arrived in Tirana, Albania, on a Sunday evening in late August, on a flight from Istanbul. The sun had set while the plane was midflight, and as we landed in the dark, images of fading light still filled my mind. The man next to me, a young, red-haired American wearing a straw hat, asked me if I knew how to get into town from the airport. I shook my head, put the book I had been reading into my backpack, got up, lifted my suitcase out of the overhead compartment and stood waiting in the aisle for the door up ahead to open.

That book was the reason I had come. It was called “Do No Harm,” and it was written by the British neurosurgeon Henry Marsh. His job is to slice into the brain, the most complex structure we know of in the universe, where everything that makes us human is contained, and the contrast between the extremely sophisticated and the extremely primitive — all of that work with knives, drills and saws — fascinated me deeply. I had sent Marsh an email, asking if I might meet him in London to watch him operate. He wrote a cordial reply saying that he seldom worked there now, but he was sure something could be arranged. In passing, he mentioned that he would be operating in Albania in August and in Nepal in September, and I asked hesitantly whether I could join him in Albania. Now I was here.

Tense and troubled, I stepped out of the door of the airplane, having no idea what lay ahead. I knew as little about Albania as I did about brain surgery. The air was warm and stagnant, the darkness dense. A bus was waiting with its engine running. Most of the passengers were silent, and the few who chatted with one another spoke a language I didn’t know. It struck me that 25 years ago, when this was among the last remaining Communist states in Europe, I would not have been allowed to enter; then, the country was closed to the outside world, almost like North Korea today.
Now the immigration officer barely glanced at my passport before stamping it. She dully handed it back to me, and I entered Albania. © 2015 The New York Times Company
Link ID: 21739 - Posted: 12.30.2015
By Diana Kwon

Pupils are a rich source of social information. Although changes in pupil size are automatic and uncontrollable, they can convey interest, arousal, helpful or harmful intentions, and a variety of emotions. According to a new study published in Psychological Science, we even synchronize our pupil size with others—and doing so influences social decisions.

Mariska Kret, a psychologist now at the University of Amsterdam in the Netherlands, and her colleagues recruited 69 Dutch university students to take part in an investment game. Each participant decided whether to transfer zero or five euros to a virtual partner after viewing a four-second video of the partner's eyes. The invested money is tripled, and the receiver chooses how much to give back to the donor—so subjects had to make quick decisions about how trustworthy each virtual partner seemed.

Using an eye tracker, the investigators found that the participants' pupils tended to mimic the changes in the partners' pupils, whether they dilated, constricted or remained static. As expected, subjects were more likely to give more money to partners with dilating pupils, a well-established signal of nonthreatening intentions. The more a subject mirrored the dilating pupils of a partner, the more likely he or she was to invest—but only if they were of the same race. The Caucasian participants trusted Caucasian eyes more than Asian eyes—which suggests that group membership is important when interpreting these subtle signals. © 2015 Scientific American
James Bond's villain in the latest 007 film, Spectre, could use a lesson in neuroanatomy, a Toronto neurosurgeon says. In a scene set in the Moroccan desert, Ernst Stavro Blofeld, played by Christoph Waltz, tortures Bond using restraints and a head clamp fused with a robotic drill. The goal is to inflict pain and erase 007's memory bank of faces. But Blofeld didn't have his brain anatomy down and would likely have killed Daniel Craig's character instead, Dr. Michael Cusimano of St. Michael's Hospital says in a letter published in this week's issue of the journal Nature.

Aiming to erase Bond's memory of faces, the villain correctly intends to drill into the lateral fusiform gyrus, an area of the brain responsible for recognizing faces, Cusimano said. But in practice, the drill was placed in the wrong area, aiming for the neck instead of the brain. "Whereas the drill should have been aimed just in front of 007's ear, it was directed below the mastoid process under and behind his left ear," Cusimano wrote. It likely would have triggered a stroke or massive hemorrhage, he said.

In a draft of the letter, Cusimano said he was "spellbound" watching the film in a packed theatre, but his enjoyment was somewhat marred by the blunder. "I laughed," he recalled in an interview. "I think people around me kind of looked at me and were wondering why I was laughing because it's a pretty tense part of the movie." ©2015 CBC/Radio-Canada.
Link ID: 21726 - Posted: 12.27.2015
By Ferris Jabr

Matthew Brien has struggled with overeating for the past 20 years. At age 24, he stood at 5′10′′ and weighed a trim 135 pounds. Today the licensed massage therapist tips the scales at 230 pounds and finds it particularly difficult to resist bread, pasta, soda, cookies and ice cream—especially those dense pints stuffed with almonds and chocolate chunks. He has tried various weight-loss programs that limit food portions, but he can never keep it up for long. “It's almost subconscious,” he says. “Dinner is done? Okay, I am going to have dessert. Maybe someone else can have just two scoops of ice cream, but I am going to have the whole damn [container]. I can't shut those feelings down.”

Eating for the sake of pleasure, rather than survival, is nothing new. But only in the past several years have researchers come to understand deeply how certain foods—particularly fats and sweets—actually change brain chemistry in a way that drives some people to overconsume. Scientists have a relatively new name for such cravings: hedonic hunger, a powerful desire for food in the absence of any need for it; the yearning we experience when our stomach is full but our brain is still ravenous. And a growing number of experts now argue that hedonic hunger is one of the primary contributors to surging obesity rates in developed countries worldwide, particularly in the U.S., where scrumptious desserts and mouthwatering junk foods are cheap and plentiful. “Shifting the focus to pleasure” is a new approach to understanding hunger and weight gain, says Michael Lowe, a clinical psychologist at Drexel University who coined the term “hedonic hunger” in 2007. © 2015 Scientific American
By JOSEPH LEDOUX

In this age of terror, we struggle to figure out how to protect ourselves — especially, of late, from active shooters. One suggestion, promoted by the Federal Bureau of Investigation and Department of Homeland Security, and now widely disseminated, is “run, hide, fight.” The idea is: Run if you can; hide if you can’t run; and fight if all else fails. This three-step program appeals to common sense, but whether it makes scientific sense is another question.

Underlying the idea of “run, hide, fight” is the presumption that volitional choices are readily available in situations of danger. But the fact is, when you are in danger, whether it is a bicyclist speeding at you or a shooter locked and loaded, you may well find yourself frozen, unable to act and think clearly. Freezing is not a choice. It is a built-in impulse controlled by ancient circuits in the brain involving the amygdala and its neural partners, and is automatically set into motion by external threats. By contrast, the kinds of intentional actions implied by “run, hide, fight” require newer circuits in the neocortex.

Contemporary science has refined the old “fight or flight” concept — the idea that those are the two hard-wired options when in mortal danger — to the updated “freeze, flee, fight.” While “freeze, flee, fight” is superficially similar to “run, hide, fight,” the two expressions make fundamentally different assumptions about how and why we do what we do when in danger.

Why do we freeze? It’s part of a predatory defense system that is wired to keep the organism alive. Not only do we do it, but so do other mammals and other vertebrates. Even invertebrates — like flies — freeze. If you are freezing, you are less likely to be detected if the predator is far away, and if the predator is close by, you can postpone the attack (movement by the prey is a trigger for attack). © 2015 The New York Times Company
Scientists showed that they could alter brain activity of rats and either wake them up or put them in an unconscious state by changing the firing rates of neurons in the central thalamus, a region known to regulate arousal. The study, published in eLife, was partially funded by the National Institutes of Health. “Our results suggest the central thalamus works like a radio dial that tunes the brain to different states of activity and arousal,” said Jin Hyung Lee, Ph.D., assistant professor of neurology, neurosurgery and bioengineering at Stanford University and a senior author of the study.

Located deep inside the brain, the thalamus acts as a relay station, sending neural signals from the body to the cortex. Damage to neurons in the central part of the thalamus may lead to problems with sleep, attention, and memory. Previous studies suggested that stimulation of thalamic neurons may awaken patients who have suffered a traumatic brain injury from minimally conscious states.

Dr. Lee’s team flashed laser pulses onto light-sensitive central thalamic neurons of sleeping rats, which caused the cells to fire. High-frequency stimulation of 40 or 100 pulses per second woke the rats. In contrast, low-frequency stimulation of 10 pulses per second sent the rats into a state reminiscent of absence seizures that caused them to stiffen and stare before returning to sleep. “This study takes a big step towards understanding the brain circuitry that controls sleep and arousal,” said Yejun (Janet) He, Ph.D., program director at NIH’s National Institute of Neurological Disorders and Stroke (NINDS).
Link ID: 21711 - Posted: 12.19.2015
By Geoffrey S. Holtzman

In November 1834, a 9-year-old boy named Major Mitchell was tried in Maine on one charge of maiming and one charge of felonious assault with intent to maim. He had lured an 8-year-old classmate into a field, beaten him with sticks, attempted to drown him in a stream, and castrated him with a piece of tin. Yet what makes this case so remarkable is neither the age of the defendant nor the violence of his crime, but the nature of his trial. Mitchell’s case marks the first time in U.S. history that a defendant’s attorney sought leniency from a jury on account of there being something wrong with the defendant’s brain.

More recently, there has been an explosion in the number of criminals who have sought leniency on similar grounds. While the evidence presented by Mitchell’s defense was long ago debunked as pseudoscience (and was rightly dismissed by the judge), the case for exculpating Major Mitchell may actually be stronger today than it was 181 years ago. In a curious historical coincidence, recent advances in neuroscience suggest that there really might have been something wrong with Major Mitchell’s brain and that neurological deficits really could have contributed to his violent behavior.

The case provides a unique window through which to view the relationship between 19th-century phrenology—the pseudoscientific study of the skull as an index of mental faculties—and 21st-century neuroscience. As you might expect, there is a world of difference between the two, but maintaining that difference depends crucially on the responsible use of neuroscience. Major Mitchell’s story cautions against overlooking neuroscience’s limitations, as well as its ability to be exploited for suspect purposes. © 2015 The Slate Group LLC.
The road map of conscious awareness has been deciphered. Now that we know which brain pathways control whether someone is awake or unconscious, we may be able to rouse people from a vegetative or minimally conscious state. In 2007, researchers used deep brain stimulation to wake a man from a minimally conscious state. “It was quite remarkable,” says Jin Hyung Lee at Stanford University in California. The 38-year-old had suffered a severe brain injury in a street mugging six years earlier. Before his treatment he was unable to communicate and had no voluntary control over his limbs. When doctors stimulated his thalamus – a central hub that sends signals all around the brain – his speech and movement gradually returned.

However, attempts to treat other people in a similar way have failed. The problem lies with the crudeness of the technique. “Deep brain stimulation is done without much knowledge of how it actually alters the circuits in the brain,” says Lee. The technique involves attaching electrodes to the brain and using them to stimulate the tissue beneath. Unfortunately, the electrodes can also stimulate unintended areas, which means it is hard to work out exactly what is happening in people’s brains. “There are a lot of fibres and different cells in the thalamus and working out what was going on in the brain was very difficult,” says Lee. “So we wanted to figure it out.” © Copyright Reed Business Information Ltd.
By Ariana Eunjung Cha

Attention-deficit/hyperactivity disorder is often thought of as a boy thing. In explaining the jump in cases in recent years, numerous researchers, educators and parents have theorized that perhaps boys are hardwired to be more impulsive, wiggly and less able to stay on task in the early years than their female counterparts. That may be a myth.

A study published in The Journal of Clinical Psychiatry on Tuesday shows a surprising 55 percent increase in the prevalence of diagnoses among girls — from 4.7 percent to 7.3 percent from 2003 to 2011. The rise in cases in girls mirrors a similar but less sharp rise in cases in boys, from a prevalence of 11.8 to 16.5 percent. During the same period, the researchers found an increase in cases across all races and ethnicities but especially in Hispanic children. In all children, the prevalence increased from 8.4 percent to 12 percent. The analysis, conducted by George Washington University biostatistician Sean D. Cleary and his co-author Kevin P. Collins of Mathematica Policy Research, was based on data from the National Survey of Children's Health, in which parents were asked whether they had been told by a doctor or other health care provider that their child has ADHD.
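The percentages in the study above are relative increases, not percentage-point changes, which is how a rise from 4.7 to 7.3 percent can be reported as a 55 percent jump. A minimal sketch (not from the study; it only re-derives the figures quoted in the article) checks the arithmetic:

```python
def relative_increase(old: float, new: float) -> float:
    """Percent change relative to the starting prevalence."""
    return (new - old) / old * 100

# Prevalence figures quoted in the article, 2003 -> 2011
girls = relative_increase(4.7, 7.3)    # ADHD diagnoses among girls
boys = relative_increase(11.8, 16.5)   # ADHD diagnoses among boys
print(f"girls: {girls:.0f}%, boys: {boys:.0f}%")  # girls: 55%, boys: 40%
```

Note that girls still had the lower absolute prevalence in 2011; their larger relative increase reflects the smaller 2003 baseline.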
By John Horgan

How does matter make mind? More specifically, how does a physical object generate subjective experiences like those you are immersed in as you read this sentence? How does stuff become conscious? This is called the mind-body problem, or, by philosopher David Chalmers, the “hard problem.” I expressed doubt that the hard problem can be solved — a position called mysterianism — in The End of Science. I argue in a new edition that my pessimism has been justified by the recent popularity of panpsychism. This ancient doctrine holds that consciousness is a property not just of brains but of all matter, like my table and coffee mug.

Panpsychism strikes me as self-evidently foolish, but non-foolish people — notably Chalmers and neuroscientist Christof Koch — are taking it seriously. How can that be? What’s compelling their interest? Have I dismissed panpsychism too hastily? These questions lured me to a two-day workshop on integrated information theory at New York University last month. Conceived by neuroscientist Giulio Tononi (who trained under the late, great Gerald Edelman), IIT is an extremely ambitious theory of consciousness. It applies to all forms of matter, not just brains, and it implies that panpsychism might be true. Koch and others are taking panpsychism seriously because they take IIT seriously. © 2015 Scientific American
Link ID: 21673 - Posted: 12.03.2015
Aimee Cunningham

For a child with attention deficit hyperactivity disorder, meeting the daily expectations of home and school life can be a struggle that extends to bedtime. The stimulant medications commonly used to treat ADHD can cause difficulty falling and staying asleep, a study finds. And that can make the next day that much harder.

As parents are well aware, sleep affects a child's emotional and physical well-being, and it is no different for those with ADHD. "Poor sleep makes ADHD symptoms worse," says Katherine M. Kidwell, a doctoral student in clinical psychology at the University of Nebraska, Lincoln, who led the study. "When children with ADHD don't sleep well, they have problems paying attention the next day, and they are more impulsive and emotionally reactive."

Stimulant medications boost alertness, and some studies have found a detrimental effect on children's sleep. However, other studies have concluded that the stimulants' ameliorating effects improve sleep. The drugs include amphetamines such as Adderall and methylphenidate such as Ritalin. To reconcile the mixed results on stimulants and children's sleep, Kidwell and her colleagues undertook a meta-analysis, a type of study that summarizes the results of existing research. The team found nine studies that met their criteria. These studies compared children who were taking stimulant medication with those who weren't. The studies also randomly assigned children to the experimental group or the control group and used objective measures of sleep quality and quantity, such as assessing sleep in a lab setting or with a wristwatch-like monitor at home rather than a parent's report. © 2015 npr
By Virginia Morell

Was that fish on your plate once a sentient being? Scientists have long believed that fish aren’t capable of the same type of conscious thought we are because they fail the “emotional fever” test. When researchers expose birds, mammals (including humans), and at least one species of lizard to new environments, they experience a slight rise in body temperature of 1°C to 2°C that lasts a while; it’s a true fever, as if they were responding to an infection. The fever is linked to the emotions because it’s triggered by an outside stimulus, yet produces behavioral and physiological changes that can be observed. Some scientists argue that these only occur in animals with sophisticated brains that sense and are conscious of what’s happening to them. Previous tests suggested that toads and fish don’t respond this way. Now, a new experiment that gave the fish more choices shows the opposite.

Researchers took 72 zebrafish and either did nothing with them or placed them alone in a small net hanging inside a chamber in their tank with water of about 27°C; zebrafish prefer water of about 28°C. After 15 minutes in the net, the team released the confined fish. They could then freely swim among the tank’s five other chambers, each heated to a different temperature along a gradient from 17.92°C to 35°C. (The previous study used a similar setup but gave goldfish a choice between only two chambers, both at higher temperatures.) The stressed fish spent more time—between 4 and 8 hours—in the warmer waters than did the control fish, and raised their body temperatures about 2°C to 4°C, showing an emotional fever, the scientists report online today in the Proceedings of the Royal Society B. Thus, their study upends a key argument against consciousness in fish, they say. © 2015 American Association for the Advancement of Science.
Jon Hamilton

A look at the brain's wiring can often reveal whether a person has trouble staying focused, and even whether they have attention deficit hyperactivity disorder, known as ADHD. A team led by researchers at Yale University reports that they were able to identify many children and adolescents with ADHD by studying data on the strength of certain connections in their brains. "There's an intrinsic signature," says Monica Rosenberg, a graduate student and lead author of the study in Nature Neuroscience. But the approach isn't ready for use as a diagnostic tool yet, she says.

The finding adds to the evidence that people with ADHD have a true brain disorder, not just a behavioral problem, says Mark Mahone, director of neuropsychology at the Kennedy Krieger Institute in Baltimore. "There are measurable ways that their brains are different," he says.

The latest finding came from an effort to learn more about brain connections associated with attention. Initially, the Yale team used functional MRI, a form of magnetic resonance imaging, to monitor the brains of 25 typical people while they did something really boring. Their task was to watch a screen that showed black-and-white images of cities or mountains and press a button only when they saw a city. © 2015 npr
Alva Noë

For some time now, I've been skeptical about the neuroscience of consciousness. Not so much because I doubt that consciousness is affected by neural states and processes, but because of the persistent tendency on the part of some neuroscientists to think of consciousness itself as a neural phenomenon. Nothing epitomizes this tendency better than Francis Crick's famous claim — he called it his "astonishing hypothesis" — that you are your brain. At an interdisciplinary conference at Brown not so long ago, I heard a prominent neuroscientist blandly assert, as if voicing well-established scientific fact, that thoughts, feelings and beliefs are specific constellations of matter that are located (as it happens) inside the head.

My own view — I laid this out in a book I wrote a few years back called Out of Our Heads — is that the brain is only part of the story, and that we can only begin to understand how the brain makes us conscious by realizing that the brain functions only in the setting of our bodies and our broader environmental (including our social and cultural) situation. The skull is not a magical membrane, my late collaborator, friend and teacher Susan Hurley used to say. And there is no reason to think the processes supporting consciousness are confined to what happens only on one side (the inside) of that boundary.

There is a nice interview on the Oxford University Press website with Anil Seth, the editor of a new Oxford journal, Neuroscience of Consciousness. It's an informative discussion and makes the valuable point that the study of consciousness is interdisciplinary. © 2015 npr
Link ID: 21631 - Posted: 11.14.2015
By Katherine Ellison

Last year, Sinan Sonmezler of Istanbul refused to keep going to school. His eighth-grade classmates called him “weird” and “stupid,” and his teachers rebuked him for his tendency to stare out the window during class. The school director told his parents he was “lazy.” Sinan has attention-deficit hyperactivity disorder, a condition still little understood in many parts of the world. “He no longer believes he can achieve anything, and has quit trying,” said Sinan’s father, Umit Sonmezler, a mechanical engineer.

While global diagnoses of A.D.H.D. are on the rise, public understanding of the disorder has not kept pace. Debates about the validity of the diagnosis and the drugs used to treat it — the same that have long polarized Americans — are now playing out from Northern and Eastern Europe to the Middle East and South America.

Data from various nations tell a story of rapid change. In Germany, A.D.H.D. diagnosis rates rose 381 percent from 1989 to 2001. In the United Kingdom, prescriptions for A.D.H.D. medications rose by more than 50 percent in five years to 657,000 in 2012, up from 420,000 in 2007. Consumption of A.D.H.D. medications doubled in Israel from 2005 to 2012. The surge in use of the medications has prompted skepticism that pharmaceutical firms, chasing profits in an $11 billion international market for A.D.H.D. drugs, are driving the global increase in diagnoses. In 2007, countries outside the United States accounted for only 17 percent of the world’s use of Ritalin. By 2012, that number had grown to 34 percent. © 2015 The New York Times Company
Link ID: 21618 - Posted: 11.10.2015
Doubts are emerging about one of our leading models of consciousness. It seems that brain signals thought to reflect consciousness are also generated during unconscious activity. A decade of studies has lent credence to the global neuronal workspace theory of consciousness, which states that when something is perceived unconsciously, or subliminally, that information is processed locally in the brain. In contrast, conscious perception occurs when the information is broadcast to a “global workspace”, or assemblies of neurons distributed across various brain regions, leading to activity over the entire network.

Proponents of this idea include Stanislas Dehaene at France’s national institute for health in Gif-sur-Yvette. He and his colleagues discovered that when volunteers view stimuli that either enter conscious awareness or don’t, their brains show identical EEG activity for the first 270 milliseconds. Then, if perception of the stimuli is subliminal, the brain activity peters out. However, when volunteers become conscious of the stimuli, there is a sudden burst of widespread brain activity 300 ms after the stimulus. This activity is characterised by an EEG signal called P3b, and has been called a neural correlate of consciousness.

Brian Silverstein and Michael Snodgrass at the University of Michigan in Ann Arbor, and colleagues, wondered if P3b could be detected during unconscious processing of stimuli. © Copyright Reed Business Information Ltd.
Link ID: 21603 - Posted: 11.05.2015
Scientists have come up with a questionnaire they say should help diagnose a condition called face blindness. Prosopagnosia, as doctors call it, affects around two in every 100 people in the UK and is the inability to recognise people by their faces alone. In its most extreme form, people cannot even recognise their family or friends. Milder forms, while still distressing, can be tricky to diagnose, which is why tests are needed.

People with prosopagnosia often use non-facial cues to recognise others, such as their hairstyle, clothes, voice, or distinctive features. Some may be unaware they have the condition, instead believing they have a "bad memory for faces". But prosopagnosia is entirely unrelated to intelligence or broader memory ability.

One [anonymous] person with prosopagnosia explains: "My biggest problem is seeing the difference between ordinary-looking people, especially faces with few specific traits. I work at a hospital with an awful lot of employees and I often introduce myself to colleagues with whom I have worked several times before. I also often have problems recognising my next-door neighbour, even though we have been neighbours for eight years now. She often changes clothes, hairstyle and hair colour. When I strive to recognise people, I try to use technical clues like clothing, hairstyle, scars, glasses, their dialect and so on."

Doctors can use computer-based tests to see if people can spot famous faces and memorise and recognise a set of unfamiliar faces. And now Drs Richard Cook and Punit Shah, of City University London and King's College London, have come up with a 20-item questionnaire to help measure the severity of someone's face blindness. © 2015 BBC
Link ID: 21598 - Posted: 11.04.2015