Links for Keyword: Attention
by Helen Thomson DRAW a line across a page, then write on it what you had for dinner yesterday and what you plan to eat tomorrow. If you are a native English speaker, or hail from pretty much any European country, you no doubt wrote last night's meal to the left of tomorrow night's. That's because we construct mental timelines to represent and reason about time, and most people in the West think of the past as on the left, and the future as on the right. Arnaud Saj at the University of Geneva, Switzerland, and his colleagues wondered whether the ability to conjure up a mental timeline is a necessary part of reasoning about events in time. To investigate, they recruited seven Europeans with what's called left hemispatial neglect. That means they have damage to parts of the right side of their brain, limiting their ability to detect, identify and interact with objects in the left-hand side of space. They may eat from only the right side of a plate, shave just the right side of their face, and ignore numbers on the left side of a clock. The team also recruited seven volunteers who had damage to the right side of their brain but didn't have hemispatial neglect, and seven people with undamaged brains. All the volunteers took part in a variety of memory tests. First, they learned about a fictional man called David. They were shown pictures of what David liked to eat 10 years ago, and what he might like to eat in 10 years' time. Participants were then shown drawings of 10 of David's favourite foods, plus four food items they hadn't seen before. Participants had to say whether it was a food that David liked in the past or might like in future. The tests were repeated with items in David's apartment, and his favourite clothes. © Copyright Reed Business Information Ltd.
Associated Press A sophisticated, real-world study confirms that dialing, texting or reaching for a cell phone while driving raises the risk of a crash or near-miss, especially for younger drivers. But the research also produced a surprise: Simply talking on the phone did not prove dangerous, as it has in other studies. This one did not distinguish between handheld and hands-free devices - a major weakness. And even though talking doesn't require drivers to take their eyes off the road, it's hard to talk on a phone without first reaching for it or dialing a number - things that raise the risk of a crash, researchers note. Earlier work with simulators, test tracks and cell phone records suggests that risky driving increases when people are on cell phones, especially teens. The 15- to 20-year-old age group accounts for 6 percent of all drivers but 10 percent of traffic deaths and 14 percent of police-reported crashes with injuries. For the new study, researchers at the Virginia Tech Transportation Institute installed video cameras, global positioning systems, lane trackers, gadgets to measure speed and acceleration, and other sensors in the cars of 42 newly licensed drivers 16 or 17 years old, and 109 adults with an average of 20 years behind the wheel. © 2014 Hearst Communications, Inc.
Tomas Jivanda Being pulled into the world of a gripping novel can trigger actual, measurable changes in the brain that linger for at least five days after reading, scientists have said. The new research, carried out at Emory University in the US, found that reading a good book may cause heightened connectivity in the brain and neurological changes that persist in a similar way to muscle memory. The changes were registered in the left temporal cortex, an area of the brain associated with receptivity for language, as well as the primary sensory motor region of the brain. Neurons of this region have been associated with tricking the mind into thinking it is doing something it is not, a phenomenon known as grounded cognition - for example, just thinking about running can activate the neurons associated with the physical act of running. “The neural changes that we found associated with physical sensation and movement systems suggest that reading a novel can transport you into the body of the protagonist,” said neuroscientist Professor Gregory Berns, lead author of the study. “We already knew that good stories can put you in someone else’s shoes in a figurative sense. Now we’re seeing that something may also be happening biologically.” Twenty-one students took part in the study, with all participants reading the same book - Pompeii, a 2003 thriller by Robert Harris, which was chosen for its page-turning plot. “The story follows a protagonist, who is outside the city of Pompeii and notices steam and strange things happening around the volcano,” said Prof Berns. “It depicts true events in a fictional and dramatic way. It was important to us that the book had a strong narrative line.” © independent.co.uk
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 19080 - Posted: 12.31.2013
Oliver Burkeman As we stumble again into the season of overindulgence – that sacred time of year when wine, carbs and sofas replace brisk walks for all but the most virtuous – a headline in the (excellent) new online science magazine Nautilus catches my eye: "What If Obesity Is Nobody's Fault?" The article describes new research on mice: a genetic alteration, it appears, can make them obese, despite eating no more than others. "Many of us unfortunately have had an attitude towards obese people [as] having a lack of willpower or self-control," one Harvard researcher is quoted as saying. "It's clearly something beyond that." No doubt. But that headline embodies an assumption that's rarely questioned. Suppose, hypothetically, obesity were solely a matter of willpower: laying off the crisps, exercising and generally bucking your ideas up. What makes us so certain that obesity would be the fault of the obese even then? This sounds like the worst kind of bleeding-heart liberalism, a condition from which I probably suffer (I blame my genes). But it's a real philosophical puzzle, with implications reaching far beyond obesity to laziness in all contexts, from politicians' obsession with "hardworking families" to the way people beat themselves up for not following through on their plans. We don't blame people for most physical limitations (if you broke your leg, it wouldn't be a moral failing to cancel your skydiving trip), nor for many other impediments: it's hardly your fault if you're born into educational or economic disadvantage. Yet almost everyone treats laziness and weakness of will as exceptions. If you can't be bothered to try, you've only yourself to blame. It's a rule some apply most harshly to themselves, mounting epic campaigns of self-chastisement for procrastinating, failing to exercise and so on. © 2013 Guardian News and Media Limited
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19034 - Posted: 12.14.2013
By Emilie Reas Did you make it to work on time this morning? Go ahead and thank the traffic gods, but also take a moment to thank your brain. The brain’s impressively accurate internal clock allows us to detect the passage of time, a skill essential for many critical daily functions. Without the ability to track elapsed time, our morning shower could continue indefinitely. Without that nagging feeling to remind us we’ve been driving too long, we might easily miss our exit. But how does the brain generate this finely tuned mental clock? Neuroscientists believe that we have distinct neural systems for processing different types of time, for example, to maintain a circadian rhythm, to control the timing of fine body movements, and for conscious awareness of time passage. Until recently, most neuroscientists believed that this latter type of temporal processing – the kind that alerts you when you’ve lingered over breakfast for too long – is supported by a single brain system. However, emerging research indicates that the model of a single neural clock might be too simplistic. A new study, recently published in the Journal of Neuroscience by neuroscientists at the University of California, Irvine, reveals that the brain may in fact have a second method for sensing elapsed time. What’s more, the authors propose that this second internal clock not only works in parallel with our primary neural clock, but may even compete with it. Past research suggested that a brain region called the striatum lies at the heart of our central inner clock, working with the brain’s surrounding cortex to integrate temporal information. For example, the striatum becomes active when people pay attention to how much time has passed, and individuals with Parkinson’s Disease, a neurodegenerative disorder that disrupts input to the striatum, have trouble telling time. © 2013 Scientific American
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 18978 - Posted: 11.27.2013
Ed Yong A large international group set up to test the reliability of psychology experiments has successfully reproduced the results of 10 out of 13 past experiments. The consortium also found that two effects could not be reproduced. Psychology has been buffeted in recent years by mounting concern over the reliability of its results, after repeated failures to replicate classic studies. A failure to replicate could mean that the original study was flawed, the new experiment was poorly done or the effect under scrutiny varies between settings or groups of people. To tackle this 'replicability crisis', 36 research groups formed the Many Labs Replication Project to repeat 13 psychological studies. The consortium combined tests from earlier experiments into a single questionnaire — meant to take 15 minutes to complete — and delivered it to 6,344 volunteers from 12 countries. The team chose a mix of effects that represent the diversity of psychological science, from classic experiments that have been repeatedly replicated to contemporary ones that have not. Ten of the effects were consistently replicated across different samples. These included classic results from economics Nobel laureate and psychologist Daniel Kahneman at Princeton University in New Jersey, such as gain-versus-loss framing, in which people are more prepared to take risks to avoid losses, rather than make gains1; and anchoring, an effect in which the first piece of information a person receives can introduce bias to later decisions2. The team even showed that anchoring is substantially more powerful than Kahneman’s original study suggested. © 2013 Nature Publishing Group
by Anil Ananthaswamy Can you tickle yourself if you are fooled into thinking that someone else is tickling you? A new experiment says no, challenging a widely accepted theory about how our brains work. It is well known that we can't tickle ourselves. In 2000, Sarah-Jayne Blakemore of University College London (UCL) and colleagues came up with a possible explanation. When we intend to move, the brain sends commands to the muscles, but also predicts the sensory consequences of the impending movement. When the prediction matches the actual sensations that arise, the brain dampens down its response to those sensations. This prevents us from tickling ourselves (NeuroReport, DOI: 10.1097/00001756-200008030-00002). Jakob Hohwy of Monash University in Clayton, Australia, and colleagues decided to do a tickle test while simultaneously subjecting people to a body swap illusion. In this illusion, the volunteer and experimenter sat facing each other. The subject wore goggles that displayed the feed from a head-mounted camera. In some cases the camera was mounted on the subject's head, so that they saw things from their own perspective, while in others it was mounted on the experimenter's head, providing the subject with the experimenter's perspective. Using their right hands, both the subject and the experimenter held on to opposite ends of a wooden rod, which had a piece of foam attached to each end. The subject and experimenter placed their left palms against the foam at their end. Next, the subject or the experimenter took turns to move the rod with their right hand, causing the piece of foam to tickle both of their left palms. © Copyright Reed Business Information Ltd.
By Daisy Grewal How good are you at multi-tasking? The way you answer that question may tell you more than you think. According to recent research, the better people think they are at multi-tasking, the worse they actually are at it. And the more that you think you are good at it, the more likely you are to multi-task when driving. Maybe the problem of distracted driving has less to do with the widespread use of smartphones and more to do with our inability to recognize our own limits. A study by David Sanbonmatsu and his colleagues looked at the relationship between people’s beliefs about their own multi-tasking ability and their likelihood of using a cell phone when driving. Importantly, the study also measured people’s actual multi-tasking abilities. The researchers found that people who thought they were good at multi-tasking were actually the worst at it. They were also the most likely to report frequently using their cell phones when driving. This may help explain why warning people about the dangers of cell phone use when driving hasn’t done much to curb the behavior. The study is another reminder that we are surprisingly poor judges of our own abilities. Research has found that people overestimate their own qualities in a number of areas including intelligence, physical health, and popularity. Furthermore, the worse we are at something, the more likely we may be to judge ourselves as competent at it. Psychologists David Dunning and Justin Kruger have studied how incompetence, ironically, is often the result of not being able to accurately judge one’s own incompetence. In one study, they found that people who scored the lowest on tests of grammar and logic were the most likely to overestimate their own abilities. The reverse was also true: the most competent people were the most likely to underestimate their abilities. And multi-tasking may be just another area where incompetence breeds over-confidence. © 2013 Scientific American
Why do some people feel as though one of their body parts is not truly part of them and go to crazy lengths to get rid of it? Paul D. McGeoch answers: Certain people hold a deep desire to amputate a healthy limb. They are not psychotic, and they fully realize that what they want is abnormal. Nevertheless, they have felt from childhood that the presence of a specific limb, usually a leg, somehow makes their body “overcomplete.” Ultimately, many will achieve their desired amputation through self-inflicted damage or surgery. During the past few years my work with neuroscientists Vilayanur S. Ramachandran of U.C.S.D. and David Brang of Northwestern University, along with research by neuroscientist Peter Brugger of University Hospital Zurich in Switzerland, has transformed our understanding of this condition. Our findings suggest that a dysfunction of specific brain areas on the right side of the brain, which are involved in generating our body image, may explain the desire. Bizarre disorders of body image have long been known to arise after a stroke or other incident inflicts damage to the right side of the brain, particularly in the parietal lobe. The right posterior parietal cortex seems to combine several incoming streams of information—touch, joint position sense, vision and balance—to form a dynamic body image that changes as we interact with the world around us. In brain scans, we have found this exact part of the right parietal lobe to activate abnormally in individuals desiring limb removal. Because the primary sensory areas of the brain still function normally, sufferers are able to see and feel the limb in question. Yet they do not experience it as part of their body because the right posterior parietal lobe fails to adequately represent it. The mismatch between a person's actual physical body and his or her body image seems to cause ongoing arousal in the sympathetic nervous system, which may intensify the desire to remove the limb.
Given that sufferers date these feelings to childhood, the right parietal dysfunction most likely is congenital or arises in early development. © 2013 Scientific American
Katherine Harmon Courage An infant's innate sense for numbers predicts how their mathematical aptitude will develop years later, a team of US researchers has found. Babies can spot if a set of objects increases or decreases in number — for instance, if the number of dots on a screen grows, even when dot size, colour and arrangement also change. But until recently, researchers could generally only determine the number sense of groups of babies, thus ruling out the ability to correlate this with later mathematics skills in individuals. In 2010, Elizabeth Brannon, a neuroscientist at Duke University in Durham, North Carolina, and her colleagues demonstrated that they could test and track infants' number sense over time1. To do this, six-month-old babies are presented with two screens. One shows a constant number of dots, such as eight, changing in appearance, and the other also shows changing dots but presents different numbers of them — eight sometimes and 16 other times, for instance. An infant who has a good primitive number sense will spend more time gazing at the screen that presents the changing number of dots. In the latest work, which is published in this week's Proceedings of the National Academy of Sciences2, Brannon's team took a group of 48 children who had been tested at six months of age and retested them three years later, using the same dot test but also other standard maths tests for preschoolers — including some that assessed the ability to count, to tell which of two numbers is larger and to do basic calculations. © 2013 Nature Publishing Group
Related chapters from BP7e: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 14: Attention and Consciousness
Link ID: 18822 - Posted: 10.22.2013
by Helen Thomson ONE moment you are alive. The next you are dead. A few hours later and you are alive again. Pharmacologists have discovered a mechanism that triggers Cotard's syndrome – the mysterious condition that leaves people feeling like they, or parts of their body, no longer exist. With the ability to switch the so-called walking corpse syndrome on and off comes the prospect of new insights into how conscious experiences are constructed. Acyclovir – also known by the brand name Zovirax – is a common drug used to treat cold sores and other herpes infections. It usually has no harmful side effects. However, about 1 per cent of people who take the drug orally or intravenously experience some psychiatric side effects, including Cotard's. These occur mainly in people who have renal failure. To investigate the potential link between acyclovir and Cotard's, Anders Helldén at Karolinska University Hospital in Stockholm and Thomas Lindén at the Sahlgrenska Academy in Gothenburg pooled data from Swedish drug databases along with hospital admissions. They identified eight people with acyclovir-induced Cotard's. One woman with renal failure began using acyclovir to treat shingles. She ran into a hospital screaming, says Helldén. After an hour of dialysis, she started to talk: she said the reason she was so anxious was that she had a strong feeling she was dead. After a few more hours of dialysis she said, "I'm not quite sure whether I'm dead any more but I'm still feeling very strange." Four hours later: "I'm pretty sure I'm not dead any more but my left arm is definitely not mine." Within 24 hours, the symptoms had disappeared. © Copyright Reed Business Information Ltd.
by Colin Barras A part of all of us loves sums. Eavesdropping on the brain while people go about their daily activity has revealed the first brain cells specialised for numbers. Josef Parvizi and his colleagues at Stanford University in California enlisted the help of three people with epilepsy whose therapy involved placing a grid of electrodes on the surface of their brain that record activity. Neurons fired in a region called the intraparietal sulcus when the three volunteers performed arithmetic tests, suggesting they dealt with numbers. The team continued to monitor brain activity while the volunteers went about their normal activity in hospital. Comparing video footage of their stay with their brain activity (see video, above) revealed that the neurons remained virtually silent for most of the time, bursting into life only when the volunteers talked about numbers or numerical concepts such as "more than" or "less than". There is debate over whether some neural populations perform many functions or are involved in very precise tasks. "We show here that there is specialisation for numeracy," says Parvizi. Journal reference: Nature Communications, DOI: 10.1038/ncomms3528 © Copyright Reed Business Information Ltd.
Many people, I've heard talk, wonder what's going on inside Republican speaker John Boehner's brain. For cognitive neuroscientists, Boehner's brain is a case study. At the same time, others are frustrated with Democrat Harry Reid. The Senate Majority leader needs to take a tip from our founding fathers. Many of the intellectual giants who founded our democracy were both statesmen and scientists, and they applied the latest in scientific knowledge of their day to advantage in governing. The acoustics of the House of Representatives, now Statuary Hall, allowed John Quincy Adams and his comrades to eavesdrop on other members of congress conversing in whispers on the opposite side of the parabolic-shaped room. Senator Reid, in stark contrast, is still applying ancient techniques used when senators wore togas -- reason and argument -- and we all know how badly that turned out. The search for a path to compromise can be found in the latest research on the neurobiological basis of social behavior. Consider this new finding just published in the journal Brain Research. Oxytocin, a peptide produced in the hypothalamus of the brain and known to cement the strong bond between mother and child at birth, has been found to promote compromise in rivaling groups! This new research suggests that Congresswoman Nancy Pelosi could single-handedly end the Washington deadlock by spritzing a bit of oxytocin in her perfume and wafting it throughout the halls of congress. One can only imagine the loving effect this hormone would have on Senate Republican Ted Cruz, suddenly overwhelmed with an irresistible urge to bond with his colleagues, fawning for a cozy embrace like a babe cuddling in its mother's arms. And it is so simple! No stealthy spiking the opponent's coffee (or third martini at lunch) would be required, oxytocin works when it is inhaled through the nasal passages as an odorless vapor. © 2013 TheHuffingtonPost.com, Inc.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 1: Biological Psychology: Scope and Outlook
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 1: An Introduction to Brain and Behavior
Link ID: 18761 - Posted: 10.08.2013
By Bruce Bower Cartoon ghosts have scared up evidence that the ability to visualize objects in one’s mind materializes between ages 3 and 5. When asked to pick which of two mirror-image ghost cutouts or drawings fit in a ghost-shaped hole, few 3-year-olds, a substantial minority of 4-year-olds and most 5-year-olds regularly succeeded, say psychologist Andrea Frick of the University of Bern, Switzerland, and her colleagues. Girls performed as well as boys on the task, suggesting that men’s much-studied advantage over women in mental rotation doesn’t emerge until after age 5, the researchers report Sept. 17 in Cognitive Development. Mental rotation is a spatial skill regarded as essential for science and math achievement. Most tasks that researchers use to assess mental rotation skills involve pressing keys to indicate whether block patterns oriented at different angles are the same or different. That challenge overwhelms most preschoolers. Babies apparently distinguish block patterns from mirror images of those patterns (SN: 12/20/08, p. 8), but it’s unclear whether that ability enables mental rotation later in life. Frick’s team studied 20 children at each of three ages, with equal numbers of girls and boys. Youngsters saw two ghosts cut out of foam, each a mirror image of the other. Kids were asked to turn the ghosts in their heads and choose the one that would fit like a puzzle piece into a ghost’s outline on a board. Over seven trials, the ghosts were tilted at angles varying from the position of the outline. The researchers used three pairs of ghost cutouts, for a total of 21 trials. © Society for Science & the Public 2000 - 2013
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 8: Hormones and Sex
Link ID: 18706 - Posted: 09.26.2013
By Neuroskeptic Neuroscientists are interested in how brains interact socially. One of the main topics of study is ‘mentalizing’ aka ‘theory of mind’, the ability to accurately attribute mental states – such as beliefs and emotions – to other people. It is widely believed that the brain has specific areas for this – i.e. social “modules” (although today most neuroscientists are shy about using that word, it’s basically what’s at issue.) But two new papers out this week suggest that people can still mentalize successfully after damage to “key parts of the theory of mind network”. Herbet et al, writing in Cortex, showed few effects of surgical removal of the right frontal lobe in 10 brain tumour patients. On two different mentalizing tasks, they showed that removal caused either no decline in performance, or only a transient one. Meanwhile Michel et al report that the left temporal pole is dispensable for mentalizing as well, in a single case report in the Journal of Cognitive Neuroscience. They describe a patient suffering from frontotemporal dementia (FTD), whose left temporal lobe was severely atrophied. He’d lost the use of language, but he performed quite normally on theory of mind tests adapted to be non-linguistic. In both papers, these patients don’t have those parts of the brain that are most activated in fMRI studies of mentalizing. Where the blobs on the brain normally go, they have no brain.
By Melissa Hogenboom Science reporter, BBC News Smaller animals tend to perceive time in slow-motion, a new study has shown. This means that they can observe movement on a finer timescale than bigger creatures, allowing them to escape from larger predators. Insects and small birds, for example, can see more information in one second than a larger animal such as an elephant. The work is published in the journal Animal Behaviour. "The ability to perceive time on very small scales may be the difference between life and death for fast-moving organisms such as predators and their prey," said lead author Kevin Healy, at Trinity College Dublin (TCD), Ireland. The reverse was found in bigger animals, which may miss things that smaller creatures can rapidly spot. In humans, too, there is variation among individuals. Athletes, for example, can often process visual information more quickly. An experienced goalkeeper would therefore be quicker than others in observing where a ball comes from. The speed at which humans absorb visual information is also age-related, said Andrew Jackson, a co-author of the work at TCD. "Younger people can react more quickly than older people, and this ability falls off further with increasing age." The team looked at the variation of time perception across a variety of animals. They gathered datasets from other teams who had used a technique called critical flicker fusion frequency, which measures the speed at which the eye can process light. BBC © 2013
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 18651 - Posted: 09.16.2013
By Josh Shaffer DURHAM It’s not often that the high-minded world of neuroscience collides with the corny, old-fashioned art of ventriloquism. One depends on dummies; the other excludes them. But a Duke University study uses puppet-based comedy to demonstrate the complicated inner workings of the brain and shows what every ventriloquist knows: The eye is more convincing than the ear. The study, which appears in the journal PLOS ONE, seeks to explain how the brain combines information coming from two different senses. How, asks Duke psychology and neuroscience professor Jennifer Groh, does the brain determine where a sound is coming from? In your eyes, the retina takes a snapshot, she said. It makes a topographic image of what’s in front of you. But the ears have nothing concrete to go on. They have to rely on how loud the sound is, how far away and from what direction. That’s where a ventriloquist comes in, providing a model for this problem. With a puppet, the noise and the movement are coming from different places. So how does the brain fix this and choose where to look? Duke researchers tested their hypotheses on 11 people and two monkeys, placing them in a soundproof booth.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18619 - Posted: 09.09.2013
by Colin Barras Familiarity may breed contempt, and it also makes it easier to ignore our nearest and dearest. The human brain has an uncanny ability to focus on one voice in a sea of chatter, for example, at a party, but exactly how it does so is still up for debate. "In the past, people have looked at the acoustic characteristics that enable the brain to do this," says Ingrid Johnsrude at Queen's University in Kingston, Ontario, Canada. "Things like differences in voice pitch or its timbre." Johnsrude and her colleagues wondered if the familiarity of the voice also plays a role. Can people focus on one voice in a crowd more effectively if it belongs to a close relation? And is a familiar voice more easily ignored if we want to listen to someone else? To find out, the team recruited 23 married couples. Each had been married and living together for at least 18 years. Individuals were played two sentences simultaneously and asked to report back details about one of them, such as the colour and number mentioned. They did this correctly 80 per cent of the time when their spouse spoke the target sentence and a stranger spoke the decoy sentence. If strangers spoke both, the success rate dropped to 65 per cent. © Copyright Reed Business Information Ltd
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18584 - Posted: 08.31.2013
Elizabeth Norton Brain cells, like Henry Higgins in My Fair Lady, grow accustomed to a familiar face—so much so that repeatedly viewing a distorted face will make the normal face look odd. This process, known as visual adaptation, is enhanced by sleep and may be an essential component of memory, a new study finds. After multiple exposures to a striking visual pattern, neurons in the retina and visual cortex of the brain fire less frequently the next time you see the pattern. By devoting less energy to familiar sights, the brain is free to concentrate on the next new thing that comes along; the original image becomes a routine perception. Scientists think that this allocation of mental resources is crucial to our ability to perceive and interpret our surroundings. Whether visual adaptation is a prelude to memory formation is another question, one that intrigued cognitive neuroscientist Thomas Ditye of University College London. Because sleep strengthens memory, Ditye and colleagues decided to test whether visual adaptation also improves after some shuteye. The researchers asked a group of volunteers to view a computer screen on which distorted images of the faces of actors George Clooney and Angelina Jolie flashed for periods of 0.5 to 6 seconds. The images were “extended”—stretched until they achieved the blown-up look of a fun house mirror. The object of the test was to determine whether the brain would adapt to images and begin seeing the distorted faces as normal. The volunteers, however, believing their reaction time was being tested, merely pressed a button whenever they saw the image. © 2012 American Association for the Advancement of Science
By Susan Gaidos If you’re someone who enjoys being recognized, Julian Lim is your kind of waiter. Lim, who’s working his way through college waiting tables, remembers the face of everyone that walks through the door of the South Bend, Ind., restaurant where he works. His abilities go beyond making his customers feel special. This spring, when he cut his hand on broken glass, he pegged the emergency room nurse as a fellow student from his grade school days. Though they’d never spoken, and the girl had since undergone changes in appearance, Lim recognized her instantly. Carrie Shanafelt is good with faces, too. A professor of literature at Grinnell College in Iowa, Shanafelt can spot her students outside the classroom, whether it’s the first week of class or years later. And Ajay Jansari, an information technology specialist in London, often has to see a face only once to remember it, even those he meets thousands of miles from home. While some people say they never forget a face, these folks have scientific studies to back their claims. Called “super recognizers,” they’re among a small group of individuals being studied by scientists at Dartmouth College and in England to better understand how some people can recognize almost every face they have ever seen. Scientists are now putting super recognizers’ skills to the test to get a handle on how face-processing areas of the brain work to make a few people so adept at recalling faces. Findings from the studies may advance understanding of how most people categorize faces — a subject that is still poorly understood. © Society for Science & the Public 2000 - 2013