Chapter 18. Attention and Higher Cognition
By Deborah Kotz / Globe Staff Anyone who hears about the tragic death of a 13-year-old California girl after a routine tonsil-removal surgery has to feel for the grieving parents who don’t want her removed from life support. The McMaths refuse to believe that their daughter Jahi, who was declared brain dead more than a week ago, is truly dead because machines are keeping her other organs alive. “How could you not let me have my kid for Christmas?” said Nailah Winkfield, McMath’s mother, in an interview with local reporters. “And this is Children’s Hospital, supposed to be so compassionate, so loving, and I asked, can my daughter just live a few more days? Because she is living.” Her family has been fighting with hospital staff at Children’s Hospital & Research Center in Oakland to keep her body in a viable state and have her provided with nutrition via a feeding tube. “To me, it just looks like she’s at peace and she’s resting,” said Jahi’s uncle Omari Sealey, “and when she’s done going through the traumatic stuff that her body’s going through right now, and she feels well enough, she’ll wake up.” But McMath is dead—as horrible as that is for her family to fathom—and leaving her body attached to machines is akin to allowing a corpse to remain in a hospital bed without a proper burial. Perhaps hospitals should stop calling such care “life support” since it’s not actually supporting any living person, just a body. “This case is so sad it is almost beyond description,” wrote Arthur Caplan, head of the division of medical ethics at NYU Langone Medical Center, in a blog post Thursday on the NBC News website. “But that fact should not be a reason to take the view that we don’t know what to do when someone is pronounced brain dead. Brain dead is dead.” © 2013 Boston Globe Media Partners, LLC
Link ID: 19057 - Posted: 12.21.2013
By Greg Miller John McCluskey killed a vacationing couple in eastern New Mexico in 2010, set their camper trailer on fire with their bodies inside, and took off with their truck. In sentencing hearings held after his conviction, McCluskey’s lawyers argued that he should be spared the death penalty because abnormalities in his brain had made him impulsive and unable to control his behavior. Last week, a jury declared it had been unable to reach the unanimous decision required to sentence him to death. It’s not known whether the brain scans and other scientific evidence played a role in McCluskey escaping the death penalty. Nor is it the first time such evidence has been introduced when the death penalty was on the line. In fact, neuroscience is making increasingly regular courtroom appearances. “It’s amazing the extent to which judges, attorneys, and juries are taking this in stride,” said Owen Jones, a legal scholar at Vanderbilt University who observed a few hours of testimony in McCluskey’s case. “Just a few generations ago, this was beyond the realm of science fiction,” Jones said. But now, “you watch the jurors and they reflect no outward manifestation of what an extraordinary thing it is to look inside another person’s brain.” Nita Farahany, a bioethicist at Duke University, has been tracking the rise of legal cases involving neuroscience evidence in the U.S. The number of judicial opinions mentioning neuroscience evidence tripled between 2005 and 2011, from roughly 100 to more than 300. “It’s more prevalent than my numbers show,” Farahany said. That’s because most cases involving neuroscience evidence do not result in a written judicial opinion, and those that don’t are exceedingly difficult to find. © 2013 Condé Nast.
By Suzanne Allard Levingston Chris Ecarius had so much difficulty filling out his Social Security application online that the 62-year-old went to a doctor to find out why his brain didn’t seem to work properly. Over the years, he’d seen other doctors about similar struggles. He’d been told that he was depressed, but he didn’t feel depressed. This time, Ecarius got a different diagnosis: attention deficit hyperactivity disorder, a conclusion that seemed more appropriate for a child in grade school than an adult in retirement. When Ecarius, who lives in Houghton Lake, Mich., was young, he had trouble paying attention. He’d dropped out of school and left several jobs, had several traffic accidents and had never quite gotten on track. “I could have been a doctor,” he said. “I could have been a pharmacist, I could have been anything I wanted to be,” had someone diagnosed his ADHD when he was a child. With the help of his wife, Ecarius was able to settle into a skilled trade job with General Motors, a position he held until age 58, when, he says, he became overwhelmed by the computers at work. Ecarius is not alone. While ADHD — a condition marked by inattention, hyperactivity and impulsivity — is one of the most common brain disorders in children, it also occurs in approximately one in 20 adults, according to a 2006 study. A 2012 study based on interviews with almost 1,500 people by researchers in the Netherlands found that 2.8 percent of adults older than 60 have ADHD, with 4.2 percent of people in that age group reporting several ADHD symptoms and some impairment. But just being forgetful or scatterbrained doesn’t mean you have ADHD. Of course, many people, especially those older than 60, have these problems, but they could be a sign of something else — or nothing at all. © 1996-2013 The Washington Post
Link ID: 19042 - Posted: 12.17.2013
By ALAN SCHWARZ After more than 50 years leading the fight to legitimize attention deficit hyperactivity disorder, Keith Conners could be celebrating. Severely hyperactive and impulsive children, once shunned as bad seeds, are now recognized as having a real neurological problem. Doctors and parents have largely accepted drugs like Adderall and Concerta to temper the traits of classic A.D.H.D., helping youngsters succeed in school and beyond. But Dr. Conners did not feel triumphant this fall as he addressed a group of fellow A.D.H.D. specialists in Washington. He noted that recent data from the Centers for Disease Control and Prevention show that the diagnosis had been made in 15 percent of high school-age children, and that the number of children on medication for the disorder had soared to 3.5 million from 600,000 in 1990. He questioned the rising rates of diagnosis and called them “a national disaster of dangerous proportions.” “The numbers make it look like an epidemic. Well, it’s not. It’s preposterous,” Dr. Conners, a psychologist and professor emeritus at Duke University, said in a subsequent interview. “This is a concoction to justify the giving out of medication at unprecedented and unjustifiable levels.” The rise of A.D.H.D. diagnoses and prescriptions for stimulants over the years coincided with a remarkably successful two-decade campaign by pharmaceutical companies to publicize the syndrome and promote the pills to doctors, educators and parents. With the children’s market booming, the industry is now employing similar marketing techniques as it focuses on adult A.D.H.D., which could become even more profitable. Few dispute that classic A.D.H.D., historically estimated to affect 5 percent of children, is a legitimate disability that impedes success at school, work and personal life. Medication often assuages the severe impulsiveness and inability to concentrate, allowing a person’s underlying drive and intelligence to emerge. 
© 2013 The New York Times Company
Oliver Burkeman As we stumble again into the season of overindulgence – that sacred time of year when wine, carbs and sofas replace brisk walks for all but the most virtuous – a headline in the (excellent) new online science magazine Nautilus catches my eye: "What If Obesity Is Nobody's Fault?" The article describes new research on mice: a genetic alteration, it appears, can make them obese, despite eating no more than others. "Many of us unfortunately have had an attitude towards obese people [as] having a lack of willpower or self-control," one Harvard researcher is quoted as saying. "It's clearly something beyond that." No doubt. But that headline embodies an assumption that's rarely questioned. Suppose, hypothetically, obesity were solely a matter of willpower: laying off the crisps, exercising and generally bucking your ideas up. What makes us so certain that obesity would be the fault of the obese even then? This sounds like the worst kind of bleeding-heart liberalism, a condition from which I probably suffer (I blame my genes). But it's a real philosophical puzzle, with implications reaching far beyond obesity to laziness in all contexts, from politicians' obsession with "hardworking families" to the way people beat themselves up for not following through on their plans. We don't blame people for most physical limitations (if you broke your leg, it wouldn't be a moral failing to cancel your skydiving trip), nor for many other impediments: it's hardly your fault if you're born into educational or economic disadvantage. Yet almost everyone treats laziness and weakness of will as exceptions. If you can't be bothered to try, you've only yourself to blame. It's a rule some apply most harshly to themselves, mounting epic campaigns of self-chastisement for procrastinating, failing to exercise and so on. © 2013 Guardian News and Media Limited
By MAGGIE KOERTH-BAKER More than a decade ago, a 43-year-old woman went to a surgeon for a hysterectomy. She was put under, and everything seemed to be going according to plan, until, for a horrible interval, her anesthesia stopped working. She couldn’t open her eyes or move her fingers. She tried to breathe, but even that most basic reflex didn’t seem to work; a tube was lodged in her throat. She was awake and aware on the operating table, but frozen and unable to tell anyone what was happening. Studies of anesthesia awareness are full of such horror stories, because administering anesthesia is a tightrope walk. Too much can kill. But too little can leave a patient aware of the procedure and unable to communicate that awareness. For every 1,000 people who undergo general anesthesia, there will be one or two who are not as unconscious as they seem — people who remember their doctors talking, and who are aware of the surgeon’s knife, even while their bodies remain catatonic and passive. For the unlucky 0.13 percent for whom anesthesia goes awry, there’s not really a good preventive. That’s because successful anesthetization requires complete unconsciousness, and consciousness isn’t something we can measure. There are tools that anesthesiologists use to get a pretty good idea of how well their drugs are working, but these systems are imperfect. For most patients receiving inhaled anesthesia, they’re no better at spotting awareness than dosing metrics developed half a century ago, says George Mashour, a professor of anesthesiology at the University of Michigan Medical School. There are two intertwined mysteries at work, Mashour told me: First, we don’t totally understand how anesthetics work, at least not on a neurological basis. Second, we really don’t understand consciousness — how the brain creates it, or even what, exactly, it is. © 2013 The New York Times Company
By Graham Lawton Patricia Churchland, a neurophilosopher at the University of California at San Diego, says our hopes, loves and very existence are just elaborate functions of a complicated mass of grey tissue. Accepting that can be hard, but what we know should inspire us, not scare us. Her most recent book is Touching a Nerve: The Self as Brain. Graham Lawton: You compare revelations in neuroscience with the discoveries that the Earth goes around the sun and that the heart is a pump. What do you think these ideas have in common? Patricia Churchland: They challenge a whole framework of assumptions about the way things are. For Christians, it was very important that the Earth was at the center of the universe. Similarly, many people believed that the heart was somehow what made us human. And it turned out it was just a pump made of meat. I think the same is true about realizing that when we're conscious, when we make decisions, when we go to sleep, when we get angry, when we're fearful, these are just functions of the physical brain. Coming to terms with the neural basis of who we are can be very unnerving. It has been called "neuroexistentialism," which really captures the essence of it. We're not in the habit of thinking about ourselves that way. GL: Why is it so difficult for us to see the reality of what we actually are? PC: Part of the answer has to do with the evolution of nervous systems. Is there any reason for a brain to know about itself? We can get along without knowing, just as we can get along without knowing that the liver is in there filtering out toxins. The wonderful thing, of course, is that science allows us to know. © 2013 The Slate Group, LLC.
Link ID: 19024 - Posted: 12.11.2013
by Bob Holmes Perseverance in the face of adversity is an admirable character trait – now it turns out you can conjure it up with a quick zap to a tiny spot in the brain. The discovery in two people with epilepsy was accidental but it is the first to show that simple brain stimulation can create rich, complex alterations of consciousness. Josef Parvizi, a neurologist at Stanford University in California, and his colleagues had implanted electrodes in the brains of two people with epilepsy to help identify the source of their seizures. In the course of their work, they noticed that an odd thing happened when they stimulated a region in the anterior midcingulate cortex – a part of the limbic system involved in emotion processing, learning and memory. Both patients reported feeling a sense of foreboding, coupled with a determination to overcome whatever challenge they were about to face. During the stimulation, one patient reported feeling "worried that something bad is going to happen" but also noted that "it made me stronger". The other said he felt as if he were figuring out how to get through something. He likened it to driving your car when one of the tires bursts. You're only halfway to your destination and you have no option but to keep going forward. "You're like… am I gonna get through this?" he said. He also reported a sense of urgency: "It was more of a positive thing like… push harder, push harder, push harder to try and get through this." In contrast, when the researchers applied a sham stimulation – going through exactly the same procedure, but with the current set to zero – neither volunteer reported feeling any specific sensations. Stimulation of other nearby regions of the brain less than 5 millimetres away also failed to produce the feelings of either foreboding or perseverance. © Copyright Reed Business Information Ltd.
By PAUL BLOOM In 1780, Immanuel Kant wrote that “sexual love makes of the loved person an Object of appetite.” And after that appetite is sated? The loved one, Kant explained, “is cast aside as one casts away a lemon which has been sucked dry.” Many contemporary feminists agree that sexual desire, particularly when elicited by pornographic images, can lead to “objectification.” The objectifier (typically a man) thinks of the target of his desire (typically a woman) as a mere thing, lacking autonomy, individuality and subjective experience. This idea has some laboratory support. Studies have found that viewing people’s bodies, as opposed to their faces, makes us judge those people as less intelligent, less ambitious, less competent and less likable. One neuroimaging experiment found that, for men, viewing pictures of sexualized women induced lowered activity in brain regions associated with thinking about other people’s minds. The objectification thesis also sits well with another idea that many psychologists, including myself, have defended, which is that we are all common-sense dualists. Even if you are a staunch science-minded atheist, in everyday life you still think of people as immaterial conscious beings — we inhabit fleshy bodies, but we are not ourselves physical. To see someone as a body is in opposition to thinking of her as a mind, then, and hence a heightened focus on someone’s body tends to strip away her personhood. But this analysis is too simple. It’s not literally true that women in pornography are thought of as inanimate and unfeeling objects; if they were, then they would just as effectively be depicted as unconscious or unresponsive, as opposed to (as is more often the case) aroused and compliant. Also, as the philosophers Martha Nussbaum and Leslie Green have pointed out, being treated as an object isn’t necessarily a bad thing. 
Imagine that you are sitting outside on a sunny day, and you move behind someone so that she blocks the sun from your eyes. You have used her as an object, but it’s hard to see that you’ve done something wrong. © 2013 The New York Times Company
by Bethany Brookshire Most people take it as a given that distraction is bad for — oh, hey, a squirrel! Where was I? … Right. Most people take it as a given that distraction is bad for memory. And most of the time, it is. But under certain conditions, the right kind of distraction might actually help you remember. Nathan Cashdollar of University College London and colleagues were looking at the effects of distraction on memory in memory-impaired patients. They were specifically looking at distractions that were totally off-topic from a particular task, and how those distractions affected memory performance. Their results were published November 27 in the Journal of Neuroscience. The researchers worked with a small group of people with severe epilepsy who had lesions in the hippocampus, and therefore had memory problems. They compared them to groups of people with epilepsy without lesions, young healthy people, and older healthy people that were matched to the epilepsy group. Each of the participants went through a memory task called “delayed match-to-sample.” For this task, participants are given a set of samples or pictures, usually things like nature scenes. Then there’s a delay, from one second at the beginning of the test on up to nearly a minute. Then participants are shown another nature scene. Is it one they have seen before? Yes or no? The task starts out simply, with only one nature scene to match, but soon becomes harder, with up to five pictures to remember, and a five-second delay. People with memory impairments did a lot worse when they had more items to remember (called high cognitive load), falling off very steeply in their performance. Normal controls did better, still remaining fairly accurate, but making mistakes once in a while. © Society for Science & the Public 2000 - 2013.
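The delayed match-to-sample procedure described above can be sketched as a small simulation. This is a minimal sketch under stated assumptions: the scene labels, load levels, and delay values below are illustrative placeholders, not the actual stimuli or parameters used by Cashdollar and colleagues.

```python
import random

# Hypothetical stimulus pool standing in for the nature-scene photos.
SCENES = [f"scene_{i}" for i in range(20)]

def run_trial(cognitive_load, delay_seconds, rng):
    """Simulate one delayed match-to-sample trial.

    Presents `cognitive_load` sample scenes (the article describes loads
    from one up to five pictures), waits `delay_seconds` (from ~1 second
    up to nearly a minute in the study), then shows a probe scene. The
    participant must answer: was the probe among the samples?
    """
    samples = rng.sample(SCENES, cognitive_load)
    is_match = rng.random() < 0.5  # half the probes are matches
    if is_match:
        probe = rng.choice(samples)
    else:
        probe = rng.choice([s for s in SCENES if s not in samples])
    # The delay is carried along as trial metadata; a real experiment
    # would pause here (and could present an off-topic distractor).
    return {"samples": samples, "delay": delay_seconds,
            "probe": probe, "is_match": is_match}

rng = random.Random(42)
trial = run_trial(cognitive_load=5, delay_seconds=5, rng=rng)
print(trial["probe"], trial["is_match"])
```

A higher `cognitive_load` corresponds to the "harder" trials on which the memory-impaired patients' performance fell off steeply.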
By Emilie Reas Did you make it to work on time this morning? Go ahead and thank the traffic gods, but also take a moment to thank your brain. The brain’s impressively accurate internal clock allows us to detect the passage of time, a skill essential for many critical daily functions. Without the ability to track elapsed time, our morning shower could continue indefinitely. Without that nagging feeling to remind us we’ve been driving too long, we might easily miss our exit. But how does the brain generate this finely tuned mental clock? Neuroscientists believe that we have distinct neural systems for processing different types of time, for example, to maintain a circadian rhythm, to control the timing of fine body movements, and for conscious awareness of time passage. Until recently, most neuroscientists believed that this latter type of temporal processing – the kind that alerts you when you’ve lingered over breakfast for too long – is supported by a single brain system. However, emerging research indicates that the model of a single neural clock might be too simplistic. A new study, recently published in the Journal of Neuroscience by neuroscientists at the University of California, Irvine, reveals that the brain may in fact have a second method for sensing elapsed time. What’s more, the authors propose that this second internal clock not only works in parallel with our primary neural clock, but may even compete with it. Past research suggested that a brain region called the striatum lies at the heart of our central inner clock, working with the brain’s surrounding cortex to integrate temporal information. For example, the striatum becomes active when people pay attention to how much time has passed, and individuals with Parkinson’s Disease, a neurodegenerative disorder that disrupts input to the striatum, have trouble telling time. © 2013 Scientific American
Ed Yong A large international group set up to test the reliability of psychology experiments has successfully reproduced the results of 10 out of 13 past experiments. The consortium also found that two effects could not be reproduced. Psychology has been buffeted in recent years by mounting concern over the reliability of its results, after repeated failures to replicate classic studies. A failure to replicate could mean that the original study was flawed, the new experiment was poorly done or the effect under scrutiny varies between settings or groups of people. To tackle this 'replicability crisis', 36 research groups formed the Many Labs Replication Project to repeat 13 psychological studies. The consortium combined tests from earlier experiments into a single questionnaire — meant to take 15 minutes to complete — and delivered it to 6,344 volunteers from 12 countries. The team chose a mix of effects that represent the diversity of psychological science, from classic experiments that have been repeatedly replicated to contemporary ones that have not. Ten of the effects were consistently replicated across different samples. These included classic results from economics Nobel laureate and psychologist Daniel Kahneman at Princeton University in New Jersey, such as gain-versus-loss framing, in which people are more prepared to take risks to avoid losses, rather than make gains; and anchoring, an effect in which the first piece of information a person receives can introduce bias to later decisions. The team even showed that anchoring is substantially more powerful than Kahneman’s original study suggested. © 2013 Nature Publishing Group
Link ID: 18974 - Posted: 11.26.2013
By James Gallagher Health and science reporter, BBC News Steroids given to help premature babies develop may also be slightly increasing the risk of mental health disorders, say researchers. The drugs are often given to pregnant mothers at risk of a premature birth to help the baby's lungs prepare for life outside the womb. The study, in the journal PLoS One, showed there was a higher risk of attention disorders at age eight. The charity Bliss said it reinforced the need for regular health checks. Being born too soon can lead to long-term health problems and the earlier the birth the greater the problems. One immediate issue is the baby's lungs being unprepared to breathe air. Steroids can help accelerate lung development. However, the study by researchers at Imperial College London and the University of Oulu in Finland showed the drugs may also be affecting the developing brain. They compared what happened to 37 premature children whose mother was injected with steroids with 185 premature children, of the same weight and gestational age, who were not exposed to the extra dose of steroid. When the children were followed to the age of eight, there was a higher incidence of attention deficit hyperactivity disorder. No difference could be detected at age 16, but this may have been due to the small size of the study. BBC © 2013
By Neuroskeptic I am sitting reading a book. After a while, I get up and make a cup of coffee. I’ve been thinking about this scenario lately as I’ve pondered ‘what remains to be discovered’ in our understanding of the brain. By this I mean, what (if anything) prevents neuroscience from at least sketching out an explanation for all of human behaviour? A complete explanation of any given behaviour – such as my reading a particular book – would be impossible, as it would require detailed knowledge of all my brain activity. But neuroscience could sketch an account of some stages of the reading. We have models for how my motor cortex and cerebellum might coordinate my fingers to turn the pages of my book. Other models try to make sense of the recognition of the letters by my visual cortex. This is what I mean by ‘beginning to account for’. We have theories that are not wholly speculative. While we don’t yet have the whole story of motor control or visual perception, we have made a start. Yet I’m not sure that we can even begin to explain: why did I stop what I was doing, get up, and make coffee at that particular time? The puzzle, it seems, does not lie in my actual choice to make some coffee (as opposed to not making it). We could sketch an explanation for how, once the mental image (memory) of coffee ‘crossed my mind’, that image set off dopamine firing (i.e. I like coffee), and this dopamine, acting on corticostriatal circuits, selected the action of making coffee over the less promising alternatives. But why did that mental image of coffee cross my mind in the first place? And why did it do so just then, not thirty seconds before or afterwards?
Link ID: 18957 - Posted: 11.23.2013
By Gary Stix The emerging academic discipline of neuroethics has been driven, in part, by the recognition that introducing brain scans as legal evidence is fraught with peril. Most neuroscientists think that a brain scan is unable to provide an accurate representation of the state of mind of a defendant or determine whether his frontal lobes predispose to some wanton action. The consensus view holds that studying spots on the wrinkled cerebral cortex that are bigger or smaller in some criminal offenders may hint at overarching insights into the roots of violence, but lack the requisite specificity to be used as evidence in any individual case. “I believe that our behavior is a production of activity in our brain circuits,” Steven E. Hyman of the Broad Institute of Harvard and MIT told a session at the American Association for the Advancement of Science’s annual meeting earlier this year. “But I would never tell a parole board to decide whether to release somebody or hold on to somebody, based on their brain scan as an individual, because I can’t tell what are the causal factors in that individual.” It doesn’t seem to really matter, though, what academic experts believe about the advisability of brain scans as Exhibit One at trial. The entry of neuroscience in the courtroom has already begun, big time. The introduction of a brain scan in a legal case was once enough to generate local headlines. No more. Hundreds of legal opinions each year have begun to invoke the science of mind and brain to bolster legal arguments—references not only to brain scans, but a range of studies that show that the amygdala is implicated in this or the anterior cingulate cortex is at fault for that. The legal establishment, in short, has begun a love affair with all things brain. © 2013 Scientific American
by Anil Ananthaswamy Can you tickle yourself if you are fooled into thinking that someone else is tickling you? A new experiment says no, challenging a widely accepted theory about how our brains work. It is well known that we can't tickle ourselves. In 2000, Sarah-Jayne Blakemore of University College London (UCL) and colleagues came up with a possible explanation. When we intend to move, the brain sends commands to the muscles, but also predicts the sensory consequences of the impending movement. When the prediction matches the actual sensations that arise, the brain dampens down its response to those sensations. This prevents us from tickling ourselves (NeuroReport, DOI: 10.1097/00001756-200008030-00002). Jakob Hohwy of Monash University in Clayton, Australia, and colleagues decided to do a tickle test while simultaneously subjecting people to a body swap illusion. In this illusion, the volunteer and experimenter sat facing each other. The subject wore goggles that displayed the feed from a head-mounted camera. In some cases the camera was mounted on the subject's head, so that they saw things from their own perspective, while in others it was mounted on the experimenter's head, providing the subject with the experimenter's perspective. Using their right hands, both the subject and the experimenter held on to opposite ends of a wooden rod, which had a piece of foam attached to each end. The subject and experimenter placed their left palms against the foam at their end. Next, the subject or the experimenter took turns to move the rod with their right hand, causing the piece of foam to tickle both of their left palms. © Copyright Reed Business Information Ltd.
Link ID: 18954 - Posted: 11.21.2013
by Laura Sanders SAN DIEGO — Teenagers’ brains are wired to confront a threat instead of retreating, research presented November 10 at the annual Society for Neuroscience meeting suggests. The results may help explain why criminal activity peaks during adolescence. Kristina Caudle of Weill Cornell Medical College in New York City and colleagues tested the impulse control of 83 people between ages 6 and 29. In the experiment, participants were asked to press a button when a photo of a happy face quickly flashed before them. They were told not to press the button when a face had a threatening expression. When confronted with the threatening faces, people between the ages of 13 and 17 were more likely to impulsively push the button than children and adults were, the team found. Brain scans revealed that activity in an area called the orbital frontal cortex peaked in teens when they successfully avoided pushing the button, suggesting that this region curbs the impulse to react, Caudle said. It’s not clear why children don’t have the same impulsive reaction to threatening faces. More studies could determine how the relevant brain systems grow and change, Caudle said. © Society for Science & the Public 2000 - 2013.
SAN DIEGO, CALIFORNIA—Why do teens—especially adolescent males—commit crimes more frequently than adults? One explanation may be that as a group, teenagers react more impulsively to threatening situations than do children or adults, likely because their brains have to work harder to rein in their behavior, a research team reported here yesterday at the Society for Neuroscience meeting. Whether it's driving too fast on a slick road or experimenting with drugs, teenagers have a reputation for courting danger that is often attributed to immaturity or poor decision-making. If immaturity or lack of judgment were the only problem, however, one would expect that children, whose brains are at an even earlier stage of development, would have an equal or greater penchant for risk-taking, says Kristina Caudle, a neuroscientist at the Weill Cornell Medical College in New York City who led the study. But younger children tend to be more cautious than teenagers, suggesting that there is something unique about adolescent brain development that lures them to danger, she says. It's hard to generalize about teenage impulsivity, because some adolescents clearly have more self-control than many adults, says principal investigator B. J. Casey, a neuroscientist. Still, a growing body of evidence suggests that, in general, teens specifically struggle to keep their cool in social situations, she says. Because many crimes committed during adolescence involve emotionally fraught social situations, such as conflict, Caudle and colleagues decided to test whether teens perform badly on a common impulsivity task when faced with social cues of threat. They recruited 83 people, ranging in age from 6 to 29, to perform a simple "Go/No-Go" task, in which they watched a series of faces making neutral or threatening facial expressions flicker past on a computer screen. Each time the participants saw a neutral face, they were instructed to hit a button.
They were also told to hold back from pressing the button when they saw a threatening face. As the participants performed the task, the researchers monitored their brain activity with functional magnetic resonance imaging. © 2013 American Association for the Advancement of Science.
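The Go/No-Go logic described above reduces to two numbers per participant: how often they correctly respond to "go" faces, and how often they fail to withhold a response to "no-go" faces. The sketch below is a hypothetical scoring routine with made-up trial data; the real study used timed photo presentations with concurrent fMRI, and the labels and rates here are assumptions for illustration.

```python
import random

def score_go_nogo(trials):
    """Score a list of (stimulus, pressed) pairs.

    `stimulus` is 'neutral' (go: press the button) or 'threat'
    (no-go: withhold). Returns (hit_rate, false_alarm_rate); a high
    false-alarm rate on threat faces is the impulsivity measure.
    """
    go = [pressed for stim, pressed in trials if stim == "neutral"]
    nogo = [pressed for stim, pressed in trials if stim == "threat"]
    hit_rate = sum(go) / len(go)
    false_alarm_rate = sum(nogo) / len(nogo)
    return hit_rate, false_alarm_rate

# Simulate a participant who always responds to go faces but fails to
# withhold on roughly a quarter of no-go faces (an assumed rate).
rng = random.Random(1)
trials = []
for _ in range(100):
    stim = rng.choice(["neutral", "threat"])
    pressed = True if stim == "neutral" else (rng.random() < 0.25)
    trials.append((stim, pressed))

hit_rate, fa_rate = score_go_nogo(trials)
print(hit_rate, fa_rate)
```

On this scoring, the teenagers' impulsivity would show up as a higher false-alarm rate on threat faces than children's or adults'.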
By Victoria Stern A trolley is hurtling down a track, and if nobody intervenes it will hit and kill five people. Psychologists use variations on this hypothetical situation to gauge people's gut reactions about morality. Here are three scenarios: The driver could switch the train to another track, on which one man stands. Should the driver reroute the trolley? Now suppose the trolley is driverless and you are a bystander. Should you hit a switch to divert the trolley so it hits the lone man? You are standing above the tracks on a bridge. You could stop the trolley and save the five people by pushing a large man to his death in front of the trolley. Would you push him? Most people say that the driver should reroute the train and that they would reroute the train with the switch but that they would not push the man to his death. This typical decision is associated with increased activity in the medial prefrontal cortex (green), which indicates a strong negative emotional reaction, as well as activity in the amygdala (red), which is involved in processing emotions and stressful events. © 2013 Scientific American
By Daisy Grewal How good are you at multitasking? The way you answer that question may tell you more than you think. According to recent research, the better people think they are at multitasking, the worse they actually are at it. And the more that you think you are good at it, the more likely you are to multitask when driving. Maybe the problem of distracted driving has less to do with the widespread use of smartphones and more to do with our inability to recognize our own limits. A study by David Sanbonmatsu and his colleagues looked at the relationship between people’s beliefs about their own multitasking ability and their likelihood of using a cell phone when driving. Importantly, the study also measured people’s actual multitasking abilities. The researchers found that people who thought they were good at multitasking were actually the worst at it. They were also the most likely to report frequently using their cell phones when driving. This may help explain why warning people about the dangers of cell phone use when driving hasn’t done much to curb the behavior. The study is another reminder that we are surprisingly poor judges of our own abilities. Research has found that people overestimate their own qualities in a number of areas including intelligence, physical health, and popularity. Furthermore, the worse we are at something, the more likely we may be to judge ourselves as competent at it. Psychologists David Dunning and Justin Kruger have studied how incompetence, ironically, is often the result of not being able to accurately judge one’s own incompetence. In one study, they found that people who scored the lowest on tests of grammar and logic were the most likely to overestimate their own abilities. The reverse was also true: the most competent people were the most likely to underestimate their abilities. And multitasking may be yet another area where incompetence breeds overconfidence. © 2013 Scientific American
Link ID: 18880 - Posted: 11.06.2013