Chapter 14. Attention and Consciousness
By Maggie Koerth-Baker

More than a decade ago, a 43-year-old woman went to a surgeon for a hysterectomy. She was put under, and everything seemed to be going according to plan, until, for a horrible interval, her anesthesia stopped working. She couldn't open her eyes or move her fingers. She tried to breathe, but even that most basic reflex didn't seem to work; a tube was lodged in her throat. She was awake and aware on the operating table, but frozen and unable to tell anyone what was happening.

Studies of anesthesia awareness are full of such horror stories, because administering anesthesia is a tightrope walk. Too much can kill. But too little can leave a patient aware of the procedure and unable to communicate that awareness. For every 1,000 people who undergo general anesthesia, there will be one or two who are not as unconscious as they seem — people who remember their doctors talking, and who are aware of the surgeon's knife, even while their bodies remain catatonic and passive.

For the unlucky 0.13 percent for whom anesthesia goes awry, there's not really a good preventive. That's because successful anesthetization requires complete unconsciousness, and consciousness isn't something we can measure. There are tools that anesthesiologists use to get a pretty good idea of how well their drugs are working, but these systems are imperfect. For most patients receiving inhaled anesthesia, they're no better at spotting awareness than dosing metrics developed half a century ago, says George Mashour, a professor of anesthesiology at the University of Michigan Medical School.

There are two intertwined mysteries at work, Mashour told me: First, we don't totally understand how anesthetics work, at least not on a neurological basis. Second, we really don't understand consciousness — how the brain creates it, or even what, exactly, it is. © 2013 The New York Times Company
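The two incidence figures quoted in the passage (0.13 percent, and "one or two" per 1,000) are mutually consistent; a quick back-of-the-envelope check, purely illustrative:

```python
# The article quotes an awareness rate of 0.13 percent and, separately,
# "one or two" cases per 1,000 patients. Verify that the two figures agree.
rate = 0.13 / 100            # 0.13 percent expressed as a fraction
per_thousand = rate * 1000   # expected cases per 1,000 patients
print(per_thousand)          # about 1.3, i.e. "one or two" per 1,000
```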
By Graham Lawton

Patricia Churchland, a neurophilosopher at the University of California at San Diego, says our hopes, loves and very existence are just elaborate functions of a complicated mass of grey tissue. Accepting that can be hard, but what we know should inspire us, not scare us. Her most recent book is Touching a Nerve: The Self as Brain.

Graham Lawton: You compare revelations in neuroscience with the discoveries that the Earth goes around the sun and that the heart is a pump. What do you think these ideas have in common?

Patricia Churchland: They challenge a whole framework of assumptions about the way things are. For Christians, it was very important that the Earth was at the center of the universe. Similarly, many people believed that the heart was somehow what made us human. And it turned out it was just a pump made of meat. I think the same is true about realizing that when we're conscious, when we make decisions, when we go to sleep, when we get angry, when we're fearful, these are just functions of the physical brain. Coming to terms with the neural basis of who we are can be very unnerving. It has been called "neuroexistentialism," which really captures the essence of it. We're not in the habit of thinking about ourselves that way.

GL: Why is it so difficult for us to see the reality of what we actually are?

PC: Part of the answer has to do with the evolution of nervous systems. Is there any reason for a brain to know about itself? We can get along without knowing, just as we can get along without knowing that the liver is in there filtering out toxins. The wonderful thing, of course, is that science allows us to know. © 2013 The Slate Group, LLC.
by Bob Holmes

Perseverance in the face of adversity is an admirable character trait – now it turns out you can conjure it up with a quick zap to a tiny spot in the brain. The discovery in two people with epilepsy was accidental, but it is the first to show that simple brain stimulation can create rich, complex alterations of consciousness.

Josef Parvizi, a neurologist at Stanford University in California, and his colleagues had implanted electrodes in the brains of two people with epilepsy to help identify the source of their seizures. In the course of their work, they noticed that an odd thing happened when they stimulated a region in the anterior midcingulate cortex – a part of the limbic system involved in emotion processing, learning and memory. Both patients reported feeling a sense of foreboding, coupled with a determination to overcome whatever challenge they were about to face.

During the stimulation, one patient reported feeling "worried that something bad is going to happen" but also noted that "it made me stronger". The other said he felt as if he were figuring out how to get through something. He likened it to driving your car when one of the tires bursts: you're only halfway to your destination and you have no option but to keep going forward. "You're like… am I gonna get through this?" he said. He also reported a sense of urgency: "It was more of a positive thing like… push harder, push harder, push harder to try and get through this."

One singular sensation

In contrast, when the researchers applied a sham stimulation – going through exactly the same procedure, but with the current set to zero – neither volunteer reported feeling any specific sensations. Stimulation of other nearby regions of the brain, less than 5 millimetres away, also failed to produce the feelings of either foreboding or perseverance. © Copyright Reed Business Information Ltd.
By Paul Bloom

In 1780, Immanuel Kant wrote that "sexual love makes of the loved person an Object of appetite." And after that appetite is sated? The loved one, Kant explained, "is cast aside as one casts away a lemon which has been sucked dry."

Many contemporary feminists agree that sexual desire, particularly when elicited by pornographic images, can lead to "objectification." The objectifier (typically a man) thinks of the target of his desire (typically a woman) as a mere thing, lacking autonomy, individuality and subjective experience. This idea has some laboratory support. Studies have found that viewing people's bodies, as opposed to their faces, makes us judge those people as less intelligent, less ambitious, less competent and less likable. One neuroimaging experiment found that, for men, viewing pictures of sexualized women reduced activity in brain regions associated with thinking about other people's minds.

The objectification thesis also sits well with another idea that many psychologists, including myself, have defended: that we are all common-sense dualists. Even if you are a staunch science-minded atheist, in everyday life you still think of people as immaterial conscious beings — we inhabit fleshy bodies, but we are not ourselves physical. Seeing someone as a body is thus in opposition to thinking of her as a mind, and hence a heightened focus on someone's body tends to strip away her personhood.

But this analysis is too simple. It's not literally true that women in pornography are thought of as inanimate and unfeeling objects; if they were, it would be just as effective to depict them as unconscious or unresponsive, rather than (as is more often the case) aroused and compliant. Also, as the philosophers Martha Nussbaum and Leslie Green have pointed out, being treated as an object isn't necessarily a bad thing.
Imagine that you are sitting outside on a sunny day, and you move behind someone so that she blocks the sun from your eyes. You have used her as an object, but it’s hard to see that you’ve done something wrong. © 2013 The New York Times Company
by Bethany Brookshire

Most people take it as a given that distraction is bad for — oh, hey, a squirrel! Where was I? … Right. Most people take it as a given that distraction is bad for memory. And most of the time, it is. But under certain conditions, the right kind of distraction might actually help you remember.

Nathan Cashdollar of University College London and colleagues were looking at the effects of distraction on memory in memory-impaired patients. They were specifically looking at distractions that were totally off-topic from a particular task, and how those distractions affected memory performance. Their results were published November 27 in the Journal of Neuroscience.

The researchers worked with a small group of people with severe epilepsy who had lesions in the hippocampus and therefore had memory problems. They compared them with groups of people with epilepsy without lesions, young healthy people, and older healthy people who were matched to the epilepsy group.

Each of the participants went through a memory task called "delayed match-to-sample." For this task, participants are given a set of sample pictures, usually things like nature scenes. Then there's a delay, from one second at the beginning of the test on up to nearly a minute. Then participants are shown another nature scene. Is it one they have seen before? Yes or no? The task starts out simply, with only one nature scene to match, but soon becomes harder, with up to five pictures to remember and a five-second delay.

People with memory impairments did a lot worse when they had more items to remember (called high cognitive load), falling off very steeply in their performance. Normal controls did better, still remaining fairly accurate, but making mistakes once in a while. © Society for Science & the Public 2000 - 2013.
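The delayed match-to-sample procedure described above can be sketched as a short simulation. This is a hypothetical illustration with made-up scene labels and parameters, not the researchers' actual task code:

```python
import random

def delayed_match_to_sample_trial(scenes, load, delay_s, rng):
    """Build one trial: pick `load` sample scenes to remember, impose a
    delay, then show a probe that either matches one of the samples
    (answer: yes) or is a new scene (answer: no)."""
    samples = rng.sample(scenes, load)      # scenes the participant must hold in mind
    is_match = rng.random() < 0.5           # half the probes are repeats
    if is_match:
        probe = rng.choice(samples)
    else:
        probe = rng.choice([s for s in scenes if s not in samples])
    return {"samples": samples, "delay_s": delay_s,
            "probe": probe, "correct_answer": is_match}

scenes = ["forest", "beach", "mountain", "desert", "river", "meadow"]
rng = random.Random(0)

# Easy trial: one scene to match, one-second delay.
easy = delayed_match_to_sample_trial(scenes, load=1, delay_s=1, rng=rng)
# Hard trial: five scenes to remember (high cognitive load), five-second delay.
hard = delayed_match_to_sample_trial(scenes, load=5, delay_s=5, rng=rng)
```

The key manipulation from the study is the `load` parameter: memory-impaired participants' accuracy fell off steeply as it grew.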
By Emilie Reas

Did you make it to work on time this morning? Go ahead and thank the traffic gods, but also take a moment to thank your brain. The brain's impressively accurate internal clock allows us to detect the passage of time, a skill essential for many critical daily functions. Without the ability to track elapsed time, our morning shower could continue indefinitely. Without that nagging feeling to remind us we've been driving too long, we might easily miss our exit. But how does the brain generate this finely tuned mental clock?

Neuroscientists believe that we have distinct neural systems for processing different types of time: for example, to maintain a circadian rhythm, to control the timing of fine body movements, and for conscious awareness of time passage. Until recently, most neuroscientists believed that this latter type of temporal processing – the kind that alerts you when you've lingered over breakfast for too long – is supported by a single brain system. However, emerging research indicates that the model of a single neural clock might be too simplistic. A new study, recently published in the Journal of Neuroscience by neuroscientists at the University of California, Irvine, reveals that the brain may in fact have a second method for sensing elapsed time. What's more, the authors propose that this second internal clock not only works in parallel with our primary neural clock, but may even compete with it.

Past research suggested that a brain region called the striatum lies at the heart of our central inner clock, working with the brain's surrounding cortex to integrate temporal information. For example, the striatum becomes active when people pay attention to how much time has passed, and individuals with Parkinson's disease, a neurodegenerative disorder that disrupts input to the striatum, have trouble telling time. © 2013 Scientific American
Ed Yong

A large international group set up to test the reliability of psychology experiments has successfully reproduced the results of 10 out of 13 past experiments. The consortium also found that two effects could not be reproduced.

Psychology has been buffeted in recent years by mounting concern over the reliability of its results, after repeated failures to replicate classic studies. A failure to replicate could mean that the original study was flawed, the new experiment was poorly done, or the effect under scrutiny varies between settings or groups of people. To tackle this 'replicability crisis', 36 research groups formed the Many Labs Replication Project to repeat 13 psychological studies. The consortium combined tests from earlier experiments into a single questionnaire — meant to take 15 minutes to complete — and delivered it to 6,344 volunteers from 12 countries. The team chose a mix of effects that represent the diversity of psychological science, from classic experiments that have been repeatedly replicated to contemporary ones that have not.

Ten of the effects were consistently replicated across different samples. These included classic results from economics Nobel laureate and psychologist Daniel Kahneman at Princeton University in New Jersey, such as gain-versus-loss framing, in which people are more prepared to take risks to avoid losses than to make gains; and anchoring, an effect in which the first piece of information a person receives can bias later decisions. The team even showed that anchoring is substantially more powerful than Kahneman's original study suggested. © 2013 Nature Publishing Group
By James Gallagher, Health and science reporter, BBC News

Steroids given to help premature babies develop may also slightly increase the risk of mental health disorders, say researchers. The drugs are often given to pregnant women at risk of a premature birth to help the baby's lungs prepare for life outside the womb. The study, in the journal PLoS One, showed there was a higher risk of attention disorders at age eight. The charity Bliss said it reinforced the need for regular health checks.

Being born too soon can lead to long-term health problems, and the earlier the birth, the greater the problems. One immediate issue is the baby's lungs being unprepared to breathe air. Steroids can help accelerate lung development. However, the study by researchers at Imperial College London and the University of Oulu in Finland showed the drugs may also affect the developing brain.

They compared what happened to 37 premature children whose mothers were injected with steroids with 185 premature children, of the same weight and gestational age, who were not exposed to the extra dose of steroid. When the children were followed to the age of eight, there was a higher incidence of attention deficit hyperactivity disorder. No difference could be detected at age 16, but this may have been due to the small size of the study. BBC © 2013
By Neuroskeptic

I am sitting reading a book. After a while, I get up and make a cup of coffee. I've been thinking about this scenario lately as I've pondered 'what remains to be discovered' in our understanding of the brain. By this I mean: what (if anything) prevents neuroscience from at least sketching out an explanation for all of human behaviour?

A complete explanation of any given behaviour – such as my reading a particular book – would be impossible, as it would require detailed knowledge of all my brain activity. But neuroscience could sketch an account of some stages of the reading. We have models for how my motor cortex and cerebellum might coordinate my fingers to turn the pages of my book. Other models try to make sense of the recognition of the letters by my visual cortex. This is what I mean by 'beginning to account for': we have theories that are not wholly speculative. While we don't yet have the whole story of motor control or visual perception, we have made a start.

Yet I'm not sure that we can even begin to explain: why did I stop what I was doing, get up, and make coffee at that particular time? The puzzle, it seems, does not lie in my actual choice to make some coffee (as opposed to not making it). We could sketch an explanation for how, once the mental image (memory) of coffee 'crossed my mind', that image set off dopamine firing (i.e. I like coffee), and this dopamine, acting on corticostriatal circuits, selected the action of making coffee over the less promising alternatives. But why did that mental image of coffee cross my mind in the first place? And why did it do so just then, not thirty seconds before or afterwards?
By Gary Stix

The emerging academic discipline of neuroethics has been driven, in part, by the recognition that introducing brain scans as legal evidence is fraught with peril. Most neuroscientists think that a brain scan is unable to provide an accurate representation of the state of mind of a defendant or to determine whether his frontal lobes predispose him to some wanton action. The consensus view holds that studying spots on the wrinkled cerebral cortex that are bigger or smaller in some criminal offenders may hint at overarching insights into the roots of violence, but lacks the requisite specificity to be used as evidence in any individual case.

"I believe that our behavior is a production of activity in our brain circuits," Steven E. Hyman of the Broad Institute of Harvard and MIT told a session at the American Association for the Advancement of Science's annual meeting earlier this year. "But I would never tell a parole board to decide whether to release somebody or hold on to somebody, based on their brain scan as an individual, because I can't tell what are the causal factors in that individual."

It doesn't seem to matter much, though, what academic experts believe about the advisability of brain scans as Exhibit One at trial. The entry of neuroscience into the courtroom has already begun, big time. The introduction of a brain scan in a legal case was once enough to generate local headlines. No more. Hundreds of legal opinions each year have begun to invoke the science of mind and brain to bolster legal arguments — references not only to brain scans, but to a range of studies showing that the amygdala is implicated in this or that the anterior cingulate cortex is at fault for that. The legal establishment, in short, has begun a love affair with all things brain. © 2013 Scientific American
by Anil Ananthaswamy

Can you tickle yourself if you are fooled into thinking that someone else is tickling you? A new experiment says no, challenging a widely accepted theory about how our brains work.

It is well known that we can't tickle ourselves. In 2000, Sarah-Jayne Blakemore of University College London (UCL) and colleagues came up with a possible explanation. When we intend to move, the brain sends commands to the muscles, but also predicts the sensory consequences of the impending movement. When the prediction matches the actual sensations that arise, the brain dampens down its response to those sensations. This prevents us from tickling ourselves (NeuroReport, DOI: 10.1097/00001756-200008030-00002).

Jakob Hohwy of Monash University in Clayton, Australia, and colleagues decided to do a tickle test while simultaneously subjecting people to a body-swap illusion. In this illusion, the volunteer and experimenter sat facing each other. The subject wore goggles that displayed the feed from a head-mounted camera. In some cases the camera was mounted on the subject's head, so that they saw things from their own perspective, while in others it was mounted on the experimenter's head, providing the subject with the experimenter's perspective.

Using their right hands, both the subject and the experimenter held on to opposite ends of a wooden rod, which had a piece of foam attached to each end. The subject and experimenter placed their left palms against the foam at their end. Next, the subject and the experimenter took turns to move the rod with their right hand, causing the piece of foam to tickle both of their left palms. © Copyright Reed Business Information Ltd.
by Laura Sanders

SAN DIEGO — Teenagers' brains are wired to confront a threat instead of retreating, research presented November 10 at the annual Society for Neuroscience meeting suggests. The results may help explain why criminal activity peaks during adolescence.

Kristina Caudle of Weill Cornell Medical College in New York City and colleagues tested the impulse control of 83 people between ages 6 and 29. In the experiment, participants were asked to press a button when a photo of a happy face quickly flashed before them. They were told not to press the button when a face had a threatening expression. When confronted with the threatening faces, people between the ages of 13 and 17 were more likely to impulsively push the button than children and adults were, the team found.

Brain scans revealed that activity in an area called the orbital frontal cortex peaked in teens when they successfully avoided pushing the button, suggesting that this region curbs the impulse to react, Caudle said. It's not clear why children don't have the same impulsive reaction to threatening faces. More studies could determine how the relevant brain systems grow and change, Caudle said. © Society for Science & the Public 2000 - 2013.
SAN DIEGO, CALIFORNIA—Why do teens—especially adolescent males—commit crimes more frequently than adults? One explanation may be that as a group, teenagers react more impulsively to threatening situations than do children or adults, likely because their brains have to work harder to rein in their behavior, a research team reported here yesterday at the Society for Neuroscience meeting.

Whether it's driving too fast on a slick road or experimenting with drugs, teenagers have a reputation for courting danger that is often attributed to immaturity or poor decision-making. If immaturity or lack of judgment were the only problem, however, one would expect that children, whose brains are at an even earlier stage of development, would have an equal or greater penchant for risk-taking, says Kristina Caudle, a neuroscientist at the Weill Cornell Medical College in New York City who led the study. But younger children tend to be more cautious than teenagers, suggesting that there is something unique about adolescent brain development that lures them to danger, she says.

It's hard to generalize about teenage impulsivity, because some adolescents clearly have more self-control than many adults, says principal investigator B. J. Casey, a neuroscientist. Still, a growing body of evidence suggests that, in general, teens specifically struggle to keep their cool in social situations, she says. Because many crimes committed during adolescence involve emotionally fraught social situations, such as conflict, Caudle and colleagues decided to test whether teens perform badly on a common impulsivity task when faced with social cues of threat. They recruited 83 people, ranging in age from 6 to 29, to perform a simple "Go/No-Go" task, in which they watched a series of faces making neutral or threatening facial expressions flicker past on a computer screen. Each time the participants saw a neutral face, they were instructed to hit a button.
They were also told to hold back from pressing the button when they saw a threatening face. As the participants performed the task, the researchers monitored their brain activity with functional magnetic resonance imaging. © 2013 American Association for the Advancement of Science.
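The Go/No-Go procedure described above can be sketched as a simple scoring routine. This is a hypothetical illustration with invented trial data, not the study's actual task code; impulsive responding of the kind the researchers measured shows up as a high false-alarm rate on no-go (threat) trials:

```python
def score_go_no_go(faces, responses):
    """Score one Go/No-Go run.

    faces     -- list of "neutral" (go: press the button) or "threat"
                 (no-go: withhold the press)
    responses -- parallel list of booleans: True if the button was pressed

    Returns (hit_rate, false_alarm_rate). A high false-alarm rate means
    the participant impulsively pressed on threatening faces.
    """
    go_presses = [p for f, p in zip(faces, responses) if f == "neutral"]
    no_go_presses = [p for f, p in zip(faces, responses) if f == "threat"]
    hit_rate = sum(go_presses) / len(go_presses)
    false_alarm_rate = sum(no_go_presses) / len(no_go_presses)
    return hit_rate, false_alarm_rate

# Invented example run: five faces, one impulsive press on a threat face.
faces = ["neutral", "threat", "neutral", "threat", "neutral"]
responses = [True, True, True, False, True]
hits, false_alarms = score_go_no_go(faces, responses)
# hits == 1.0 (pressed on every neutral face)
# false_alarms == 0.5 (pressed on one of the two threat faces)
```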
By Victoria Stern

A trolley is hurtling down a track, and if nobody intervenes it will hit and kill five people. Psychologists use variations on this hypothetical situation to gauge people's gut reactions about morality. Here are three scenarios:

1. The driver could switch the train to another track, on which one man stands. Should the driver reroute the trolley?
2. Now suppose the trolley is driverless and you are a bystander. Should you hit a switch to divert the trolley so it hits the lone man?
3. You are standing above the tracks on a bridge. You could stop the trolley and save the five people by pushing a large man to his death in front of the trolley. Would you push him?

Most people say that the driver should reroute the train and that they would reroute the train with the switch, but that they would not push the man to his death. This typical decision is associated with increased activity in the medial prefrontal cortex, which indicates a strong negative emotional reaction, as well as activity in the amygdala, which is involved in processing emotions and stressful events. © 2013 Scientific American
By Daisy Grewal

How good are you at multitasking? The way you answer that question may tell you more than you think. According to recent research, the better people think they are at multitasking, the worse they actually are at it. And the more you think you are good at it, the more likely you are to multitask while driving. Maybe the problem of distracted driving has less to do with the widespread use of smartphones and more to do with our inability to recognize our own limits.

A study by David Sanbonmatsu and his colleagues looked at the relationship between people's beliefs about their own multitasking ability and their likelihood of using a cell phone when driving. Importantly, the study also measured people's actual multitasking abilities. The researchers found that people who thought they were good at multitasking were actually the worst at it. They were also the most likely to report frequently using their cell phones when driving. This may help explain why warning people about the dangers of cell phone use when driving hasn't done much to curb the behavior.

The study is another reminder that we are surprisingly poor judges of our own abilities. Research has found that people overestimate their own qualities in a number of areas, including intelligence, physical health and popularity. Furthermore, the worse we are at something, the more likely we may be to judge ourselves as competent at it. Psychologists David Dunning and Justin Kruger have studied how incompetence, ironically, is often the result of not being able to accurately judge one's own incompetence. In one study, they found that people who scored the lowest on tests of grammar and logic were the most likely to overestimate their own abilities. The reverse was also true: the most competent people were the most likely to underestimate their abilities. And multitasking may be yet another area where incompetence breeds overconfidence. © 2013 Scientific American
By Dan Hurley

This couldn't possibly be a good idea. On Friday the 13th of September, in an old brick building on 13th Street in Boston's Charlestown neighborhood, a pair of electrodes was attached to my forehead, one over my brain's left prefrontal cortex, the other just above my right eye socket. I was about to undergo transcranial direct-current stimulation, or tDCS, an experimental technique for delivering extremely low-dose electrical stimulation to the brain.

Using less than 1 percent of the electrical energy necessary for electroconvulsive therapy, and powered by an ordinary nine-volt battery, tDCS has been shown in hundreds of studies to enhance an astonishing, seemingly implausible variety of intellectual, emotional and movement-related brain functions. And its side effects appear limited to a mild tingling at the site of the electrode, sometimes a slight reddening of the skin, very rarely a headache and certainly no seizures or memory loss. Still, I felt more than a bit apprehensive as I prepared to find out if a little bit of juice could amp up my cognitive reserves and make me, in a word, smarter.

With the electrodes in place, J. León Morales-Quezada, a senior research associate at Harvard's Laboratory of Neuromodulation, pressed a button on his computer and I felt . . . absolutely nothing. No pain. No tingling. Not even a little muscle twitching. "Is it on?" I asked. Morales-Quezada assured me it was. For proof, he pointed to a flat-screen on the wall displaying signals from six electroencephalogram (EEG) monitors also attached to my head.

After 10 minutes of charging my brain, he turned on a computerized exercise I was supposed to practice while the current continued flowing. Called an attention-switching task, it's used by psychologists as a measure of "executive function" or "cognitive control": the ability to overrule your urges, to ignore distractions and to quickly shift your focus.
Young adults generally do better than older people; people with greater overall cognitive abilities generally perform better than those with less. © 2013 The New York Times Company
Why do some people feel as though one of their body parts is not truly part of them and go to extreme lengths to get rid of it? Paul D. McGeoch answers:

Certain people hold a deep desire to amputate a healthy limb. They are not psychotic, and they fully realize that what they want is abnormal. Nevertheless, they have felt from childhood that the presence of a specific limb, usually a leg, somehow makes their body "overcomplete." Ultimately, many will achieve their desired amputation through self-inflicted damage or surgery.

During the past few years my work with neuroscientists Vilayanur S. Ramachandran of U.C.S.D. and David Brang of Northwestern University, along with research by neuroscientist Peter Brugger of University Hospital Zurich in Switzerland, has transformed our understanding of this condition. Our findings suggest that a dysfunction of specific areas on the right side of the brain, which are involved in generating our body image, may explain the desire.

Bizarre disorders of body image have long been known to arise after a stroke or other incident inflicts damage to the right side of the brain, particularly in the parietal lobe. The right posterior parietal cortex seems to combine several incoming streams of information—touch, joint position sense, vision and balance—to form a dynamic body image that changes as we interact with the world around us. In brain scans, we have found this exact part of the right parietal lobe to activate abnormally in individuals desiring limb removal.

Because the primary sensory areas of the brain still function normally, sufferers are able to see and feel the limb in question. Yet they do not experience it as part of their body, because the right posterior parietal lobe fails to adequately represent it. The mismatch between a person's actual physical body and his or her body image seems to cause ongoing arousal in the sympathetic nervous system, which may intensify the desire to remove the limb.
Given that sufferers date these feelings to childhood, the right parietal dysfunction most likely is congenital or arises in early development. © 2013 Scientific American
By Lindsey Konkel and Environmental Health News

Insecticides commonly used in households may be associated with behavior problems in children, according to a new study by researchers in Quebec. The study is one of the first to investigate potential human health effects of pyrethroids, which are used in more than 3,500 commercial products, including flea bombs and roach sprays. The findings raise questions about the safety of the compounds, which have replaced other insecticides with known risks to children's brain development. Exposure to pyrethroids, which kill insects by interfering with their nervous systems, is widespread because they are used inside homes and schools, in municipal mosquito control and on farms.

In the study, the urine of 779 Canadian children between the ages of 6 and 11 was tested, and their parents answered questions about each child's behavior. Ninety-seven percent of the children had traces of pyrethroid breakdown products in their urine, and 91 percent had traces of organophosphates, another class of pesticides. A 10-fold increase in urinary levels of one pyrethroid breakdown product, cis-DCCA, was associated with a doubling in the odds of a child scoring high for parent-reported behavioral problems, such as inattention and hyperactivity. Another breakdown product, trans-DCCA, was also associated with more behavior problems, although the association was not statistically significant, meaning the finding could be due to chance. These breakdown products, trans- and cis-DCCA, are specific to certain pyrethroids, namely permethrin, cypermethrin and cyfluthrin. © 2013 Scientific American
By Daisy Yuhas

For more than a century researchers have been trying, and failing, to link perception and intelligence—for instance, do intelligent people see more detail in a scene? Now scientists at the University of Rochester and at Vanderbilt University have demonstrated that high IQ may be best predicted by combining what we perceive and what we cannot.

In two studies in the journal Current Biology, researchers asked 67 people to take IQ tests. They then viewed millisecond-long video clips in which black-and-white stripes moved left or right. The split-second films challenged viewers: the stripes moved within a circular frame that could differ in size, varying from the width of a thumb to that of a fist held at arm's length. After each clip, the viewers guessed whether the bars moved toward the left or right.

The investigators discovered that performance on this test was more strongly correlated with IQ than any other sensory-intelligence link ever explored—but the high-IQ participants were not simply scoring better overall. Individuals with high IQ indeed detected movement accurately within the smallest frame—a finding that suggests, perhaps unsurprisingly, that the ability to rapidly process information contributes to intelligence. More intriguing was the fact that subjects with higher IQ struggled more than other subjects to detect motion in the largest frame. The authors suggest that the brain may perceive large objects as background and subsequently try to ignore their movements. "Suppressing information is a really important thing that the brain does," explains University of Rochester neuroscientist Duje Tadin. He explains that the findings underscore how intelligence requires that we think fast but focus selectively, ignoring distractions. © 2013 Scientific American
By Amanda Mascarelli

When my son was in preschool, I did what many parents of excessively energetic and impulsive preschoolers have surely done: I worried whether his behavior might be a sign of attention-deficit hyperactivity disorder (ADHD). Then I sought input from two pediatricians and a family therapist. The experts thought that his behavior was developmentally normal but said it was still too early to tell for sure. They offered some tips on managing his behavior and creating more structure at home. One pediatrician worked with my son on self-calming techniques such as breathing deeply and pushing on pressure points in his hands. He also suggested an herbal supplement, Valerian Super Calm, for him to take with meals and advised us on dietary adjustments such as increasing my son's intake of fatty acids. Studies have shown that a combination of omega-3 fatty acids (found in foods such as walnuts, flaxseed and salmon) and omega-6 fatty acids (found in food oils such as canola and flax) can reduce hyperactivity and other ADHD symptoms in some children.

In the couple of years since trying these techniques, my son has outgrown most of those worrisome behaviors. I had just about written off the possibility of ADHD until a few weeks ago, when his kindergarten teacher mentioned that she was going to keep an eye on him for possible attention issues. Hearing that left me worried and heavy-hearted. Why is it still so hard to diagnose ADHD? And why is there so much emotional baggage associated with treating it?

There are no firm numbers on how many children in the United States have ADHD. The Centers for Disease Control and Prevention estimates that 9 percent of U.S. children ages 5 to 17 had received diagnoses of ADHD as of 2009. © 1996-2013 The Washington Post