Chapter 18. Attention and Higher Cognition
By Christian Jarrett Most of us like to think that we’re independent-minded — we tell ourselves we like Adele’s latest album because it suits our taste, not because millions of other people bought it, or that we vote Democrat because we’re so enlightened, not because all our friends vote that way. The reality, of course, is that humans are swayed in all sorts of different ways — some of them quite subtle — by other people’s beliefs and expectations. Our preferences don’t form in a vacuum, but rather in something of a social pressure-cooker. This has been demonstrated over and over, perhaps most famously in the classic Asch conformity studies from the ’50s. In those experiments, many participants went along with a blatantly wrong majority judgment about the lengths of different lines — simply, it seems, to fit in. (Although the finding is frequently exaggerated, the basic point about the power of social influence holds true.) But that doesn’t mean all humans are susceptible to peer pressure in the same way. You only have to look at your own friends and family to know that some people always seem to roll with the crowd, while others are much more independent-minded. What accounts for these differences? A new study in Frontiers in Human Neuroscience led by Dr. Juan Dominguez of Monash University in Melbourne, Australia, offers the first hint that part of the answer may come down to certain neural mechanisms. In short, the study suggests that people have a network in their brains that is attuned to disagreement with other people. When this network is activated, it makes us feel uncomfortable (we experience “cognitive dissonance,” to use the psychological jargon), and it’s avoiding this state that motivates us to bring our views into line with everyone else’s. It appears the network is more sensitive in some people than in others, and that this might account for varying degrees of pushover-ness. © 2016, New York Media LLC.
By Meeri Kim Teenagers tend to have a bad reputation in our society, and perhaps rightly so. When compared to children or adults, adolescents are more likely to engage in binge drinking, drug use, unprotected sex, criminal activity, and reckless driving. Risk-taking is like second nature to youth of a certain age, leading health experts to cite preventable and self-inflicted causes as the biggest threats to adolescent well-being in industrialized societies. But before going off on a tirade about groups of reckless young hooligans, consider that a recent study may have revealed a silver lining to all that misbehavior. While adolescents will take more risks in the presence of their peers than when alone, it turns out that peers can also encourage them to learn faster and engage in more exploratory acts. A group of 101 late adolescent males were randomly assigned to play the Iowa Gambling Task, a psychological game used to assess decision making, either alone or observed by their peers. The task involves four decks of cards: two are “lucky” decks that will generate long-term gain if the player continues to draw from them, while the other two are “unlucky” decks that have the opposite effect. The player chooses to play or pass cards drawn from one of these decks, eventually catching on to which of the decks are lucky or unlucky — and subsequently only playing from the lucky ones.
By David Z. Hambrick We all make stupid mistakes from time to time. History is replete with examples. Legend has it that the Trojans accepted the Greeks’ “gift” of a huge wooden horse, which turned out to be hollow and filled with a crack team of Greek commandos. The Tower of Pisa started to lean even before construction was finished—and is not even the world’s farthest-leaning tower. NASA taped over the original recordings of the moon landing, and operatives for Richard Nixon’s re-election committee were caught breaking into a Watergate office, setting in motion the greatest political scandal in U.S. history. More recently, the French government spent $15 billion on a fleet of new trains, only to discover that they were too wide for some 1,300 station platforms. We readily recognize these incidents as stupid mistakes—epic blunders. On a more mundane level, we invest in get-rich-quick schemes, drive too fast, and make posts on social media that we later regret. But what, exactly, drives our perception of these actions as stupid mistakes, as opposed to bad luck? Their seeming mindlessness? The severity of the consequences? The responsibility of the people involved? Science can help us answer these questions. In a study just published in the journal Intelligence, Balazs Aczel and his colleagues used search terms such as “stupid thing to do” to compile a collection of stories describing stupid mistakes from sources such as The Huffington Post and TMZ. One story described a thief who broke into a house, stole a TV, and later returned for the remote; another described burglars who intended to steal cell phones but instead stole GPS tracking devices that were turned on and gave police their exact location. The researchers then had a sample of university students rate each story on the responsibility of the people involved, the influence of the situation, the seriousness of the consequences, and other factors. © 2016 Scientific American
Link ID: 21928 - Posted: 02.24.2016
Alison Abbott. More than 50 years after a controversial psychologist shocked the world with studies revealing people’s willingness to harm others when ordered to do so, a team of cognitive scientists has carried out an updated version of the iconic ‘Milgram experiments’. Their findings may offer some explanation for Stanley Milgram's uncomfortable revelations: when following commands, they say, people genuinely feel less responsibility for their actions — whether they are told to do something evil or benign. “If others can replicate this, then it is giving us a big message,” says neuroethicist Walter Sinnott-Armstrong of Duke University in Durham, North Carolina, who was not involved in the work. “It may be the beginning of an insight into why people can harm others if coerced: they don’t see it as their own action.” The study may feed into a long-running legal debate about the balance of personal responsibility between someone acting under instruction and their instructor, says Patrick Haggard, a cognitive neuroscientist at University College London, who led the work, published on 18 February in Current Biology. Milgram’s original experiments were motivated by the trial of the Nazi Adolf Eichmann, who famously argued that he was ‘just following orders’ when he sent Jews to their deaths. The new findings don’t legitimize harmful actions, Haggard emphasizes, but they do suggest that the ‘only obeying orders’ excuse betrays a deeper truth about how a person feels when acting under command. © 2016 Nature Publishing Group
By BENEDICT CAREY Children with attention-deficit problems improve faster when the first treatment they receive is behavioral — like instruction in basic social skills — than when they start immediately on medication, a new study has found. Beginning with behavioral therapy is also a less expensive option over time, according to a related analysis. Experts said the efficacy of this behavior-first approach, if replicated in larger studies, could change standard medical practice, which favors stimulants like Adderall and Ritalin as first-line treatments, for the more than four million children and adolescents in the United States with a diagnosis of attention deficit hyperactivity disorder, or A.D.H.D. The new research, published in two papers by the Journal of Clinical Child & Adolescent Psychology, found that stimulants were most effective as a supplemental, second-line treatment for those who needed it — and often at doses that were lower than normally prescribed. The study is thought to be the first of its kind in the field to evaluate the effect of altering the types of treatment midcourse — adding a drug to behavior therapy, for example, or vice versa. “We showed that the sequence in which you give treatments makes a big difference in outcomes,” said William E. Pelham of Florida International University, a leader of the study with Susan Murphy of the University of Michigan. “The children who started with behavioral modification were doing significantly better than those who began with medication by the end, no matter what treatment combination they ended up with.” Other experts cautioned that the study tracked behavior but not other abilities that medication can quickly improve, like attention and academic performance, and said that drugs remained the first-line treatment for those core issues. © 2016 The New York Times Company
Link ID: 21909 - Posted: 02.18.2016
Allison Aubrey It's no secret that stimulant medications such as Adderall, prescribed to treat symptoms of ADHD, are sometimes used as "study drugs" aimed at boosting cognitive performance. And emergency room visits linked to misuse of the drug are on the rise, according to a study published Tuesday in the Journal of Clinical Psychiatry. "Young adults in the 18- to 25-year age range are most likely to misuse these drugs," says Dr. Ramin Mojtabai, a professor at the Johns Hopkins Bloomberg School of Public Health and senior author of the study. A common scenario is this: A person who has been prescribed ADHD drugs gives or diverts pills to a friend or family member who may be looking for a mental boost, perhaps to cram for a final or prepare a report. And guess what? This is illegal. Overall, the study found that nonmedical use of Adderall and generic versions of the drug increased by 67 percent among adults between 2006 and 2011. The findings are based on data from the National Survey on Drug Use and Health. The number of emergency room visits involving Adderall misuse increased from 862 visits in 2006 to 1,489 in 2011, according to data from the Drug Abuse Warning Network. © 2016 npr
David H. Wells Take a theory of consciousness that calculates how aware any information-processing network is – be it a computer or a brain. Trouble is, it takes a supercomputer billions of years to verify its predictions. Add a maverick cosmologist, and what do you get? A way to make the theory useful within our lifetime. Integrated information theory (IIT) is one of our best descriptions of consciousness. Developed by neuroscientist Giulio Tononi of the University of Wisconsin at Madison, it’s based on the observation that each moment of awareness is unified. When you contemplate a bunch of flowers, say, it’s impossible to be conscious of the flowers’ colour independently of their fragrance because the brain has integrated the sensory data. Tononi argues that for a system to be conscious, it must integrate information in such a way that the whole contains more information than the sum of its parts. The measure of how a system integrates information is called phi. One way of calculating phi involves dividing a system into two and calculating how dependent each part is on the other. One cut would be the “cruellest”, creating two parts that are the least dependent on each other. If the parts of the cruellest cut are completely independent, then phi is zero, and the system is not conscious. The greater their dependency, the greater the value of phi and the greater the degree of consciousness of the system. Finding the cruellest cut, however, is almost impossible for any large network. For the human brain, with its 100 billion neurons, calculating phi like this would take “longer than the age of our universe”, says Max Tegmark, a cosmologist at the Massachusetts Institute of Technology. © Copyright Reed Business Information Ltd.
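The "cruellest cut" idea above can be sketched in a few lines of code. The following is an illustrative toy only, not Tononi's full IIT measure: it uses the mutual information between the two halves of a bipartition as the "dependency" measure, and takes the minimum over all cuts. Everything here (the distributions, the two-unit examples) is an assumption made for illustration, but it shows why the computation explodes: the number of bipartitions grows exponentially with the number of units.

```python
from itertools import combinations, product
from math import log2

def mutual_information(joint, part_a, part_b):
    """Mutual information (in bits) between two groups of units.
    `joint` maps full state tuples to probabilities."""
    pa, pb = {}, {}
    for state, p in joint.items():
        a = tuple(state[i] for i in part_a)
        b = tuple(state[i] for i in part_b)
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    mi = 0.0
    for state, p in joint.items():
        if p == 0:
            continue
        a = tuple(state[i] for i in part_a)
        b = tuple(state[i] for i in part_b)
        mi += p * log2(p / (pa[a] * pb[b]))
    return mi

def toy_phi(joint, n_units):
    """Minimum dependency over all bipartitions: the 'cruellest cut'."""
    units = range(n_units)
    best = None
    for k in range(1, n_units // 2 + 1):
        for part_a in combinations(units, k):
            part_b = tuple(u for u in units if u not in part_a)
            mi = mutual_information(joint, part_a, part_b)
            if best is None or mi < best:
                best = mi
    return best

# Two units that always agree: no cut can separate them without
# losing information, so the toy phi is positive.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair coins: the cruellest cut costs nothing.
independent = {s: 0.25 for s in product((0, 1), repeat=2)}

print(toy_phi(coupled, 2))      # 1.0 bit
print(toy_phi(independent, 2))  # 0.0 bits
```

For two units there are only two cuts to try; for a brain-scale network the bipartition count is astronomically large, which is the scaling problem Tegmark's remark refers to.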
Link ID: 21903 - Posted: 02.17.2016
By BENEDICT CAREY Over the past few decades, cognitive scientists have found that small alterations in how people study can accelerate and deepen learning, improving retention and comprehension in a range of subjects, including math, science and foreign languages. The findings come almost entirely from controlled laboratory experiments of individual students, but they are reliable enough that software developers, government-backed researchers and various other innovators are racing to bring them to classrooms, boardrooms, academies — every real-world constituency, it seems, except one that could benefit most: people with learning disabilities. Now, two new studies explore the effectiveness of one common cognitive science technique — the so-called testing effect — for people with attention-deficit problems, one of the most commonly diagnosed learning disabilities. The results were mixed. They hint at the promise of outfoxing learning deficits with cognitive science, experts said, but they also point to the difficulties involved. The learning techniques developed by cognitive psychologists seem, in some respects, an easy fit for people with attention deficits: breaking up study time into chunks, mixing related material in a session, varying study environments. Each can produce improvements in retention or comprehension, and taken together capture the more scattered spirit of those with attention deficit hyperactivity disorder, especially children. The testing effect has proved especially reliable for other students, and it is a natural first choice to measure the potential application to A.D.H.D. The principle is straightforward: Once a student is familiar with a topic, testing himself on it deepens the recall of the material more efficiently than restudying. © 2016 The New York Times Company
By Jordana Cepelewicz Seasonal variations play a major role in the animal kingdom—in reproduction, food availability, hibernation, even fur color. Whether this seasonality has such a significant influence on humans, however, is an open question. Its best-known association is with mood—that is, feeling down during the colder months and up in the summer—and, in extreme cases, seasonal depression, a phenomenon known as seasonal affective disorder (SAD). A new study published in this week’s Proceedings of the National Academy of Sciences seeks to delve deeper into how human biology has adapted not only to day/night cycles (circadian rhythms) but to yearly seasonal patterns as well. Scientists have previously found seasonal variation in the levels and concentrations of certain compounds associated with mood (including dopamine and serotonin), conception and even mortality. Now for the first time, using functional MRI, “it’s [been] conclusively shown that cognition and the brain’s means of cognition are seasonal,” says neuroscientist Gilles Vandewalle of the University of Liège in Belgium, the study’s lead researcher. These findings come at a time when some scientists are disputing the links between seasonality and mental health. Originally aiming to investigate the impact of sleep and sleep deprivation on brain function, Vandewalle and his fellow researchers placed 28 participants on a controlled sleep/wake schedule for three weeks before bringing them into the laboratory, where they stayed for 4.5 days. During this time they underwent a cycle of sleep deprivation and recovery in the absence of seasonal cues such as natural light, time information and social interaction. Vandewalle’s team repeated the entire procedure with the same subjects several times throughout the course of nearly a year and a half. © 2016 Scientific American
By Virginia Morell Like fearful humans, horses raise the inner brow of their eyes when threatened or surprised. Altogether their faces can convey 17 emotions (ours express 27), and they readily recognize the expressions on their fellow equines. But can they read our facial cues? To find out, researchers tested 28 horses, including 21 geldings and seven mares, from stables in the United Kingdom. Each horse was led by his/her halter rope to a position in the stable, and then presented with a life-size color photograph of the face of a man. The man was either smiling or frowning angrily. The scientists recorded the animals’ reactions, and measured their heart rates. Other studies have shown that stressed horses’ heart rates fluctuate, and when the horses looked at the angry man, their hearts reached a maximum heart rate more quickly than when they viewed the smiling image. When shown the angry face, 20 of the horses also turned their heads so that they could look at it with their left eye—a response that suggests they understood the expression, the scientists report online today in Biology Letters, because the right hemisphere of the brain is specialized for processing negative emotions. Dogs, too, have this “left-gaze bias” when confronting angry faces. Also, like dogs, the horses showed no comparable bias toward the right eye when viewing the happy faces—perhaps because the animals don’t need to respond to nonthreatening cues. But an angry expression carries a warning—the person may be about to strike. The discovery that horses as well as dogs—the only two animals this has been tested in—can read our facial expressions spontaneously and without training suggests one of two things: Either these domesticated species devote a lot of time to learning our facial cues, or the ability is innate and more widespread in the animal kingdom than previously thought. © 2016 American Association for the Advancement of Science
By John Bohannon Didn't get your 40 winks last night? Better not get yourself arrested, or you may admit to a crime you didn't commit. False confessions are surprisingly easy to extract from people simply by keeping them awake, according to a new study of sleep deprivation. It puts hard numbers to a problem that criminal law reformers have worried about for decades. The “crime” in question took place in a sleep lab run by Kimberly Fenn at Michigan State University in East Lansing. Together, she and Elizabeth Loftus, a psychologist at the University of California (UC), Irvine, and two of their former Ph.D. students recruited 88 Michigan State students to take part in an experiment. During two separate visits, the students worked at computers solving problems and filling out questionnaires. They were all given a stern warning: Do not press the escape key, because it will erase important study data. After their second session, the subjects were split into two groups. Half of them were forced to stay awake all night under the watch of the researchers. Scrabble, TV shows, and a card game called euchre seemed to do the trick. The rest were allowed to get a full night's sleep. But that also required policing. "We actually had a student leave the study because he wanted to stay awake all night to study for an exam the next day," Fenn says, adding that "I certainly do not advocate this!" The next morning, everyone received a typed statement describing their performance. The statement accused them of hitting the escape key on the first day, even though none of them actually did so—the computers recorded all keystrokes. © 2016 American Association for the Advancement of Science
By Christian Jarrett Back in the 1980s, the American scientist Benjamin Libet made a surprising discovery that appeared to rock the foundations of what it means to be human. He recorded people’s brain waves as they made spontaneous finger movements while looking at a clock, with the participants telling researchers the time at which they decided to waggle their fingers. Libet’s revolutionary finding was that the timing of these conscious decisions was consistently preceded by several hundred milliseconds of background preparatory brain activity (known technically as “the readiness potential”). The implication was that the decision to move was made nonconsciously, and that the subjective feeling of having made this decision is tagged on afterward. In other words, the results implied that free will as we know it is an illusion — after all, how can our conscious decisions be truly free if they come after the brain has already started preparing for them? For years, various research teams have tried to pick holes in Libet’s original research. It’s been pointed out, for example, that it’s pretty tricky for people to accurately report the time that they made their conscious decision. But, until recently, the broad implications of the finding have weathered these criticisms, at least in the eyes of many hard-nosed neuroscientists, and over the last decade or so his basic result has been replicated and built upon with ever more advanced methods such as fMRI and the direct recording of neuronal activity using implanted electrodes. © 2016, New York Media LLC
Link ID: 21859 - Posted: 02.04.2016
By Anna K. Bobak, Sarah Bate For years scientists have studied the biological basis of human speed, and reported that the fastest athletes are short and muscular in build. However, these conclusions were challenged in 2008 when a new athlete, substantially taller than previous world-record holders, was identified as the fastest man in history. Usain Bolt presented the purest expression of human speed on the planet – and raised the possibility that scientists may need to entirely change the way they think about human biometrics. In the same vein, one might ask whether examinations of the brain at its height of efficiency will present new insights into its workings. Although researchers have historically examined people with a very high IQ (i.e. those with more generalised skills), it has become more and more clear that some individuals only perform extraordinarily well on specific cognitive tasks. Among the most interesting of these is facial identity recognition. In fact, the extraordinary skills of these so-called “super-recognisers” do not seem to correlate with IQ or memory for objects, yet they claim to recognise faces they have seen only briefly before, or that have undergone substantial changes in appearance. For instance, in a recent scientific report from our laboratory (unpublished), one super-recogniser described bumping into a girl from a children’s swimming class he coached as a teenager. He recognised her immediately, despite the fact that he’d not seen her for over ten years and she was now an adult. So how can these people change the way that scientists think about the human brain? For many years researchers have generally agreed that faces are “special.” © 2016 Scientific American
Link ID: 21853 - Posted: 02.03.2016
By David Shultz Is my yellow the same as your yellow? Does your pain feel like my pain? The question of whether human consciousness is subjective or objective is largely philosophical. But the line between consciousness and unconsciousness is a bit easier to measure. In a new study of how anesthetic drugs affect the brain, researchers suggest that our experience of reality is the product of a delicate balance of connectivity between neurons—too much or too little and consciousness slips away. “It’s a very nice study,” says neuroscientist Melanie Boly at the University of Wisconsin, Madison, who was not involved in the work. “The conclusions that they draw are justified.” Previous studies of the brain have revealed the importance of “cortical integration” in maintaining consciousness, meaning that the brain must process and combine multiple inputs from different senses at once. Our experience of an orange, for example, is made up of sight, smell, taste, touch, and the recollection of our previous experiences with the fruit. The brain merges all of these inputs—photons, aromatic molecules, etc.—into our subjective experience of the object in that moment. “There is new meaning created by the interaction of things,” says Enzo Tagliazucchi, a physicist at the Institute for Medical Psychology in Kiel, Germany. Consciousness ascribes meaning to the pattern of photons hitting your retina, thus differentiating you from a digital camera. Although the brain still receives these data when we lose consciousness, no coherent sense of reality can be assembled. © 2016 American Association for the Advancement of Science.
Link ID: 21830 - Posted: 01.27.2016
Timothy Egan This weekend, I’m going to the Mojave Desert, deep into an arid wilderness of a half-million acres, for some stargazing, bouldering and January sunshine on my public lands. I won’t be out of contact. I checked. If Sarah Palin says something stupid on Donald Trump’s behalf — scratch that. When Sarah Palin says something stupid on Donald Trump’s behalf, I’ll get her speaking-in-tongues buffoonery in real time, along with the rest of the nation. The old me would have despised the new me for admitting such a thing. I’ve tried to go on digital diets, fasting from my screens. I was a friend’s guest at a spa in Arizona once and had so much trouble being “mindful” that they nearly kicked me out. Actually, I just wanted to make sure I didn’t miss the Seahawks game, mindful of Seattle’s woeful offensive line. In the information blur of last year, you may have overlooked news of our incredibly shrinking attention span. A survey of Canadian media consumption by Microsoft concluded that the average attention span had fallen to eight seconds, down from 12 in the year 2000. We now have a shorter attention span than goldfish, the study found. Attention span was defined as “the amount of concentrated time on a task without becoming distracted.” I tried to read the entire 54-page report, but well, you know. Still, a quote from Satya Nadella, the chief executive officer of Microsoft, jumped out at me. “The true scarce commodity” of the near future, he said, will be “human attention.” Putting aside Microsoft’s self-interest in promoting quick-flash digital ads with what may be junk science, there seems little doubt that our devices have rewired our brains. We think in McNugget time. The trash flows, unfiltered, along with the relevant stuff, in an eternal stream. And the last hit of dopamine only accelerates the need for another one. © 2016 The New York Times Company
Link ID: 21812 - Posted: 01.23.2016
By Melissa Dahl It’s the fifth inning and the Tampa Bay Rays are beating the Cleveland Indians 6–2 when Cleveland’s relief pitcher Nick Hagadone steps in. Alas, Hagadone does little to turn around the Indians’ luck that day, closing out the long inning with a score of 10–2. Hagadone, apparently frustrated by his own lackluster performance, heads to the clubhouse and, on the way there, punches a door with his left fist — the fist that is, unfortunately, connected to his pitching arm. That momentary impulse would cost him dearly. Hagadone required surgery and eight months’ recovery time — and, to add insult to a literal injury, his team also relegated him to the minor leagues, a move that shrank his annual salary by more than 80 percent. When asked about what could possibly explain an action like this in a usually easy-going guy, the Indians’ team psychologist, Charlie Maher, could only offer variations on this: “He just snapped.” Unless you are also a relief pitcher in the major leagues, you will likely never be in exactly this situation. But how many times have you reacted aggressively, even violently, in a way that felt almost out of your control? You hurl your smartphone across the room, or you unleash a stream of expletives in a manner that would seem to a calmer, rational mind to be disproportionate to the situation at hand. “I just snapped” is how we explain it to ourselves and others, and then we move on. The phrase has become such a cliché that it’s easy to forget that it doesn’t really explain much of anything. What’s behind this impulsive, immediately regrettable behavior? R. Douglas Fields, a senior investigator at the National Institutes of Health, sought out an explanation in his new book, Why We Snap: Understanding the Rage Circuit in Your Brain, which includes the Hagadone story recounted above. © 2016, New York Media LLC
by Emily Reynolds We know more about what the brain does when it's active than we do when it's at rest. It makes sense -- much neuroscientific research has looked to understand particular (and active) processes. James Kozloski, a researcher at IBM, has investigated what the brain does when it's resting -- what he calls 'the Grand Loop'. "The brain consumes a great amount of energy doing nothing. It's a great mystery of neuroscience," Kozloski told PopSci. He argued that around 90 percent of the energy used by the brain remained "unaccounted for". He believes that the brain is constantly 'looping signals', retracing neural pathways over and over again. It's a "closed loop", according to Kozloski, meaning it isn't reliant on external inputs as much of the brain's activity is. Kozloski tested his theory by running his model through IBM's neural tissue simulator and found that it could potentially help explain the effects of genetic diseases such as Huntington's. He argued that information created by one mutated gene could, through the 'Grand Loop', affect an entire neural pathway. So what happens when our brain is at work? And how does expending energy affect our neural processes? Much historic research into anxiety has found that people tend to exert more energy or force when they're being watched -- something that leads to slip-ups or mistakes under pressure.
A map for other people’s faces has been discovered in the brain. It could help explain why some of us are better at recognising faces than others. Every part of your body that you can move or feel is represented in the outer layer of your brain. These “maps”, found in the motor and sensory cortices (see diagram, below), tend to preserve the basic spatial layout of the body – neurons that represent our fingers are closer to neurons that represent our arms than our feet, for example. The same goes for other people’s faces, says Linda Henriksson at Aalto University in Helsinki, Finland. Her team scanned 12 people’s brains while they looked at hundreds of images of noses, eyes, mouths and other facial features and recorded which bits of the brain became active. This revealed a region in the occipital face area in which features that are next to each other on a real face are organised together in the brain’s representation of that face. The team have called this map the “faciotopy”. The occipital face area is a region of the brain known to be involved in general facial processing. “Facial recognition is so fundamental to human behaviour that it makes sense that there would be a specialised area of the brain that maps features of the face,” she says. © Copyright Reed Business Information Ltd.
It seems like the ultimate insult, but getting people with brain injuries to do maths may lead to better diagnoses. A trial of the approach has found two people in an apparent vegetative state who may be conscious but “locked-in”. People who are in a vegetative state are awake but have lost all cognitive function. Occasionally, people diagnosed as being in this state are actually minimally conscious with fleeting periods of awareness, or even locked-in. This occurs when they are totally aware but unable to move any part of their body. It can be very difficult to distinguish between these states, which is why a team of researchers in China have devised a brain-computer interface that tests whether people with brain injuries can perform mental arithmetic – a clear sign of conscious awareness. The team, led by Yuanqing Li at South China University of Technology and Jiahui Pan at the South China Normal University in Guangzhou, showed 11 people with various diagnoses a maths problem on a screen. This was followed by two possible answers flickering at frequencies designed to evoke different patterns of brain activity. Frames around each number also flashed several times. The participants were asked to focus on the correct answer and count the number of times its frame flashed. The brain patterns from the flickering answers together with the detection of another kind of brain signal that occurs when someone counts, enabled a computer to tell which answer, if any, the person was focusing on. © Copyright Reed Business Information Ltd.
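The frequency-tagging trick behind this interface can be sketched in code. Each candidate answer flickers at its own rate, and attending to one boosts the oscillation at that rate in the recorded brain signal, so a classifier just asks which tagged frequency carries more power. The sketch below simulates that signal; the sampling rate, flicker frequencies, and noise model are assumptions for illustration, not the study's actual parameters, and the real system also combined this with a counting-related signal.

```python
import math
import random

FS = 250             # samples per second (assumed)
FREQS = (7.0, 12.0)  # flicker rates of the two candidate answers, in Hz (assumed)

def band_power(signal, freq, fs=FS):
    """Power of `signal` at `freq`, via correlation with a sine/cosine pair."""
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (c * c + s * s) / len(signal)

def attended_answer(signal):
    """Index of the flicker frequency with the most power in the signal."""
    powers = [band_power(signal, f) for f in FREQS]
    return powers.index(max(powers))

# Simulate two seconds of "EEG": a unit-amplitude response at the
# attended flicker rate, buried in Gaussian noise.
rng = random.Random(0)
def simulate(attended_freq, seconds=2):
    return [math.sin(2 * math.pi * attended_freq * i / FS) + rng.gauss(0, 1.0)
            for i in range(int(seconds * FS))]

print(attended_answer(simulate(7.0)))   # 0 (first answer)
print(attended_answer(simulate(12.0)))  # 1 (second answer)
```

Even with noise of the same amplitude as the response, two seconds of data concentrate enough power at the attended frequency for the simple comparison to pick the right answer, which is why this kind of flicker tagging is a workable channel for people who cannot move.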
Link ID: 21801 - Posted: 01.19.2016
Patricia Neighmond When Cathy Fields was in her late 50s, she noticed she was having trouble following conversations with friends. "I could sense something was wrong with me," she says. "I couldn't focus. I could not follow." Fields was worried she had suffered a stroke or was showing signs of early dementia. Instead she found out she had attention deficit hyperactivity disorder, or ADHD. Fields is now 66 years old and lives in Ponte Vedra Beach, Fla. She's a former secretary and mother of two grown children. Fields was diagnosed with ADHD about eight years ago. Her doctor ruled out any physical problems and suggested she see a psychiatrist. She went to Dr. David Goodman at Johns Hopkins School of Medicine, who by chance specializes in ADHD. Goodman asked Fields a number of questions about focus, attention and completing tasks. He asked her about her childhood and how she did in school. Since ADHD begins in childhood, it's important for mental health professionals to understand these childhood experiences in order to make an accurate diagnosis of ADHD in adulthood. Online screening tests are available, too, so you can try it yourself. Goodman decided that Fields most definitely had ADHD. She's not alone. Goodman says he's seeing more and more adults over the age of 50 newly diagnosed with ADHD. © 2016 npr
Link ID: 21795 - Posted: 01.18.2016