Chapter 18. Attention and Higher Cognition
By Ellen Hendriksen

Pop quiz: what’s the first thing that comes to mind when I say “ADHD”?

a. Getting distracted
b. Ants-in-pants
c. Elementary school boys
d. Women and girls

Most likely, you didn’t pick D. If that’s the case, you’re not alone. For most people, ADHD conjures a mental image of school-aged boys squirming at desks or bouncing off walls, not a picture of adults, girls, or especially adult women. Both scientists and society have long pinned ADHD on males, even though girls and women may be just as likely to suffer from this neurodevelopmental disorder. Back in 1987, the American Psychiatric Association stated that the male-to-female ratio for ADHD was 9 to 1. Twenty years later, however, an epidemiological study of almost 4,000 kids found the ratio was more like 1 to 1—half girls, half boys. © 2017 Scientific American
By Drake Baer

Philosophers have been arguing about the nature of will for at least 2,000 years. It’s at the core of blockbuster social-psychology findings, from delayed gratification to ego depletion to grit. But it’s only recently, thanks to the tools of brain imaging, that the act of willing is starting to be captured at a mechanistic level. A primary example is “cognitive control,” or how the brain selects goal-serving behavior from competing processes like so many unruly third-graders with their hands in the air. It’s the rare neuroscience finding that’s immediately applicable to everyday life: By knowing the way the brain is disposed to behaving or misbehaving in accordance with your goals, it’s easier to get the results you’re looking for, whether it’s avoiding the temptation of chocolate cookies or the pull of darkly ruminative thoughts. Jonathan Cohen, who runs a neuroscience lab dedicated to cognitive control at Princeton, says that it underlies just about every other flavor of cognition that’s thought to “make us human,” whether it’s language, problem solving, planning, or reasoning. “If I ask you not to scratch the mosquito bite that you have, you could comply with my request, and that’s remarkable,” he says. Every other species — ape, dog, cat, lizard — will automatically indulge in scratching the itch. (Why else would a pup need a post-surgery cone?) It’s plausible that a rat or monkey could be taught not to scratch an itch, he says, but that would probably take thousands of trials. But any psychologically and physically able human has the capacity to do so. “It’s a hardwired reflex that is almost certainly coded genetically,” he says. “But with three words — don’t scratch it — you can override those millions of years of evolution. That’s cognitive control.” © 2017, New York Media LLC.
By Michael Price

As we age, we get progressively better at recognizing and remembering someone’s face, eventually reaching peak proficiency at about 30 years old. A new study suggests that’s because brain tissue in a region dedicated to facial recognition continues to grow and develop throughout childhood and into adulthood, a process known as proliferation. The discovery may help scientists better understand the social evolution of our species, as speedy recollection of faces let our ancestors know at a glance whether to run, woo, or fight. The results are surprising because most scientists have assumed that brain development throughout one’s life depends almost exclusively on “synaptic pruning,” or the weeding out of unnecessary connections between neurons, says Brad Duchaine, a psychologist at Dartmouth College who was not involved with the study. “I expect these findings will lead to much greater interest in the role of proliferation in neural development.” Ten years ago, Kalanit Grill-Spector, a psychologist at Stanford University in Palo Alto, California, first noticed that several parts of the brain’s visual cortex, including a segment known as the fusiform gyrus that’s known to be involved in facial recognition, appeared to develop at different rates after birth. To get more detailed information on how the size of certain brain regions changes over time, she turned to a recently developed brain imaging technology known as quantitative magnetic resonance imaging (qMRI). The technique tracks how long it takes for protons, excited by the imaging machine’s strong magnetic field, to calm down. Like a top spinning on a crowded table, these protons will slow down more quickly if they’re surrounded by a lot of molecules—a proxy for measuring volume. © 2017 American Association for the Advancement of Science
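The “calming down” the article describes is longitudinal relaxation, the quantity qMRI times. As a toy illustration only (the mono-exponential model is standard, but the T1 values here are made up, not figures from the study), tissue crowded with molecules settles back faster after excitation:

```python
import numpy as np

def longitudinal_recovery(t, t1, m0=1.0):
    # Mono-exponential T1 recovery after excitation: M(t) = M0 * (1 - exp(-t/T1))
    return m0 * (1.0 - np.exp(-t / t1))

t = np.linspace(0.0, 3.0, 301)               # seconds after excitation
crowded = longitudinal_recovery(t, t1=0.8)   # molecule-dense tissue: short T1 (illustrative value)
sparse = longitudinal_recovery(t, t1=1.5)    # less crowded tissue: long T1 (illustrative value)

# Like the top on a crowded table, the crowded environment settles
# faster at every moment after excitation.
assert np.all(crowded[1:] > sparse[1:])
```

Measuring how quickly this curve recovers at each location is what lets qMRI infer local tissue properties.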
Alexander Fornito

The human brain is an extraordinarily complex network, comprising an estimated 86 billion neurons connected by 100 trillion synapses. A connectome is a comprehensive map of these links—a wiring diagram of the brain. With current technology, it is not possible to map a network of this size at the level of every neuron and synapse. Instead researchers use techniques such as magnetic resonance imaging to map connections between areas of the human brain that span several millimeters and contain many thousands of neurons. At this macroscopic scale, each area comprises a specialized population of neurons that work together to perform particular functions that contribute to cognition. For example, different parts of your visual cortex contain cells that process specific types of information, such as the orientation of a line and the direction in which it moves. Separate brain regions process information from your other senses, such as sound, smell and touch, and other areas control your movements, regulate your emotional responses, and so on. These specialized functions are not processed in isolation but are integrated to provide a unitary and coherent experience of the world. This integration is hypothesized to occur when different populations of cells synchronize their activity. The fiber bundles that connect different parts of the brain—the wires of the connectome—provide the substrate for this communication. These connections ensure that brain activity unfolds through time as a rhythmic symphony rather than a disordered cacophony. © 2017 Scientific American
By Susana Martinez-Conde

Our perceptual and cognitive systems like to keep things simple. We describe the line drawings below as a circle and a square, even though their imagined contours consist—in reality—of discontinuous line segments. The Gestalt psychologists of the late 19th and early 20th centuries branded this perceptual legerdemain as the Principle of Closure, by which we tend to recognize shapes and concepts as complete, even in the face of fragmentary information. Now at the end of the year, it is tempting to seek a cognitive kind of closure: we want to close the lid on 2016, wrap it with a bow and start a fresh new year from a blank slate. Of course, it’s just an illusion, the Principle of Closure in one of its many incarnations. The end of the year is just as arbitrary as the end of the month, or the end of the week, or any other date we choose to highlight in the earth’s recurrent journey around the sun. But it feels quite different. That’s why we have lists of New Year’s resolutions, or why we start new diets or exercise regimens on Mondays rather than Thursdays. Researchers have also found that, even though we measure time on a continuous scale, we assign special meaning to idiosyncratic milestones such as entering a new decade. What should we do about our brain’s oversimplification tendencies concerning the New Year—if anything? One strategy would be to fight our feelings of closure and rebirth as we (in truth) seamlessly move from the last day of 2016 to the first day of 2017. But that approach is likely to fail. Try as we might, the Principle of Closure is just too ingrained in our perceptual and cognitive systems. In fact, if you already have the feeling that the beginning of the year is somewhat special (hey, it only happens once a year!), you might as well decide that resistance is futile, and not just embrace the illusion, but do your best to channel it. © 2017 Scientific American
Perry Link

People who study other cultures sometimes note that they benefit twice: first by learning about the other culture and second by realizing that certain assumptions of their own are arbitrary. In reading Colin McGinn’s fine recent piece, “Groping Toward the Mind,” in The New York Review, I was reminded of a question I had pondered in my 2013 book Anatomy of Chinese: whether some of the struggles in Western philosophy over the concept of mind—especially over what kind of “thing” it is—might be rooted in Western language. The puzzles are less puzzling in Chinese. Indo-European languages tend to prefer nouns, even when talking about things for which verbs might seem more appropriate. The English noun inflation, for example, refers to complex processes that were not a “thing” until language made them so. Things like inflation can even become animate, as when we say “we need to combat inflation” or “inflation is killing us at the check-out counter.” Modern cognitive linguists like George Lakoff at Berkeley call inflation an “ontological metaphor.” (The inflation example is Lakoff’s.) When I studied Chinese, though, I began to notice a preference for verbs. Modern Chinese does use ontological metaphors, such as fāzhǎn (literally “emit and unfold”) to mean “development” or xìnxīn (“believe mind”) for “confidence.” But these are modern words that derive from Western languages (mostly via Japanese) and carry a Western flavor with them. “I firmly believe that…” is a natural phrase in Chinese; you can also say “I have a lot of confidence that…” but the use of a noun in such a phrase is a borrowing from the West. © 1963-2016 NYREV, Inc.
By Susana Martinez-Conde and Stephen L. Macknik

We think we know what we want—but do we, really? In 2005 Lars Hall and Petter Johansson, both at Lund University in Sweden, ran an experiment that transformed how cognitive scientists think about choice. The experimental setup looked deceptively simple. A study participant and a researcher faced each other across a table. The scientist offered two photographs of young women deemed equally attractive by an independent focus group. The subject then had to choose which portrait he or she found more appealing. Next, the experimenter turned both pictures over, moved them toward the subject and asked him or her to pick up the photo they had just chosen. Subjects complied, unaware that the researcher had just performed a swap using a sleight-of-hand technique known to conjurers as black art. Because your visual neurons are built to detect and enhance contrast, it is very hard to see black on black: a magician dressed in black against a black velvet backdrop can look like a floating head. Hall and Johansson deliberately used a black tabletop in their experiment. The first photos their subjects saw all had black backs. Behind those, however, they hid a second picture of the opposite face with a red back. When the experimenter placed the first portrait face down on the table, he pushed the second photo toward the subject. When participants picked up the red-backed photos, the black-backed ones stayed hidden against the table's black surface—that is, until the experimenter could surreptitiously sweep them into his lap. © 2016 Scientific American
By Victoria Gill, Science reporter, BBC News

Direct recordings have revealed what is happening in our brains as we make sense of speech in a noisy room. Focusing on one conversation in a loud, distracting environment is called "the cocktail party effect". It is a common festive phenomenon and of interest to researchers seeking to improve speech recognition technology. Neuroscientists recorded from people's brains during a test that recreated the moment when unintelligible speech suddenly makes sense. A team measured people's brain activity as the words of a previously unintelligible sentence suddenly became clear when a subject was told the meaning of the "garbled speech". The findings are published in the journal Nature Communications. Lead researcher Christopher Holdgraf from the University of California, Berkeley, and his colleagues were able to work with epilepsy patients, who had had a portion of their skull removed and electrodes placed on the brain surface to track their seizures. First, the researchers played each subject a very distorted, garbled sentence, which almost no one was able to understand. They then played a normal, easy-to-understand version of the same sentence and immediately repeated the garbled version. "After hearing the intact sentence," the researchers explained in their paper, all the subjects understood the subsequent "noisy version". The brain recordings showed this moment of recognition as brain activity patterns in the areas of the brain that are known to be associated with processing sound and understanding speech. When the subjects heard the very garbled sentence, the scientists reported that they saw little activity in those parts of the brain. Hearing the clearly understandable sentence then triggered patterns of activity in those brain areas. © 2016 BBC.
A little over a decade ago, neuroscientists began using a new technique to inspect what was going on in the brains of their subjects. Rather than giving their subjects a task to complete and watching their brains to see which parts lit up, they’d tell them to lie back, let their minds wander, and try not to fall asleep for about six minutes. That technique is called resting state functional magnetic resonance imaging, and it shares a problem with other types of fMRI: it only tracks changes in the blood in the brain, not the neurons sending the signals in the first place. Researchers have recently called fMRI into question for its reliance on possibly faulty statistics. And things get even less certain when the brain isn’t engaged in any particular task. “These signals are, by definition, random,” says Elizabeth Hillman, a biomedical engineer at Columbia’s Zuckerman Institute. “And when you’re trying to measure something that’s random amidst a whole bunch of noise, it becomes very hard to tell what’s actually random and what isn’t.” Six years ago, Hillman, along with many others in the field, was deeply skeptical of resting state fMRI’s ability to measure what it promised to. But this week, in a paper in Proceedings of the National Academy of Sciences, she presents compelling evidence to the contrary: a comprehensive visualization of neural activity throughout the entire brain at rest, and evidence that the blood rushing around in your brain is actually a good indicator of what your neurons are doing. Ever since 1992, when researcher Bharat Biswal first started scanning people who were just sitting around, resting state fMRI has become increasingly popular. Partly, that’s because it’s just way simpler than regular, task-based fMRI.
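Conceptually, resting-state analysis asks whether two regions' spontaneous fluctuations share structure despite the noise Hillman describes. A minimal synthetic sketch (made-up signals, not real fMRI data) of why correlation can still pull a shared signal out of noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # time points

shared = rng.standard_normal(n)              # spontaneous fluctuation common to a "network"
region_a = shared + rng.standard_normal(n)   # each region sees it through its own noise
region_b = shared + rng.standard_normal(n)
unrelated = rng.standard_normal(n)           # a region outside the network

r_within = np.corrcoef(region_a, region_b)[0, 1]
r_outside = np.corrcoef(region_a, unrelated)[0, 1]

# Regions sharing a spontaneous signal correlate (around 0.5 here,
# since half of each region's variance is the shared signal);
# an unrelated region hovers near zero.
assert r_within > 0.3 and abs(r_outside) < 0.15
```

With enough time points, the shared component dominates the correlation even though any single trace looks like pure noise, which is why "random" resting signals can still carry reproducible structure.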
Answer by Paul King, Director of Data Science, on Quora:

There are hundreds of surprising, perspective-shifting insights about the nature of reality that come from neuroscience. Every bizarre neurological syndrome, every visual illusion, and every clever psychological experiment reveals something entirely unexpected about our experience of the world that we take for granted. Here are a few to give a flavor:

1. Perceptual reality is entirely generated by our brain. We hear voices and meaning from air pressure waves. We see colors and objects, yet our brain only receives signals about reflected photons. The objects we perceive are a construct of the brain, which is why optical illusions can fool the brain.

2. We see the world in narrow, disjoint fragments. We think we see the whole world, but we are looking through a narrow visual portal onto a small region of space. You have to move your eyes when you read because most of the page is blurry. We don't notice this, because as soon as we become curious about part of the world, our eyes move there to fill in the detail before we see it was missing. While our eyes are in motion, we should see a blank blur, but our brain edits this out.

3. Body image is dynamic and flexible. Our brain can be fooled into thinking a rubber arm or a virtual reality hand is actually a part of our body. In one syndrome, people believe one of their limbs does not belong to them. One man thought a cadaver limb had been sewn onto his body as a practical joke by doctors.

4. Our behavior is mostly automatic, even though we think we are controlling it.
By Daniel A. Yudkin and Jay Van Bavel

During the first presidential debate, Hillary Clinton argued that “implicit bias is a problem for everyone, not just police.” Her comment moved to the forefront of public conversation an issue that scientists have been studying for decades: namely, that even well-meaning people frequently harbor hidden prejudices against members of other racial groups. Studies have shown that these subtle biases are widespread and associated with discrimination in legal, economic and organizational settings. Critics of this notion, however, protest what they see as a character smear — a suggestion that everybody, deep down, is racist. Vice President-elect Mike Pence has said that an “accusation of implicit bias” in cases where a white police officer shoots a black civilian serves to “demean law enforcement.” Writing in National Review, David French claimed that the concept of implicit bias lets people “indict entire communities as bigoted.” But implicit bias is not about bigotry per se. As new research from our laboratory suggests, implicit bias is grounded in a basic human tendency to divide the social world into groups. In other words, what may appear as an example of tacit racism may actually be a manifestation of a broader propensity to think in terms of “us versus them” — a prejudice that can apply, say, to fans of a different sports team. This doesn’t make the effects of implicit bias any less worrisome, but it does mean people should be less defensive about it. Furthermore, our research gives cause for optimism: Implicit bias can be overcome with rational deliberation. In a series of experiments whose results were published in The Journal of Experimental Psychology: General, we set out to determine how severely people would punish someone for stealing. Our interest was in whether a perpetrator’s membership in a particular group would influence the severity of the punishment he or she received. © 2016 The New York Times Company
Children don’t usually have the words to communicate even the darkest of thoughts. Yet some children aged 5 to 11 take their own lives. It’s a rare and often overlooked phenomenon—and one that scientists are only just beginning to understand. A study published today in the journal Pediatrics reveals that attention deficit disorder (A.D.D.), not depression, may be the most common mental health diagnosis among children who die by suicide. By contrast, the researchers found that two-thirds of the 606 early adolescents studied (aged 12 to 14) had suffered from depression. While the finding isn’t necessarily causal, it does suggest that impulsive behavior might contribute to child suicide. Alternatively, some of these cases could be attributed to early-onset bipolar disorder, misdiagnosed as A.D.D. or A.D.H.D. Here’s Catherine Saint Louis, reporting for The New York Times: Suicide prevention has focused on identifying children struggling with depression; the new study provides an early hint that this strategy may not help the youngest suicide victims. “Maybe in young children, we need to look at behavioral markers,” said Jeffrey Bridge, the paper’s senior author and an epidemiologist at the Research Institute at Nationwide Children’s Hospital in Columbus, Ohio. Jill Harkavy-Friedman, the vice president of research at the American Foundation for Suicide Prevention, agreed. “Not everybody who is at risk for suicide has depression,” even among adults, said Dr. Harkavy-Friedman, who was not involved in the new research. © 1996-2016 WGBH Educational Foundation
Rosie Mestel

The 2016 US election was a powerful reminder that beliefs tend to come in packages: socialized medicine is bad, gun ownership is a fundamental right, and climate change is a myth — or the other way around. Stances that may seem unrelated can cluster because they have become powerful symbols of membership of a group, says Dan Kahan, who teaches law and psychology at Yale Law School in New Haven, Connecticut. And the need to keep believing can further distort people’s perceptions and their evaluation of evidence. Here, Kahan tells Nature about the real-world consequences of group affinity and cognitive bias, and about research that may point to remedies. This interview has been edited for length and clarity. One measure is how individualistic or communitarian people are, and how egalitarian or hierarchical. Hierarchical and individualistic people tend to have confidence in markets and industry: those represent human ingenuity and power. People who are egalitarian and communitarian are suspicious of markets and industry. They see them as responsible for social disparity. It’s natural to see things you consider honourable as good for society, and things that are base, as bad. Such associations will motivate people’s assessment of evidence. Can you give an example? In a study, we showed people data from gun-control experiments and varied the results. People who were high in numeracy always saw when a study supported their view. If it didn’t support their view, they didn’t notice — or argued their way out of it. © 2016 Macmillan Publishers Limited
By Alison Howell

What could once only be imagined in science fiction is now increasingly coming to fruition: Drones can be flown by human brains' thoughts. Pharmaceuticals can help soldiers forget traumatic experiences or produce feelings of trust to encourage confession in interrogation. DARPA-funded research is working on everything from implanting brain chips to "neural dust" in an effort to alleviate the effects of traumatic experience in war. Invisible microwave beams produced by military contractors and tested on U.S. prisoners can produce the sensation of burning at a distance. What all these techniques and technologies have in common is that they're recent neuroscientific breakthroughs propelled by military research within a broader context of rapid neuroscientific development, driven by massive government-funded projects in both America and the European Union. Even while much about the brain remains mysterious, this research has contributed to the rapid and startling development of neuroscientific technology. And while we might marvel at these developments, it is also undeniably true that this state of affairs raises significant ethical questions. What is the proper role – if any – of neuroscience in national defense or war efforts? My research addresses these questions in the broader context of looking at how international relations, and specifically warfare, are shaped by scientific and medical expertise and technology. 2016 © U.S. News & World Report L.P.
Amanda Gefter

As we go about our daily lives, we tend to assume that our perceptions—sights, sounds, textures, tastes—are an accurate portrayal of the real world. Sure, when we stop and think about it—or when we find ourselves fooled by a perceptual illusion—we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it weren’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like. Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction. Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”
By Melissa Dahl

Considering its origin story, it’s not so surprising that hypnosis and serious medical science have often seemed at odds. The man typically credited with creating hypnosis, albeit in a rather primitive form, is Franz Mesmer, a doctor in 18th-century Vienna. (Mesmer, mesmerize. Get it?) Mesmer developed a general theory of disease he called “animal magnetism,” which held that every living thing carries within it an internal magnetic force, in liquid form. Illness arises when this fluid becomes blocked, and can be cured if it can be coaxed to flow again, or so Mesmer’s thinking went. To get that fluid flowing, as science journalist Jo Marchant describes in her recent book, Cure, Mesmer “simply waved his hands to direct it through his patients’ bodies” — the origin of those melodramatic hand motions that stage hypnotists use today. After developing a substantial following — “mesmerism” became “the height of fashion” in late 1780s Paris, writes Marchant — Mesmer became the subject of what was essentially the world’s first clinical trial. King Louis XVI pulled together a team of the world’s top scientists, including Benjamin Franklin, who tested mesmerism and found its capacity to “cure” was, essentially, a placebo effect. “Not a shred of evidence exists for any fluid,” Franklin wrote. “The practice … is the art of increasing the imagination by degrees.” Maybe so. But that doesn’t mean it doesn’t work. © 2016, New York Media LLC.
By Yasemin Saplakoglu

Even if you don’t have rhythm, your pupils do. In a new study, neuroscientists played drumming patterns from Western music, including beats typical in pop and rock, while asking volunteers to focus on computer screens for an unrelated fast-paced task that involved pressing the space bar as quickly as possible in response to a signal on the screen. Unbeknownst to the participants, the music omitted strong and weak beats at random times. Eye scanners tracked the dilations of the subjects’ pupils as the music played. Their pupils enlarged when the rhythms dropped certain beats, even though the participants weren’t paying attention to the music. The biggest dilations matched the omissions of the beats in the most prominent locations in the music, usually the important first beat in a repeated set of notes. The results suggest that we may have an automatic sense of “hierarchical meter”—a pattern of strong and weak beats—that governs our expectations of music, the researchers write in the February 2017 issue of Brain and Cognition. Perhaps, the authors say, our eyes reveal clues about the importance that music and rhythm play in our lives. © 2016 American Association for the Advancement of Science
Ian Sample, Science editor

Scientists have raised hopes for a radical new therapy for phobias and post-traumatic stress disorder (PTSD) with a procedure that can dampen down fears linked to painful memories. The advance holds particular promise for patients because in early tests, researchers found they could reduce anxieties triggered by specific memories without asking people to think about them consciously. That could make it more appealing than exposure therapy, which aims to help patients overcome their phobias by making them confront their fears in a safe environment, for example by encouraging them to handle spiders or snakes in the clinic. The new technique, called fMRI decoded neurofeedback (DecNef), was developed by scientists at the ATR Computational Neuroscience Lab in Japan. Mitsuo Kawato, who worked with researchers in the UK and the US on the latest study, said he wanted to find an alternative to exposure therapy, which has a 40% drop-out rate among PTSD patients. “We always thought this was ambitious, but it worked the way we hoped it would,” said Ben Seymour, a clinical neuroscientist and member of the team at Cambridge University. “We don’t completely erase the fear memory, but it is substantially reduced.” The procedure uses a computer algorithm to analyse a patient’s brain activity in real time and pinpoint moments when their fears can be overwritten by giving them a reward. In the latest study, the reward was a small amount of money. © 2016 Guardian News and Media Limited
By R. Douglas Fields

SAN DIEGO—A wireless device that decodes brain waves has enabled a woman paralyzed by locked-in syndrome to communicate from the comfort of her home, researchers announced this week at the annual meeting of the Society for Neuroscience. The 59-year-old patient, who prefers to remain anonymous but goes by the initials HB, is “trapped” inside her own body, with full mental acuity but completely paralyzed by a disease that struck in 2008 and attacked the neurons that make her muscles move. Unable to breathe on her own, a tube in her neck pumps air into her lungs and she requires round-the-clock assistance from caretakers. Thanks to the latest advance in brain–computer interfaces, however, HB has at least regained some ability to communicate. The new wireless device enables her to select letters on a computer screen using her mind alone, spelling out words at a rate of one letter every 56 seconds, to share her thoughts. “This is a significant achievement. Other attempts on such an advanced case have failed,” says neuroscientist Andrew Schwartz of the University of Pittsburgh, who was not involved in the study, published in The New England Journal of Medicine. HB’s mind is intact and the part of her brain that controls her bodily movements operates perfectly, but the signals from her brain no longer reach her muscles because the motor neurons that relay them have been damaged by amyotrophic lateral sclerosis (ALS), says neuroscientist Erick Aarnoutse, who designed the new device and was responsible for the technical aspects of the research. He is part of a team of physicians and scientists led by neuroscientist Nick Ramsey at Utrecht University in the Netherlands. Previously, the only way HB could communicate was via a system that uses an infrared camera to track her eye movements. But the device is awkward to set up and use for someone who cannot move, and it does not function well in many situations, such as in bright sunlight.
© 2016 Scientific American
Laurence O'Dwyer

Until as late as 2013, a joint (or comorbid) diagnosis of autism and attention deficit hyperactivity disorder (ADHD) was not permitted by the most influential psychiatric handbook, the Diagnostic and Statistical Manual of Mental Disorders (DSM). The DSM is an essential tool in psychiatry as it allows clinicians and researchers to use a standard framework for classifying mental disorders. Health insurance companies and drug regulation agencies also use the DSM, so its definition of what does or doesn’t constitute a particular disorder can have far-reaching consequences. One of the reasons for the prohibition of a comorbid diagnosis of autism and ADHD was that the severity of autism placed it above ADHD in the diagnostic hierarchy, so the inattention that is normally present in autism did not seem to merit an additional diagnosis. Nevertheless, that was an odd state of affairs, as any clinician working in the field would be able to quote studies that point to anything from 30% to 80% of patients with autism also having ADHD. More problematic still is the fact that patients with both sets of symptoms may respond poorly to standard ADHD treatments or have increased side effects. The fifth edition of the DSM opened the way for a more detailed look at this overlap, and just a year after the new guidelines were adopted, a consortium (which I am a part of) at Radboud University in Nijmegen, the Netherlands, called NeuroIMAGE published a paper which showed that autistic traits in ADHD participants could be predicted by complex interactions between grey and white matter volumes in the brain. © 2016 Guardian News and Media Limited