Chapter 14. Attention and Consciousness
By Erika Hayasaki

For 40 years, Joel Dreyer was a respected psychiatrist who oversaw a clinic for troubled children, belonged to an exclusive country club, and doted on his four daughters and nine grandchildren. Then, suddenly, he became a major drug dealer. Why?

In the 1980s, psychiatrist Joel Dreyer was a fixture on Detroit’s WXYZ Channel 7. His commercials promoting his treatment center, InnerVisions, which he named after the Stevie Wonder album, sometimes ran up to five times a day. In one ad, Dreyer blocks a bartender from serving a mug of beer to a patron and says, “Don’t let your marriage or your job suffer from alcohol or drugs.” In another, Dreyer, in a navy pinstriped suit with a white pocket square, looks into the camera, his expression concerned and sympathetic. “Don’t you want to talk to someone who will listen?” he asks. “Someone who won’t pass judgment? Someone who cares? Come talk to me.”

InnerVisions, which was based in Southfield, a suburb northwest of Detroit, had a staff of 80 physicians, psychologists, and therapists and took up two floors of a high-rise. It had made Dreyer not only a public figure but also wealthy. He maintained a side career as an expert witness. Attorneys called on him because he was smart, charming, and persuasive. Dreyer mostly testified for the defense, and with each high-profile case, his celebrity grew. Between the clinic, trial work, and his private practice, he was earning as much as $450,000 a year. Dreyer loved to be the center of attention. He would sometimes ride to work on a motorcycle in a bejeweled Elvis outfit to entertain his colleagues.
Archy de Berker and Sven Bestmann

A great deal of excitement has been generated in recent weeks by a review paper examining the literature on the drug modafinil, which concluded that “modafinil may well deserve the title of the first well-validated pharmaceutical ‘nootropic’ [cognitive enhancing] agent”. Coverage in the Guardian, Telegraph, British Medical Journal, and the Independent all called attention to the work, with a press release from Oxford University trumpeting “Review of ‘smart drug’ shows modafinil does enhance cognition”. The paper in question is a well-written summary of the recent literature (though it probably underestimates side effects, as pointed out in the British Medical Journal). A deeper problem is that reviews do not “show” anything. Reviews can be educational and informative, but that’s not the same as using all of the available data to test whether something works or not. Two different scientists can write reviews on the same topic and come to completely different conclusions. You can think of reviews as a watercolour painting of current knowledge. We sometimes forget that this is a far cry from a technical drawing, each element measured, quantified, and bearing a strict resemblance to reality. Scientists, and the public, trying to figure out what works face a tricky problem: there will often be many papers on a given topic, offering a variety of sometimes conflicting conclusions. Fortunately, we have a well-developed toolkit for assessing the state of the current literature and drawing conclusions from it. This procedure is called meta-analysis; it combines the available sources of data (e.g., published studies), and is extensively used to assess the efficacy of medical interventions. Initiatives such as the Cochrane Collaboration use meta-analyses to synthesize available evidence into a consensus on what works and what doesn’t. © 2015 Guardian News and Media Limited
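To make the distinction concrete, here is a minimal sketch of the inverse-variance weighting at the core of a fixed-effect meta-analysis: each study is weighted by the precision of its estimate, so tighter studies count for more. The effect sizes and standard errors below are invented for illustration; they are not drawn from the modafinil literature.

```python
# Minimal fixed-effect meta-analysis sketch: weight each study by the
# inverse of its variance, then pool. The (effect size, standard error)
# pairs are invented examples, not real modafinil results.
import math

studies = [(0.30, 0.15), (0.10, 0.20), (0.45, 0.25)]

weights = [1 / se ** 2 for _, se in studies]            # precision = 1/variance
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```

This is the step a narrative review never performs: it summarizes studies in prose, whereas the pooling above forces every study's estimate and uncertainty into a single quantitative answer.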
By Kelli Whitlock Burton

They say beauty is in the eye of the beholder. But whether the beholder’s opinion is a product of one's genes or one's environment has long been a question for scientists. Although some research suggests that a preference for certain physical traits, such as height or muscular build, may be encoded in our genes, a new study finds it’s our individual life experiences that lead us to find one face more attractive than another. To get some closure on the nature versus nurture debate in human aesthetics, researchers asked 547 pairs of identical twins and 214 pairs of same-gender fraternal twins to view 200 faces and rate them on a scale of one to seven, with one being the least attractive and seven the most attractive. A group of 660 nontwins then completed the same survey. If genes were more involved in facial preference, identical twins would have had similar ratings; if the influence of a familial environment carried more weight, fraternal twins would have also answered similarly. However, most twins’ scores were quite different from one another, suggesting that something else was at play. The researchers suspect that it’s an individual’s life experiences that guide our opinions of attractiveness. The findings, reported today in Current Biology, build on earlier work by the same team that shows the ability to recognize faces is largely a genetic trait. The research is ongoing, and you can participate, too. Just complete the facial preference survey through the researchers’ website at: www.TestMyBrain.org. © 2015 American Association for the Advancement of Science.
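The twin-comparison logic can be made concrete with Falconer's classic back-of-envelope formula, which estimates heritability as twice the gap between identical-twin and fraternal-twin correlations. The sketch below uses invented placeholder correlations, not figures from the study (which would have used a fuller variance-decomposition model).

```python
# Falconer's back-of-envelope decomposition for twin studies:
# heritability h2 ~= 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are trait
# correlations within identical and fraternal twin pairs respectively.
# The correlations below are invented placeholders, not the paper's data.

def falconer(r_mz: float, r_dz: float) -> dict:
    h2 = 2 * (r_mz - r_dz)   # additive genetic share of the variance
    c2 = r_mz - h2           # shared (family) environment
    e2 = 1 - r_mz            # unique experience plus measurement error
    return {"heritability": h2, "shared_env": c2, "unique_env": e2}

# If identical twins agree only slightly more than fraternal twins about
# which faces are attractive, the genetic share comes out small and the
# unique-environment share dominates:
print(falconer(r_mz=0.25, r_dz=0.20))
# -> heritability ~0.10, shared_env ~0.15, unique_env ~0.75
```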
Are you good at picking someone out of a crowd? Most of us are better at recognising faces than distinguishing between other similar objects, so it’s long been suspected there’s something mysterious about the way the brain processes a face. Now further evidence has emerged that this is a special, highly evolved skill. A study of twins suggests there are genes influencing face recognition abilities that are distinct from the ones affecting intelligence – so it’s not that people who are good with faces just have a better memory, for instance. “The idea is that telling friend from foe was so important to survival that there was very strong pressure to improve that trait,” says Nicholas Shakeshaft of King’s College London. Previous studies using brain scanning have suggested there is a part of the brain dedicated to recognising faces, called the fusiform face area. But others have suggested this region may in fact just be used for discriminating between any familiar objects. Wondering if genetics could shed any light, Shakeshaft’s team tested more than 900 sets of UK twins – including both identical and non-identical pairs – on their face recognition skills. The ability turned out to be highly heritable, with identical twins having more similar abilities than fraternal ones. The same went for intelligence, which had earlier been tested as part of a long-running study. © Copyright Reed Business Information Ltd.
By Judith Berck

The 73-year-old widow came to see Dr. David Goodman, an assistant professor in the psychiatry and behavioral sciences department at Johns Hopkins School of Medicine, after her daughter had urged her to “see somebody” for her increasing forgetfulness. She was often losing her pocketbook and keys and had trouble following conversations, and 15 minutes later couldn’t remember much of what was said. But he did not think she had early Alzheimer’s disease. The woman’s daughter and granddaughter had both been given a diagnosis of A.D.H.D. a few years earlier, and Dr. Goodman, who is also the director of a private adult A.D.H.D. clinical and research center outside of Baltimore, asked about her school days as a teenager. “She told me: ‘I would doodle because I couldn’t pay attention to the teacher, and I wouldn’t know what was going on. The teacher would move me to the front of the class,’ ” Dr. Goodman said. After interviewing her extensively, noting the presence of patterns of impairment that spanned the decades, Dr. Goodman diagnosed A.D.H.D. He prescribed Vyvanse, a long-acting stimulant of the central nervous system. A few weeks later, the difference was remarkable. “She said: ‘I’m surprised, because I’m not misplacing my keys now, and I can remember things better. My mind isn’t wandering off, and I can stay in a conversation. I can do something until I finish it,’ ” Dr. Goodman said. Once seen as a disorder affecting mainly children and young adults, attention deficit hyperactivity disorder is increasingly understood to last throughout one’s lifetime. © 2015 The New York Times Company
HOW would you punish a murderer? Your answer will depend on how active a certain part of your brain happens to be. Joshua Buckholtz at Harvard University and his colleagues gave 66 volunteers scenarios involving a fictitious criminal called John. Some of his crimes were planned. In others, he was experiencing psychosis or distress – for example, because his daughter’s life was under threat. The volunteers had to decide how responsible John was for each crime and the severity of his punishment on a scale of 0 to 9. Before hearing the stories, some of the volunteers received magnetic stimulation to a brain region involved in decision-making, called the dorsolateral prefrontal cortex (DLPFC), which dampened its activity. The others were given a sham treatment. Inhibiting the DLPFC didn’t affect how responsible the volunteers thought John was for the crimes, or the punishment he should receive when he was not culpable for his actions. But the volunteers with dampened DLPFC activity meted out a much less severe punishment than the control group when John had planned his crime (Neuron, doi.org/7rh). “By altering one process in the brain, we can alter our judgements,” says Christian Ruff at the Swiss Federal Institute of Technology in Zurich. In the justice system, the judgment stage to determine guilt is separated from sentencing, says James Tabery at the University of Utah. “It turns out that our brains work in a similar fashion.” © Copyright Reed Business Information Ltd.
By Martin Enserink

AMSTERDAM—Is being a woman a disadvantage when you're applying for grant money in the Netherlands? Yes, say the authors of a paper published by the Proceedings of the National Academy of Sciences (PNAS) this week. The study showed that women have a lower chance than men of winning early career grants from the Netherlands Organization for Scientific Research (NWO), the country's main grant agency. NWO, which commissioned the study, accepted the results and announced several changes on Monday to rectify the problem. "NWO will devote more explicit attention to the gender awareness of reviewers in its methods and procedures," a statement said. But several Dutch scientists who have taken a close look at the data say they see no evidence of sexism. The PNAS paper, written by Romy van der Lee and Naomi Ellemers of Leiden University's Institute of Psychology, is an example of a classic statistical trap, says statistician Casper Albers of the University of Groningen, who tore the paper apart in a blog post yesterday. (In Dutch; a shortened translation in English is here.) Albers says he plans to send the piece as a commentary to PNAS as well. Van der Lee and Ellemers analyzed 2823 applications for NWO's Veni grants for young researchers in the years 2010, 2011, and 2012. Overall, women had a success rate of 14.9%, compared with 17.7% for men, they wrote, and that difference was statistically significant. But Albers says the difference evaporates if you look more closely at sex ratios and success rates in NWO's nine scientific disciplines. Those data, which Van der Lee and Ellemers provided in a supplement to their paper, show that women simply apply more often in fields where the chance of success is low. © 2015 American Association for the Advancement of Science
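The statistical trap Albers describes is Simpson's paradox, and it is easiest to see with numbers. The sketch below uses invented application counts, not NWO's actual data, to show how two fields that each treat men and women identically can still produce an aggregate gap, simply because women apply more often in the field with the lower success rate.

```python
# Toy illustration of Simpson's paradox with invented figures (not
# NWO's data): within each field the success rate is identical for
# men and women, yet the aggregate rates differ.

# field: (women_apps, women_awards, men_apps, men_awards)
fields = {
    "social sciences": (300, 30, 100, 10),   # 10% success for both sexes
    "physics":         (100, 25, 300, 75),   # 25% success for both sexes
}

for sex, a, w in (("women", 0, 1), ("men", 2, 3)):
    apps = sum(v[a] for v in fields.values())
    wins = sum(v[w] for v in fields.values())
    print(f"{sex}: overall success {wins / apps:.1%}")
# women: overall success 13.8%  <- looks like a gap...
# men:   overall success 21.2%  <- ...yet every field treats both equally
```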
By Simon Makin

Most people associate the term “subliminal conditioning” with dystopian sci-fi tales, but a recent study has used the technique to alter responses to pain. The findings suggest that information that does not register consciously teaches our brain more than scientists previously suspected. The results also offer a novel way to think about the placebo effect. Our perception of pain can depend on expectations, which explains placebo pain relief—and placebo's evil twin, the nocebo effect (if we think something will really hurt, it can hurt more than it should). Researchers have studied these expectation effects using conditioning techniques: they train people to associate specific stimuli, such as certain images, with different levels of pain. The subjects' perception of pain can then be reduced or increased by seeing the images during something painful. Most researchers assumed these pain-modifying effects required conscious expectations, but the new study, from a team at Harvard Medical School and the Karolinska Institute in Stockholm, led by Karin Jensen, shows that even subliminal input can modify pain—a more cognitively complex process than most that have previously been discovered to be susceptible to subliminal effects. The scientists conditioned 47 people to associate two faces with either high or low pain levels from heat applied to their forearm. Some participants saw the faces normally, whereas others were exposed subliminally—the images were flashed so briefly that the participants were not aware of seeing them, as verified by recognition tests. © 2015 Scientific American
William Sutcliffe

Most epidemics are the result of a contagious disease. ADHD – Attention Deficit Hyperactivity Disorder – is not contagious, and it may not even be a genuine malady, but it has acquired the characteristics of an epidemic. New data has revealed that UK prescriptions for Ritalin and other similar ADHD medications have more than doubled in the last decade, from 359,100 in 2004 to 922,200 last year. In America, the disorder is now the second most frequent long-term diagnosis made in children, narrowly trailing asthma. It generates pharmaceutical sales worth $9bn (£5.7bn) per year. Yet clinical proof of ADHD as a genuine illness has never been found. Sami Timimi, consultant child psychiatrist at Lincolnshire NHS Trust and visiting professor of child psychiatry, is a vocal critic of the Ritalin-friendly orthodoxy within the NHS. While he is at pains to stress that he is “not saying those who have the diagnosis don’t have any problem”, he is adamant that “there is no robust evidence to demonstrate that what we call ADHD correlates with any known biological or neurological abnormality”. The hyperactivity, inattentiveness and lack of impulse control that are at the heart of an ADHD diagnosis are, according to Timimi, simply “a collection of behaviours”. Any psychiatrist who claims that a behaviour is being caused by ADHD is perpetrating a “philosophical tautology” – he is doing nothing more than telling you that hyperactivity is caused by an alternative name for hyperactivity. There is still no diagnostic test – no marker in the body – that can identify a person with ADHD. The results of more than 40 brain scan studies are described by Timimi as “consistently inconsistent”. No conclusive pattern in brain activity has been found to explain or identify ADHD. © independent.co.uk
By AMY HARMON

Some neuroscientists believe it may be possible, within a century or so, for our minds to continue to function after death — in a computer or some other kind of simulation. Others say it’s theoretically impossible, or impossibly far off in the future. A lot of pieces have to fall into place before we can even start thinking about testing the idea. But new high-tech efforts to understand the brain are also generating methods that make those pieces seem, if not exactly imminent, then at least a bit more plausible. Here’s a look at how close, and far, we are to some requirements for this version of “mind uploading.” The hope of mind uploading rests on the premise that much of the key information about who we are is stored in the unique pattern of connections between our neurons, the cells that carry electrical and chemical signals through living brains. You wouldn't know it from the outside, but there are more of those connections — individually called synapses, collectively known as the connectome — in a cubic centimeter of the human brain than there are stars in the Milky Way galaxy. The basic blueprint is dictated by our genes, but everything we do and experience alters it, creating a physical record of all the things that make us US — our habits, tastes, memories, and so on. It is exceedingly tricky to get that pattern of connections into a state where it is both safe from decay and can be verified as intact. But in recent months, two sets of scientists said they had devised separate ways to do that for the brains of smaller mammals. If either is scaled up to work for human brains — still a big if — then theoretically your brain could sit on a shelf or in a freezer for centuries while scientists work on the rest of these steps. © 2015 The New York Times Company
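The stars-versus-synapses comparison is easy to sanity-check with a back-of-envelope calculation. The sketch below uses commonly cited order-of-magnitude figures — assumptions on my part, not numbers from the article — for total synapse count, brain volume, and the Milky Way's star count.

```python
# Back-of-envelope check of the synapses-vs-stars claim. All three
# inputs are commonly cited ballpark assumptions, not article figures.
synapses_total = 1e14      # ~10^14 synapses in an adult human brain
brain_volume_cm3 = 1.2e3   # ~1200 cm^3 of brain tissue
stars_milky_way = 2e11     # estimates run ~1-4 x 10^11 stars

synapses_per_cm3 = synapses_total / brain_volume_cm3
print(f"synapses per cm^3: {synapses_per_cm3:.1e}")   # ~8.3e10
print(f"stars in galaxy:   {stars_milky_way:.1e}")    # ~2.0e11
# Same order of magnitude on the whole-brain average, and cortical
# synapse density runs higher still -- so the claim is plausible.
```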
Neel V. Patel

The concept of the insanity defense dates back to ancient Greece and the Roman Empire. The idea has always been the same: Protect individuals from being held accountable for behavior they couldn’t control. Yet there have been more than a few historical and recent instances of a judge or jury issuing a controversial “by reason of…” verdict. What was intended as a human rights effort has become a last-ditch way to save killers (though it didn’t work for James Holmes). The question that hangs in the air at these sorts of proceedings has always been the same: Is there a way to make determinations more scientific and less traditionally judicial? Adam Shniderman, a criminal justice researcher at Texas Christian University, has been studying the role of neuroscience in the court system for several years now. He explains that neurological data and explanations don’t easily translate into the world of lawyers and legal text. Inverse spoke with Shniderman to learn more about how neuroscience is used in today’s insanity defenses, and whether this is likely to change as the technology used to observe the brain gets better and better.

Can you give me a quick overview of how the role of neuroscience in the courts has changed over the years? Especially in the last few decades with new advances in technology.

Obviously, [neuroscientific evidence] has become more widely used as brain-scanning technology has gotten better. Some of the scanning technology we use now, like functional MRI that measures blood oxygenation as a proxy for neurological activity, is relatively new within the last 20 years or so. The nature of brain scanning has changed, but the knowledge that the brain influences someone’s actions is not new.
By Steve Mirsky

It's nice to know that the great man we celebrate in this special issue had a warm sense of humor. For example, in 1943 Albert Einstein received a letter from a junior high school student who mentioned that her math class was challenging. He wrote back, “Do not worry about your difficulties in mathematics; I can assure you that mine are still greater.” Today we know that his sentiment could also have been directed at crows, which are better at math than those members of various congressional committees that deal with science who refuse to acknowledge that global temperatures keep getting higher. Studies show that crows can easily discriminate between a group of, say, three objects and another containing nine. They have more trouble telling apart groups that are almost the same size, but unlike the aforementioned committee members, at least they're trying. A study in the Proceedings of the National Academy of Sciences USA finds that the brain of a crow has nerve cells that specialize in determining numbers—a method quite similar to what goes on in our primate brain. Human and crow brains are substantially different in size and organization, but convergent evolution seems to have decided that this kind of neuron-controlled numeracy is a good system. (Crows are probably unaware of evolution, which is excusable. Some members of various congressional committees that deal with science pad their reactionary résumés by not accepting evolution, which is astonishing.) © 2015 Scientific American
Mo Costandi

In an infamous set of experiments performed in the 1960s, psychologist Walter Mischel sat pre-school kids at a table, one by one, and placed a sweet treat – a small marshmallow, a biscuit, or a pretzel – in front of them. Each of the young participants was told that they would be left alone in the room, and that if they could resist the temptation to eat the sweet on the table in front of them, they would be rewarded with more sweets when the experimenter returned. The so-called Marshmallow Test was designed to test self-control and delayed gratification. Mischel and his colleagues tracked some of the children as they grew up, and then claimed that those who managed to hold out for longer in the original experiment performed better at school, and went on to become more successful in life, than those who couldn’t resist the temptation to eat the treat before the researcher returned to the room. The ability to exercise willpower and inhibit impulsive behaviours is considered to be a core feature of the brain’s executive functions, a set of neural processes - including attention, reasoning, and working memory - which regulate our behaviour and thoughts, and enable us to adapt them according to the changing demands of the task at hand. Executive function is a rather vague term, and we still don’t know much about its underlying brain mechanisms, or about how different components of this control system are related to one another. New research shows that self-control and memory share, and compete with each other for, the same brain mechanisms, such that exercising willpower saps these common resources and impairs our ability to encode memories. © 2015 Guardian News and Media Limited
Shankar Vedantam

Girls often outperform boys in science and math at an early age but are less likely to choose tough courses in high school. An Israeli experiment demonstrates how biases of teachers affect students.

RENEE MONTAGNE, HOST: At early ages, girls often outperform boys in math and science classes. Later, something changes. By the time they get into high school, girls are less likely than boys to take difficult math courses and less likely, again, to go into careers in science, technology, engineering or medicine. To learn more about this, David Greene spoke with NPR social science correspondent Shankar Vedantam.

SHANKAR VEDANTAM, BYLINE: Well, the new study suggests, David, that some of these outcomes might be driven by the unconscious biases of elementary school teachers. What's remarkable about the new work is it doesn't just theorize about the gender gap, it actually has very hard evidence. Edith Sand at Tel Aviv University and her colleague, Victor Lavy, analyzed the math test scores of about 3,000 students in Tel Aviv. When the students were in sixth grade, the researchers got two sets of math test scores. One set of scores were given by the classroom teachers, who obviously knew the children whom they were grading. The second set of scores were from external teachers who did not know if the children they were grading were either boys or girls. So the external teachers were blind to the gender of the children. © 2015 NPR
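The design rests on a simple comparison: the same pupils receive one score from a teacher who knows them and one from a grader blind to gender, so any systematic difference between the two gaps points at the graders rather than the pupils. Here is a minimal sketch of that difference-in-differences logic, with invented scores standing in for the Tel Aviv data.

```python
# Sketch of the blind-vs-non-blind grading comparison. Every score
# below is invented for illustration; only the logic mirrors the study.
from statistics import mean

scores = {  # (teacher_score, blind_score) per pupil
    "girls": [(78, 85), (70, 74), (80, 88)],
    "boys":  [(82, 80), (75, 73), (79, 78)],
}

gap = {}
for group, pairs in scores.items():
    # negative gap: the known-identity teacher under-scores this group
    gap[group] = mean(t for t, _ in pairs) - mean(b for _, b in pairs)

print(gap)                                   # girls ~ -6.3, boys ~ +1.7
print("bias estimate:", gap["boys"] - gap["girls"])   # ~ 8 points
```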
By LISA FELDMAN BARRETT

Boston — IS psychology in the midst of a research crisis? An initiative called the Reproducibility Project at the University of Virginia recently reran 100 psychology experiments and found that over 60 percent of them failed to replicate — that is, their findings did not hold up the second time around. The results, published last week in Science, have generated alarm (and in some cases, confirmed suspicions) that the field of psychology is in poor shape. But the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works. Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate. Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test. A number of years ago, for example, scientists conducted an experiment on fruit flies that appeared to identify the gene responsible for curly wings. The results looked solid in the tidy confines of the lab, but out in the messy reality of nature, where temperatures and humidity varied widely, the gene turned out not to reliably have this effect. In a simplistic sense, the experiment “failed to replicate.” But in a grander sense, as the evolutionary biologist Richard Lewontin has noted, “failures” like this helped teach biologists that a single gene produces different characteristics and behaviors, depending on the context. © 2015 The New York Times Company
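Barrett's argument can be illustrated with a toy simulation: if an effect is real only under certain conditions, one competently run study can detect it while another, run where the hidden moderator is absent, finds nothing. The effect size and the moderator below are invented purely for illustration.

```python
# Toy simulation of a context-dependent effect: two labs run the same
# clean experiment, but a hidden moderator switches the effect off in
# the second lab. The effect size (0.8) and samples are invented.
import random

random.seed(1)

def run_study(effect_present: bool, n: int = 40) -> float:
    """Observed mean difference between treated and control groups."""
    effect = 0.8 if effect_present else 0.0
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(effect, 1) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

print(f"Study A (moderator present): {run_study(True):+.2f}")   # ~ +0.8
print(f"Study B (moderator absent):  {run_study(False):+.2f}")  # ~ 0: a 'failed' replication
```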
By James Gallagher, Health editor, BBC News website

Close your eyes and imagine walking along a sandy beach and then gazing over the horizon as the Sun rises. How clear is the image that springs to mind? Most people can readily conjure images inside their head - known as their mind's eye. But this year scientists have described a condition, aphantasia, in which some people are unable to visualise mental images. Niel Kenmuir, from Lancaster, has always had a blind mind's eye. He knew he was different even in childhood. "My stepfather, when I couldn't sleep, told me to count sheep, and he explained what he meant, I tried to do it and I couldn't," he says. "I couldn't see any sheep jumping over fences, there was nothing to count." Our memories are often tied up in images, think back to a wedding or first day at school. As a result, Niel admits, some aspects of his memory are "terrible", but he is very good at remembering facts. And, like others with aphantasia, he struggles to recognise faces. Yet he does not see aphantasia as a disability, but simply a different way of experiencing life.

Take the aphantasia test

It is impossible to see what someone else is picturing inside their head. Psychologists use the Vividness of Visual Imagery Questionnaire, which asks you to rate different mental images, to test the strength of the mind's eye. The University of Exeter has developed an abridged version that lets you see how your mind compares. © 2015 BBC.
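As a rough sketch of how such a questionnaire is typically scored, the snippet below averages a respondent's vividness self-ratings. The scale direction and the cutoff suggesting aphantasia are illustrative assumptions, not the published VVIQ scoring rules or the Exeter version.

```python
# Minimal sketch of questionnaire-style imagery scoring: each item is a
# self-rating of image vividness and the score is their average. The
# 1-5 scale direction and the cutoff are assumptions for illustration.
from statistics import mean

def vividness_score(ratings: list[int]) -> float:
    """Average vividness, 1 = 'no image at all' ... 5 = 'as vivid as seeing'."""
    assert all(1 <= r <= 5 for r in ratings)
    return mean(ratings)

responses = [1, 1, 2, 1, 1, 2, 1, 1]   # a hypothetical respondent
score = vividness_score(responses)
print(f"mean vividness: {score:.2f}")
if score <= 1.5:                        # illustrative cutoff only
    print("pattern consistent with aphantasia")
```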
By Elizabeth Kolbert

C57BL/6J mice are black, with pink ears and long pink tails. Inbred for the purposes of experimentation, they exhibit a number of infelicitous traits, including a susceptibility to obesity, a taste for morphine, and a tendency to nibble off their cage mates’ hair. They’re also tipplers. Given access to ethanol, C57BL/6J mice routinely suck away until the point that, were they to get behind the wheel of a Stuart Little-size roadster, they’d get pulled over for D.U.I. Not long ago, a team of researchers at Temple University decided to take advantage of C57BL/6Js’ bad habits to test a hunch. They gathered eighty-six mice and placed them in Plexiglas cages, either singly or in groups of three. Then they spiked the water with ethanol and videotaped the results. Half of the test mice were four weeks old, which, in murine terms, qualifies them as adolescents. The other half were twelve-week-old adults. When the researchers watched the videos, they found that the youngsters had, on average, outdrunk their elders. More striking still was the pattern of consumption. Young male C57BL/6Js who were alone drank roughly the same amount as adult males. But adolescent males with cage mates went on a bender; they spent, on average, twice as much time drinking as solo boy mice and about thirty per cent more time than solo girls. The researchers published the results in the journal Developmental Science. In their paper, they noted that it was “not possible” to conduct a similar study on human adolescents, owing to the obvious ethical concerns. But, of course, similar experiments are performed all the time, under far less controlled circumstances. Just ask any college dean. Or ask a teen-ager.
By Simon Worrall, National Geographic

How do we know we exist? What is the self? These are some of the questions science writer Anil Ananthaswamy asks in his thought-provoking new book, The Man Who Wasn’t There: Investigations Into the Strange New Science of the Self. The answers, he says, may lie in medical conditions like Cotard’s syndrome, Alzheimer’s or body integrity identity disorder, which causes some people to try to amputate their own limbs. Speaking from Berkeley, California, he explains why Antarctic explorer Ernest Shackleton fell victim to the doppelgänger effect; how neuroscience is rewriting our ideas about identity; and how a song by George Harrison of the Beatles offers a critique of the Western view of the self.

You dedicate the book to “those of us who want to let go but wonder, who is letting go and of what?” Explain that statement.

We always hear within popular culture that we have to “let go,” as a way of dealing with certain situations in our lives. And in some sense you have to wonder about that statement because the person or thing doing the letting go is also probably what has to be let go. In the book, I am trying to get behind the whole issue of what the self is that has to do the letting go; and what aspects of the self have to be let go of.

You start your book with Alzheimer’s. Tell us about the origin of the condition and what it tells us about “the autobiographical self.”

Alzheimer’s is a very severe condition, especially during the mid- to late stages, which starts robbing people of their ability to remember anything that’s happening to them. They also start forgetting the people they are close to. © 1996-2015 National Geographic Society
By NINA STROHMINGER and SHAUN NICHOLS

WHEN does the deterioration of your brain rob you of your identity, and when does it not? Alzheimer’s, the neurodegenerative disease that erodes old memories and the ability to form new ones, has a reputation as a ruthless plunderer of selfhood. People with the disease may no longer seem like themselves. Neurodegenerative diseases that target the motor system, like amyotrophic lateral sclerosis, can lead to equally devastating consequences: difficulty moving, walking, speaking and eventually, swallowing and breathing. Yet they do not seem to threaten the fabric of selfhood in quite the same way. Memory, it seems, is central to identity. And indeed, many philosophers and psychologists have supposed as much. This idea is intuitive enough, for what captures our personal trajectory through life better than the vault of our recollections? But maybe this conventional wisdom is wrong. After all, the array of cognitive faculties affected by neurodegenerative diseases is vast: language, emotion, visual processing, personality, intelligence, moral behavior. Perhaps some of these play a role in securing a person’s identity. The challenge in trying to determine what parts of the mind contribute to personal identity is that each neurodegenerative disease can affect many cognitive systems, with the exact constellation of symptoms manifesting differently from one patient to the next. For instance, some Alzheimer’s patients experience only memory loss, whereas others also experience personality change or impaired visual recognition. The only way to tease apart which changes render someone unrecognizable is to compare all such symptoms across multiple diseases. And that’s just what we did, in a study published this month in Psychological Science. © 2015 The New York Times Company
Helen Thomson

Modafinil is the world’s first safe “smart drug”, researchers at Harvard and Oxford universities have said, after performing a comprehensive review of the drug. They concluded that the drug, which is prescribed for narcolepsy but is increasingly taken without prescription by healthy people, can improve decision-making and problem-solving, and possibly even make people think more creatively. While acknowledging that there was limited information available on the effects of long-term use, the reviewers said that the drug appeared safe to take in the short term, with few side effects and no addictive qualities. Modafinil has become increasingly common in universities across Britain and the US. Prescribed in the UK as Provigil, it was licensed in 2002 for use as a treatment for narcolepsy - a brain disorder that can cause a person to suddenly fall asleep at inappropriate times or to experience chronic pervasive sleepiness and fatigue. Used without prescription, and bought through easy-to-find websites, modafinil is what is known as a smart drug - used primarily by people wanting to improve their focus before an exam. A poll of Nature journal readers suggested that one in five have used drugs to improve focus, with 44% stating modafinil as their drug of choice. But despite its increasing popularity, there has been little consensus on the extent of modafinil’s effects in healthy, non-sleep-disordered humans. A new review of 24 of the most recent modafinil studies suggests that the drug has many positive effects in healthy people, including enhancing attention, improving learning and memory and increasing something called “fluid intelligence” - essentially our capacity to solve problems and think creatively. © 2015 Guardian News and Media Limited