Chapter 18. Attention and Higher Cognition
By Scott Barry Kaufman "Just because a diagnosis [of ADHD] can be made does not take away from the great traits we love about Calvin and his imaginary tiger friend, Hobbes. In fact, we actually love Calvin BECAUSE of his ADHD traits. Calvin’s imagination, creativity, energy, lack of attention, and view of the world are the gifts that Mr. Watterson gave to this character." -- The Dragonfly Forest In his 2004 book "Creativity is Forever", Gary Davis reviewed the creativity literature from 1961 to 2003 and identified 22 recurring personality traits of creative people. These included 16 "positive" traits (e.g., independent, risk-taking, high energy, curiosity, humor, artistic, emotional) and 6 "negative" traits (e.g., impulsive, hyperactive, argumentative). In her own review of the creativity literature, Bonnie Cramond found that many of these same traits overlap to a substantial degree with behavioral descriptions of Attention Deficit Hyperactivity Disorder (ADHD), including higher levels of spontaneous idea generation, mind wandering, daydreaming, sensation seeking, energy, and impulsivity. Research since then has supported the notion that people with ADHD characteristics are more likely to reach higher levels of creative thought and achievement than people without these characteristics (see here, here, here, here, here, here, here, here, here, and here). Recent research by Darya Zabelina and colleagues has found that real-life creative achievement is associated with the ability to broaden attention and have a “leaky” mental filter, something at which people with ADHD excel. © 2016 Scientific American
Link ID: 22166 - Posted: 05.02.2016
By Adam Bear It happens hundreds of times a day: We press snooze on the alarm clock, we pick a shirt out of the closet, we reach for a beer in the fridge. In each case, we conceive of ourselves as free agents, consciously guiding our bodies in purposeful ways. But what does science have to say about the true source of this experience? In a classic paper published almost 20 years ago, the psychologists Dan Wegner and Thalia Wheatley made a revolutionary proposal: The experience of intentionally willing an action, they suggested, is often nothing more than a post hoc causal inference that our thoughts caused some behavior. The feeling itself, however, plays no causal role in producing that behavior. This could sometimes lead us to think we made a choice when we actually didn’t or think we made a different choice than we actually did. But there’s a mystery here. Suppose, as Wegner and Wheatley propose, that we observe ourselves (unconsciously) perform some action, like picking out a box of cereal in the grocery store, and then only afterwards come to infer that we did this intentionally. If this is the true sequence of events, how could we be deceived into believing that we had intentionally made our choice before the consequences of this action were observed? This explanation for how we think of our agency would seem to require supernatural backwards causation, with our experience of conscious will being both a product and an apparent cause of behavior. In a study just published in Psychological Science, Paul Bloom and I explore a radical—but non-magical—solution to this puzzle. © 2016 Scientific American
Link ID: 22158 - Posted: 04.30.2016
Yuki Noguchi Hey! Wake up! Need another cup of coffee? Join the club. Apparently about a third of Americans are sleep-deprived. And their employers are probably paying for it, too, in the form of mistakes, productivity loss, accidents and increased health insurance costs. A recent Robert Wood Johnson Foundation report found a third of Americans get less sleep than the recommended seven hours. Another survey by Accountemps, an accounting services firm, put that number at nearly 75 percent in March. Bill Driscoll, Accountemps' regional president in the greater Boston area, says some sleepy accountants even admitted it caused them to make costly mistakes. "One person deleted a project that took 1,000 hours to put together," Driscoll says. "Another person missed a decimal point on an estimated payment and the client overpaid by $1 million. Oops." William David Brown, a sleep psychologist at the University of Texas Southwestern Medical School and author of Sleeping Your Way To The Top, says Americans are sacrificing more and more sleep every year. Fatigue is cumulative, he says, and missing the equivalent of one night's sleep is like having a blood alcohol concentration of about .1 — above the legal limit to drive. "About a third of your employees in any big company are coming to work with an equivalent impairment level of being intoxicated," Brown says. © 2016 npr
By Matthew A. Scult My heart pounds as I sprint to the finish line. Thousands of spectators cheer as a sense of elation washes over me. I savor the feeling. But then, the image slowly fades away and my true surroundings come into focus. I am lying in a dark room with my head held firmly in place, inside an MRI scanner. While this might typically be unpleasant, I am a willing research study participant and am eagerly anticipating what comes next. I hold my breath as I stare at the bar on the computer screen representing my brain activity. Then the bar jumps. My fantasy of winning a race had caused the “motivation center” of my brain to surge with activity. I am participating in a study about neurofeedback, a diverse and fascinating area of research that combines neuroscience and technology to monitor and modulate brain activity in real time. My colleagues, Katie Dickerson and Jeff MacInnes, in the Adcock Lab at Duke University, are studying whether people can train themselves to increase brain activity in a tiny region of the brain called the VTA, or ventral tegmental area. Notably, the VTA is thought to be involved in motivation—the desire to get something that you want. For example, if I told you that by buying a lottery ticket you would be guaranteed to win $1,000,000, you would probably be very motivated to buy the ticket and would have a spike in brain activity in this region of your brain. But while studies have shown that motivation for external rewards (like money) activates the VTA, until now, we didn’t know whether people could internally generate a motivational state that would activate this brain region. To see if people can self-activate the VTA, my colleagues are using neurofeedback, which falls under the broader umbrella of biofeedback. © 2016 Scientific American
By JAMES GORMAN Bees find nectar and tell their hive-mates; flies evade the swatter; and cockroaches seem to do whatever they like wherever they like. But who would believe that insects are conscious, that they are aware of what’s going on, not just little biobots? Neuroscientists and philosophers apparently. As scientists lean increasingly toward recognizing that nonhuman animals are conscious in one way or another, the question becomes: Where does consciousness end? Andrew B. Barron, a cognitive scientist, and Colin Klein, a philosopher, at Macquarie University in Sydney, Australia, propose in Proceedings of the National Academy of Sciences that insects have the capacity for consciousness. This does not mean that a honeybee thinks, “Why am I not the queen?” or even, “Oh, I like that nectar.” But, Dr. Barron and Dr. Klein wrote in a scientific essay, the honeybee has the capacity to feel something. Their claim stops short of some others. Christof Koch, the president and chief scientific officer of the Allen Institute for Brain Science in Seattle, and Giulio Tononi, a neuroscientist and psychiatrist at the University of Wisconsin, have proposed that consciousness is nearly ubiquitous and can be present, in varying degrees, even in nonliving arrangements of matter. They say that rather than wonder how consciousness arises, one should look at where we know it exists and go from there to where else it might exist. They conclude that it is an inherent property of physical systems in which information moves around in a certain way — and that could include some kinds of artificial intelligence and even naturally occurring nonliving matter. © 2016 The New York Times Company
Link ID: 22118 - Posted: 04.19.2016
By Stephen L. Macknik, Susana Martinez-Conde The renowned Slydini holds up an empty box for all to see. It is not really a box—just four connected cloth-covered cardboard walls, forming a floppy parallelogram with no bottom or top. Yet when the magician sets it down on a table, it looks like an ordinary container. Now he begins to roll large yellow sheets of tissue paper into balls. He claps his hands—SMACK!—as he crumples each new ball in a fist and then straightens his arm, wordlessly compelling the audience to gaze after his closed hand. He opens it, and ... the ball is still there. Nothing happened. Huh. Slydini's hand closes once more around the tissue, and it starts snaking around, slowly and gracefully, like a belly dancer's. The performance is mesmerizing. With his free hand, he grabs an imaginary pinch of pixie dust from the box to sprinkle on top of the other hand. This time he opens his hand to reveal that the tissue is gone! Four balls disappear in this fashion. Then, for the finale, Slydini tips the box forward and shows the impossible: all four balls have mysteriously reappeared inside. Slydini famously performed this act on The Dick Cavett Show in 1978. It was one of his iconic tricks. Despite the prestidigitator's incredible showmanship, though, the sleight only works because your brain cannot multitask. © 2016 Scientific American
Link ID: 22114 - Posted: 04.19.2016
By JEFFREY M. ZACKS and REBECCA TREIMAN OUR favorite Woody Allen joke is the one about taking a speed-reading course. “I read ‘War and Peace’ in 20 minutes,” he says. “It’s about Russia.” The promise of speed reading — to absorb text several times faster than normal, without any significant loss of comprehension — can indeed seem too good to be true. Nonetheless, it has long been an aspiration for many readers, as well as the entrepreneurs seeking to serve them. And as the production rate for new reading matter has increased, and people read on a growing array of devices, the lure of speed reading has only grown stronger. The first popular speed-reading course, introduced in 1959 by Evelyn Wood, was predicated on the idea that reading was slow because it was inefficient. The course focused on teaching people to make fewer back-and-forth eye movements across the page, taking in more information with each glance. Today, apps like SpeedRead With Spritz aim to minimize eye movement even further by having a digital device present you with a stream of single words one after the other at a rapid rate. Unfortunately, the scientific consensus suggests that such enterprises should be viewed with suspicion. In a recent article in Psychological Science in the Public Interest, one of us (Professor Treiman) and colleagues reviewed the empirical literature on reading and concluded that it’s extremely unlikely you can greatly improve your reading speed without missing out on a lot of meaning. Certainly, readers are capable of rapidly scanning a text to find a specific word or piece of information, or to pick up a general idea of what the text is about. But this is skimming, not reading. We can definitely skim, and it may be that speed-reading systems help people skim better. Some speed-reading systems, for example, instruct people to focus only on the beginnings of paragraphs and chapters. This is probably a good skimming strategy. 
Participants in a 2009 experiment read essays that had half the words covered up — either the beginning of the essay, the end of the essay, or the beginning or end of each individual paragraph. Reading half-paragraphs led to better performance on a test of memory for the passage’s meaning than did reading only the first or second half of the text, and it worked as well as skimming under time pressure. © 2016 The New York Times Company
By Matthew Hutson Bad news for believers in clairvoyance. Our brains appear to rewrite history so that the choices we make after an event seem to precede it. In other words, we add loops to our mental timeline that let us feel we can predict things that in reality have already happened. Adam Bear and Paul Bloom at Yale University conducted some simple tests on volunteers. In one experiment, subjects looked at white circles and silently guessed which one would turn red. Once one circle had changed colour, they reported whether or not they had predicted correctly. Over many trials, their reported accuracy was significantly better than the 20 per cent expected by chance, indicating that the volunteers either had psychic abilities or had unwittingly played a mental trick on themselves. The researchers’ study design helped explain what was really going on. They placed different delays between the white circles’ appearance and one of the circles turning red, ranging from 50 milliseconds to one second. Participants’ reported accuracy was highest – surpassing 30 per cent – when the delays were shortest. That’s what you would expect if the appearance of the red circle was actually influencing decisions still in progress. This suggests it’s unlikely that the subjects were merely lying about their predictive abilities to impress the researchers. The mechanism behind this behaviour is still unclear. It’s possible, the researchers suggest, that we perceive the order of events correctly – one circle changes colour before we have actually made our prediction – but then we subconsciously swap the sequence in our memories so the prediction seems to come first. Such a switcheroo could be motivated by a desire to feel in control of our lives. © Copyright Reed Business Information Ltd.
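The chance baseline in this experiment follows from simple probability: with five circles, an independent guess matches the red circle 20 per cent of the time. A toy simulation can show how "postdictive" leakage inflates reported accuracy above that baseline. This is only a sketch of the statistical logic, not the study's actual design; the trial count and the fraction of outcome-influenced trials (`p_influenced`) are invented parameters.

```python
import random

def simulate_reported_accuracy(n_trials=100_000, n_circles=5,
                               p_influenced=0.2, seed=1):
    """Fraction of trials reported as 'correct predictions' when, on a
    fraction p_influenced of trials, the outcome (which circle turned red)
    biases a still-unsettled choice toward that circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        red = rng.randrange(n_circles)          # circle that turns red
        if rng.random() < p_influenced:
            choice = red                        # outcome leaks into the "prediction"
        else:
            choice = rng.randrange(n_circles)   # genuine independent guess
        hits += (choice == red)
    return hits / n_trials

# Expected reported accuracy: p + (1 - p)/n = 0.2 + 0.8 * 0.2 = 0.36,
# well above the 20 per cent chance rate, with no clairvoyance required.
```

Shortening the delay between circle onset and the colour change would, on this account, raise `p_influenced` and hence reported accuracy, matching the pattern the researchers observed.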
Link ID: 22109 - Posted: 04.16.2016
By Simon Makin Everyone's brain is different. Until recently neuroscience has tended to gloss this over by averaging results from many brain scans in trying to elicit general truths about how the organ works. But in a major development within the field researchers have begun documenting how brain activity differs between individuals. Such differences had been largely thought of as transient and uninteresting but studies are starting to show that they are innate properties of people's brains, and that knowing them better might ultimately help treat neurological disorders. The latest study, published April 8 in Science, found that the brain activity of individuals who were just biding their time in a brain scanner contained enough information to predict how their brains would function during a range of ordinary activities. The researchers used these at-rest signatures to predict which regions would light up—which groups of brain cells would switch on—during gambling, reading and other tasks they were asked to perform in the scanner. The technique might be used one day to assess whether certain areas of the brains of people who are paralyzed or in a comatose state are still functional, the authors say. The study capitalizes on a relatively new method of brain imaging that looks at what is going on when a person essentially does nothing. The technique stems from the mid-1990s work of biomedical engineer Bharat Biswal, now at New Jersey Institute of Technology. Biswal noticed that scans he had taken while participants were resting in a functional magnetic resonance imaging (fMRI) scanner displayed orderly, low-frequency oscillations. He had been looking for ways to remove background noise from fMRI signals but quickly realized these oscillations were not noise. His work paved the way for a new approach known as resting-state fMRI. © 2016 Scientific American
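At its core, a resting-state analysis like Biswal's correlates slow spontaneous fluctuations between pairs of regions: regions whose low-frequency signals rise and fall together are deemed functionally connected. The following is a minimal illustration with synthetic signals standing in for real fMRI time series; the oscillation frequency, sampling interval, and noise levels are illustrative assumptions, not values from the study.

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(0)
t = [i * 2.0 for i in range(150)]  # one sample every 2 s (a typical fMRI TR)
slow = [math.sin(2 * math.pi * 0.05 * ti) for ti in t]  # ~0.05 Hz oscillation

# Two "regions" sharing the slow oscillation, plus independent noise,
# and a third region that is pure noise:
region_a = [s + rng.gauss(0, 0.5) for s in slow]
region_b = [s + rng.gauss(0, 0.5) for s in slow]
region_c = [rng.gauss(0, 0.5) for _ in slow]

# Shared slow fluctuations yield a high correlation between a and b,
# while the unrelated region c correlates with neither.
```

Real analyses add preprocessing steps (motion correction, band-pass filtering, nuisance regression) before computing such correlations across thousands of voxel pairs, but the connectivity measure itself is this simple.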
Zoe Cormier Researchers have published the first images showing the effects of LSD on the human brain, as part of a series of studies to examine how the drug causes its characteristic hallucinogenic effects. David Nutt, a neuropsychopharmacologist at Imperial College London who has previously examined the neural effects of mind-altering drugs such as the hallucinogen psilocybin, found in magic mushrooms, was one of the study's leaders. He tells Nature what the research revealed, and how he hopes LSD (lysergic acid diethylamide) might ultimately be useful in therapies. Why study the effects of LSD on the brain? For brain researchers, studying how psychedelic drugs such as LSD alter the ‘normal’ brain state is a way to study the biological phenomenon that is consciousness. We ultimately would also like to see LSD deployed as a therapeutic tool. The idea has old roots. In the 1950s and 60s thousands of people took LSD for alcoholism; in 2012, a retrospective analysis of some of these studies suggested that it helped cut down on drinking. Since the 1970s there have been lots of studies with LSD on animals, but not on the human brain. We need that data to validate the trial of this drug as a potential therapy for addiction or depression. Why hasn’t anyone done brain scans before? Before the 1960s, LSD was studied for its potential therapeutic uses, as were other hallucinogens. But the drug was heavily restricted in the UK, the United States and around the world after 1967 — in my view, due to unfounded hysteria over its potential dangers. The restrictions vary worldwide, but in general, countries have insisted that LSD has ‘no medical value’, making it tremendously difficult to work with. © 2016 Nature Publishing Group
By Sandhya Somashekhar African Americans are routinely under-treated for their pain compared with whites, according to research. A study released Monday sheds some disturbing light on why that might be the case. Researchers at the University of Virginia quizzed white medical students and residents to see how many believed inaccurate and at times "fantastical" differences about the two races -- for example, that blacks have less sensitive nerve endings than whites or that black people's blood coagulates more quickly. They found that fully half thought at least one of the false statements presented was possibly, probably or definitely true. Moreover, those who held false beliefs often rated black patients' pain as lower than that of white patients and made less appropriate recommendations about how they should be treated. The study, published in the Proceedings of the National Academy of Sciences, could help illuminate one of the most vexing problems in pain treatment today: That whites are more likely than blacks to be prescribed strong pain medications for equivalent ailments. A 2000 study out of Emory University found that at a hospital emergency department in Atlanta, 74 percent of white patients with bone fractures received painkillers compared with 50 percent of black patients. Similarly, a paper last year found that black children with appendicitis were less likely to receive pain medication than their white counterparts. And a 2007 study found that physicians were more likely to underestimate the pain of black patients compared with other patients.
Noah Smith How do human beings behave in response to risk? That is one of the most fundamental unanswered questions of our time. A general theory of decision-making amid uncertainty would be the kind of scientific advance that comes only a few times a century. Risk is central to financial and insurance markets. It affects the consumption, saving and business investment that moves the global economy. Understanding human behavior in the face of risk would let us reduce accidents, retire more comfortably, get cheaper health insurance and maybe even avoid recessions. A number of our smartest scientists have tried to develop a general theory of risk behavior. John von Neumann, the pioneering mathematician and physicist, took a crack at it back in 1944, when he developed the theory of expected utility along with Oskar Morgenstern. According to this simple theory, people value an uncertain prospect by multiplying the probability of each possible outcome by the utility they would get from it, and adding up the results. This beautiful idea underlies much of modern economic theory, but unfortunately it doesn't work well in most situations. Alternative theories have been developed for specific applications. The psychologist Daniel Kahneman won a Nobel Prize for the creation of prospect theory, which says -- among other things -- that people measure outcomes relative to a reference point. That theory does a great job of explaining the behavior of subjects in certain lab experiments, and can help account for the actions of certain inexperienced consumers. But it is very difficult to apply generally, because the reference points are hard to predict in advance and may shift in unpredictable ways.
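Both ideas fit in a few lines of code. The sketch below implements expected utility as described above, alongside a prospect-theory-style value function in which outcomes are measured against a reference point and losses loom larger than gains. The curvature and loss-aversion numbers are Tversky and Kahneman's oft-quoted 1992 median estimates, used here purely as illustrative defaults.

```python
def expected_utility(outcomes, probs, utility=lambda x: x):
    """Expected utility: sum over outcomes of probability * utility(outcome)."""
    return sum(p * utility(x) for x, p in zip(outcomes, probs))

def prospect_value(outcome, reference=0.0, alpha=0.88, loss_aversion=2.25):
    """Prospect-theory value: the outcome is coded as a gain or loss
    relative to a reference point, with diminishing sensitivity (alpha)
    and losses weighted more heavily than gains (loss_aversion)."""
    gain = outcome - reference
    if gain >= 0:
        return gain ** alpha
    return -loss_aversion * ((-gain) ** alpha)

# A 50/50 gamble between winning and losing $100:
eu = expected_utility([100, -100], [0.5, 0.5])  # 0.0 for a risk-neutral agent
pv = 0.5 * prospect_value(100) + 0.5 * prospect_value(-100)  # negative: gamble rejected
```

The contrast captures the article's point: expected utility with a linear utility function is indifferent to the gamble, while the reference-dependent value function rejects it, because the possible $100 loss outweighs the equally likely $100 gain.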
By PAM BELLUCK When people make risky decisions, like doubling down in blackjack or investing in volatile stocks, what happens in the brain? Scientists have long tried to understand what makes some people risk-averse and others risk-taking. Answers could have implications for how to treat, curb or prevent destructively risky behavior, like pathological gambling or drug addiction. Now, a study by Dr. Karl Deisseroth, a prominent Stanford neuroscientist and psychiatrist, and his colleagues gives some clues. The study, published Wednesday in the journal Nature, reports that a specific type of neuron, or nerve cell, in a certain brain region helps determine whether a risky choice is made. The study was conducted in rats, but experts said it built on research suggesting the findings could be similar in humans. If so, they said, it could inform approaches to addiction, which involves some of the same neurons and brain areas, as well as treatments for Parkinson’s disease, because one class of Parkinson’s medications turns some patients into problem gamblers. In a series of experiments led by Kelly Zalocusky, a doctoral student, researchers found that a risk-averse rat made decisions based on whether its previous choice involved a loss (in this case, of food). Rats whose previous decision netted them less food were prompted to behave conservatively next time by signals from certain receptors in a brain region called the nucleus accumbens, the scientists discovered. These receptors, which are proteins attached to neurons, are part of the dopamine system, a neurochemical important to emotion, movement and thinking. In risk-taking rats, however, those receptors sent a much fainter signal, so the rats kept making high-stakes choices even if they lost out. But by employing optogenetics, a technique that uses light to manipulate neurons, the scientists stimulated brain cells with those receptors, heightening the “loss” signal and turning risky rats into safer rats.
© 2016 The New York Times Company
By Daniel Barron It’s unnerving when someone with no criminal record commits a disturbingly violent crime. Perhaps he stabs his girlfriend 40 times and dumps her body in the desert. Perhaps he climbs to the top of a clock tower and guns down innocent passers-by. Or perhaps he climbs out of a car at a stoplight and nearly decapitates an unsuspecting police officer with 26 rounds from an assault rifle. Perhaps he even drowns his own children. Or shoots the President of the United States. The shock is palpable (NB: those are all actual cases). The very notion that someone—our neighbor, the guy ahead of us in the check-out line, we (!)—could do something so terrible rubs at our minds. We wonder, “What happened? What in this guy snapped?” After all, for the last 20 years, the accused went home to his family after work—why did he go rob that liquor store? What made him pull that trigger? The subject hit home for me this week when I was called to jury duty. As I made my way to the county courthouse, I wondered whether I would be asked to decide a capital murder case like the ones above. As a young neuroscientist, the prospect made me uneasy. At the trial, the accused’s lawyers would probably argue that, at the time of the crime, he had diminished capacity to make decisions, that somehow he wasn’t entirely free to choose whether or not to commit the crime. They might cite some form of neuroscientific evidence to argue that, at the time of the crime, his brain wasn’t functioning normally. And the jury and judge have to decide what to make of it. © 2016 Scientific American
Link ID: 22024 - Posted: 03.24.2016
Giant manta rays have been filmed checking out their reflections in a way that suggests they are self-aware. Only a small number of animals, mostly primates, have passed the mirror test, widely used as a tentative test of self-awareness. “This new discovery is incredibly important,” says Marc Bekoff, of the University of Colorado in Boulder. “It shows that we really need to expand the range of animals we study.” But not everyone is convinced that the new study proves conclusively that manta rays, which have the largest brains of any fish, can do this – or indeed, that the mirror test itself is an appropriate measure of self-awareness. Csilla Ari, of the University of South Florida in Tampa, filmed two giant manta rays in a tank, with and without a mirror inside. The fish changed their behaviour in a way that suggested that they recognised the reflections as themselves as opposed to another manta ray. They did not show signs of social interaction with the image, which is what you would expect if they perceived it to be another individual. Instead, the rays repeatedly moved their fins and circled in front of the mirror. This suggests they could see whether their reflection moved when they moved. The frequency of these movements was much higher when the mirror was in the tank than when it was not. © Copyright Reed Business Information Ltd.
By BARBARA K. LIPSKA AS the director of the human brain bank at the National Institute of Mental Health, I am surrounded by brains, some floating in jars of formalin and others icebound in freezers. As part of my work, I cut these brains into tiny pieces and study their molecular and genetic structure. My specialty is schizophrenia, a devastating disease that often makes it difficult for the patient to discern what is real and what is not. I examine the brains of people with schizophrenia whose suffering was so acute that they committed suicide. I had always done my work with great passion, but I don’t think I really understood what was at stake until my own brain stopped working. In the first days of 2015, I was sitting at my desk when something freakish happened. I extended my arm to turn on the computer, and to my astonishment realized that my right hand disappeared when I moved it to the right lower quadrant of the keyboard. I tried again, and the same thing happened: The hand disappeared completely as if it were cut off at the wrist. It felt like a magic trick — mesmerizing, and totally inexplicable. Stricken with fear, I kept trying to find my right hand, but it was gone. I had battled breast cancer in 2009 and melanoma in 2012, but I had never considered the possibility of a brain tumor. I knew immediately that this was the most logical explanation for my symptoms, and yet I quickly dismissed the thought. Instead I headed to a conference room. My colleagues and I had a meeting scheduled to review our new data on the molecular composition of schizophrenia patients’ frontal cortex, a brain region that shapes who we are — our thoughts, emotions, memories. But I couldn’t focus on the meeting because the other scientists’ faces kept vanishing. Thoughts about a brain tumor crept quietly into my consciousness again, then screamed for attention. © 2016 The New York Times Company
How is the brain able to use past experiences to guide decision-making? A few years ago, researchers supported by the National Institutes of Health discovered in rats that awake mental replay of past experiences is critical for learning and making informed choices. Now, the team has discovered key secrets of the underlying brain circuitry – including a unique system that encodes location during inactive periods. “Advances such as these in understanding cellular and circuit-level processes underlying such basic functions as executive function, social cognition, and memory fit into NIMH’s mission of discovering the roots of complex behaviors,” said NIMH acting director Bruce Cuthbert, Ph.D. While a rat is moving through a maze — or just mentally replaying the experience — an area in the brain’s memory hub, or hippocampus, specialized for locations, called CA1, communicates with a decision-making area in the executive hub or prefrontal cortex (PFC). A distinct subset of PFC neurons excited during mental replay of the experience are activated during movement, while another distinct subset, less engaged during movement in the maze – and therefore potentially distracting – are inhibited during replay. “Such strongly coordinated activity within this CA1-PFC circuit during awake replay is likely to optimize the brain’s ability to consolidate memories and use them to decide on future action,” explained Shantanu Jadhav, Ph.D., now an assistant professor at Brandeis University, Waltham, MA., the study’s co-first author. His contributions to this line of research were made possible, in part, by a Pathway to Independence award from the Office of Research Training and Career Development of the NIH’s National Institute of Mental Health (NIMH).
By Kj Dell’Antonia New research shows that the youngest students in a classroom are more likely to be given a diagnosis of attention deficit hyperactivity disorder than the oldest. The findings raise questions about how we regard those wiggly children who just can’t seem to sit still – and who also happen to be the youngest in their class. Researchers in Taiwan looked at data from 378,881 children ages 4 to 17 and found that students born in August, the cut-off month for school entry in that country, were more likely to be given diagnoses of A.D.H.D. than students born in September. The children born in September would have missed the previous year’s cut-off date for school entry, and thus had nearly a full extra year to mature before entering school. The findings were published Thursday in The Journal of Pediatrics. While few dispute that A.D.H.D. is a legitimate disability that can impede a child’s personal and school success and that treatment can be effective, “our findings emphasize the importance of considering the age of a child within a grade when diagnosing A.D.H.D. and prescribing medication for treating A.D.H.D.,” the authors concluded. Dr. Mu-Hong Chen, a member of the department of psychiatry at Taipei Veterans General Hospital in Taiwan and the lead author of the study, hopes that a better understanding of the data linking relative age at school entry to an A.D.H.D. diagnosis will encourage parents, teachers and clinicians to give the youngest children in a grade enough time and help to allow them to prove their ability. Other research has shown similar results. An earlier study in the United States, for example, found that roughly 8.4 percent of children born in the month before their state’s cutoff date for kindergarten eligibility are given A.D.H.D. diagnoses, compared to 5.1 percent of children born in the month immediately afterward. © 2016 The New York Times Company
By Daniel Engber Nearly 20 years ago, psychologists Roy Baumeister and Dianne Tice, a married couple at Case Western Reserve University, devised a foundational experiment on self-control. “Chocolate chip cookies were baked in the room in a small oven,” they wrote in a paper that has been cited more than 3,000 times. “As a result, the laboratory was filled with the delicious aroma of fresh chocolate and baking.” Here’s how that experiment worked. Baumeister and Tice stacked their fresh-baked cookies on a plate, beside a bowl of red and white radishes, and brought in a parade of student volunteers. They told some of the students to hang out for a while unattended, eating only from the bowl of radishes, while another group ate only cookies. Afterward, each volunteer tried to solve a puzzle, one that was designed to be impossible to complete. Baumeister and Tice timed the students in the puzzle task, to see how long it took them to give up. They found that the ones who’d eaten chocolate chip cookies kept working on the puzzle for 19 minutes, on average—about as long as people in a control condition who hadn’t snacked at all. The group of kids who noshed on radishes flubbed the puzzle test. They lasted just eight minutes before they quit in frustration. The authors called this effect “ego depletion” and said it revealed a fundamental fact about the human mind: We all have a limited supply of willpower, and it decreases with overuse. © 2016 The Slate Group LLC.
Link ID: 21965 - Posted: 03.08.2016
Angus Chen We know we should put the cigarettes away or make use of that gym membership, but in the moment, we just don't do it. There is a cluster of neurons in our brain critical for motivation, though. What if you could hack them to motivate yourself? These neurons are located in the middle of the brain, in a region called the ventral tegmental area. A paper published Thursday in the journal Neuron suggests that we can activate the region with a little bit of training. The researchers stuck 73 people into an fMRI, a scanner that can detect what part of the brain is most active, and focused on that area associated with motivation. When the researchers said "motivate yourself and make this part of your brain light up," people couldn't really do it. "They weren't that reliable when we said, 'Go! Get psyched. Turn on your VTA,' " says Dr. Alison Adcock, a psychiatrist at Duke and senior author on the paper. That changed when the participants were allowed to watch a neurofeedback meter that displayed activity in their ventral tegmental area. When activity ramps up, the participants see the meter heat up while they're in the fMRI tube. "Your whole mind is allowed to speak to a specific part of your brain in a way you never imagined before. Then you get feedback that helps you discover how to turn that part of the brain up or down," says John Gabrieli, a neuroscientist at the Massachusetts Institute of Technology who was not involved with the work. © 2016 npr
Link ID: 21954 - Posted: 03.05.2016