Chapter 18. Attention and Higher Cognition
By Judith Berck The 73-year-old widow came to see Dr. David Goodman, an assistant professor in the psychiatry and behavioral sciences department at Johns Hopkins School of Medicine, after her daughter had urged her to “see somebody” for her increasing forgetfulness. She was often losing her pocketbook and keys and had trouble following conversations, and 15 minutes later couldn’t remember much of what was said. But he did not think she had early Alzheimer’s disease. The woman’s daughter and granddaughter had both been given a diagnosis of A.D.H.D. a few years earlier, and Dr. Goodman, who is also the director of a private adult A.D.H.D. clinical and research center outside of Baltimore, asked about her school days as a teenager. “She told me: ‘I would doodle because I couldn’t pay attention to the teacher, and I wouldn’t know what was going on. The teacher would move me to the front of the class,’ ” Dr. Goodman said. After interviewing her extensively and noting patterns of impairment that spanned the decades, Dr. Goodman diagnosed A.D.H.D. He prescribed Vyvanse, a long-acting stimulant of the central nervous system. A few weeks later, the difference was remarkable. “She said: ‘I’m surprised, because I’m not misplacing my keys now, and I can remember things better. My mind isn’t wandering off, and I can stay in a conversation. I can do something until I finish it,’ ” Dr. Goodman said. Once seen as a disorder affecting mainly children and young adults, attention deficit hyperactivity disorder is increasingly understood to last throughout one’s lifetime. © 2015 The New York Times Company
HOW would you punish a murderer? Your answer will depend on how active a certain part of your brain happens to be. Joshua Buckholtz at Harvard University and his colleagues gave 66 volunteers scenarios involving a fictitious criminal called John. Some of his crimes were planned. Others were committed while he was experiencing psychosis or distress – for example, when his daughter’s life was under threat. The volunteers had to decide how responsible John was for each crime and the severity of his punishment on a scale of 0 to 9. Before hearing the stories, some of the volunteers received magnetic stimulation to a brain region involved in decision-making, called the dorsolateral prefrontal cortex (DLPFC), which dampened its activity. The others were given a sham treatment. Inhibiting the DLPFC didn’t affect how responsible the volunteers thought John was for the crimes, or the punishment he should receive when he was not culpable for his actions. But the stimulated group meted out a much less severe punishment than the control group when John had planned his crime (Neuron, doi.org/7rh). “By altering one process in the brain, we can alter our judgements,” says Christian Ruff at the Swiss Federal Institute of Technology in Zurich. In the justice system, the judgment stage to determine guilt is separated from sentencing, says James Tabery at the University of Utah. “It turns out that our brains work in a similar fashion.” © Copyright Reed Business Information Ltd.
By Martin Enserink AMSTERDAM—Is being a woman a disadvantage when you're applying for grant money in the Netherlands? Yes, say the authors of a paper published by the Proceedings of the National Academy of Sciences (PNAS) this week. The study showed that women have a lower chance than men of winning early career grants from the Netherlands Organization for Scientific Research (NWO), the country's main grant agency. NWO, which commissioned the study, accepted the results and announced several changes on Monday to rectify the problem. "NWO will devote more explicit attention to the gender awareness of reviewers in its methods and procedures," a statement said. But several Dutch scientists who have taken a close look at the data say they see no evidence of sexism. The PNAS paper, written by Romy van der Lee and Naomi Ellemers of Leiden University's Institute of Psychology, is an example of a classic statistical trap, says statistician Casper Albers of the University of Groningen, who tore the paper apart in a blog post yesterday (the post is in Dutch; Albers also published a shortened English translation). Albers says he plans to send the piece as a commentary to PNAS as well. Van der Lee and Ellemers analyzed 2823 applications for NWO's Veni grants for young researchers in the years 2010, 2011, and 2012. Overall, women had a success rate of 14.9%, compared with 17.7% for men, they wrote, and that difference was statistically significant. But Albers says the difference evaporates if you look more closely at sex ratios and success rates in NWO's nine scientific disciplines. Those data, which Van der Lee and Ellemers provided in a supplement to their paper, show that women simply apply more often in fields where the chance of success is low. © 2015 American Association for the Advancement of Science
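The "statistical trap" Albers describes is an aggregation effect, commonly known as Simpson's paradox: pooled success rates can differ by sex even when every discipline treats men and women identically, provided one group applies disproportionately in the low-success fields. A minimal sketch with hypothetical numbers (not NWO's actual data) makes the mechanism concrete:

```python
# Hypothetical illustration of the aggregation trap (Simpson's paradox):
# within each field the award rate is identical for women and men,
# yet the pooled rates differ because women apply more often in the
# field where the chance of success is low.

fields = {
    # field: {sex: (awards, applications)}
    "low_success":  {"women": (20, 200), "men": (10, 100)},   # 10% for both sexes
    "high_success": {"women": (30, 100), "men": (60, 200)},   # 30% for both sexes
}

def pooled_rate(sex):
    """Success rate after collapsing all fields together."""
    awards = sum(fields[f][sex][0] for f in fields)
    apps = sum(fields[f][sex][1] for f in fields)
    return awards / apps

for sex in ("women", "men"):
    print(sex, round(pooled_rate(sex), 3))
# women 0.167
# men 0.233
```

Per field, neither sex has an advantage; pooled, women appear to trail men by several percentage points, which is exactly the pattern Albers argues drives the 14.9% versus 17.7% gap in the paper.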
By Simon Makin Most people associate the term “subliminal conditioning” with dystopian sci-fi tales, but a recent study has used the technique to alter responses to pain. The findings suggest that information that does not register consciously teaches our brain more than scientists previously suspected. The results also offer a novel way to think about the placebo effect. Our perception of pain can depend on expectations, which explains placebo pain relief—and placebo's evil twin, the nocebo effect (if we think something will really hurt, it can hurt more than it should). Researchers have studied these expectation effects using conditioning techniques: they train people to associate specific stimuli, such as certain images, with different levels of pain. The subjects' perception of pain can then be reduced or increased by seeing the images during something painful. Most researchers assumed these pain-modifying effects required conscious expectations, but the new study, from a team at Harvard Medical School and the Karolinska Institute in Stockholm, led by Karin Jensen, shows that even subliminal input can modify pain—a more cognitively complex process than most that have previously been discovered to be susceptible to subliminal effects. The scientists conditioned 47 people to associate two faces with either high or low pain levels from heat applied to their forearm. Some participants saw the faces normally, whereas others were exposed subliminally—the images were flashed so briefly, the participants were not aware of seeing them, as verified by recognition tests. © 2015 Scientific American
William Sutcliffe Most epidemics are the result of a contagious disease. ADHD – Attention Deficit Hyperactivity Disorder – is not contagious, and it may not even be a genuine malady, but it has acquired the characteristics of an epidemic. New data has revealed that UK prescriptions for Ritalin and other similar ADHD medications have more than doubled in the last decade, from 359,100 in 2004 to 922,200 last year. In America, the disorder is now the second most frequent long-term diagnosis made in children, narrowly trailing asthma. It generates pharmaceutical sales worth $9bn (£5.7bn) per year. Yet clinical proof of ADHD as a genuine illness has never been found. Sami Timimi, consultant child psychiatrist at Lincolnshire NHS Trust and visiting professor of child psychiatry, is a vocal critic of the Ritalin-friendly orthodoxy within the NHS. While he is at pains to stress that he is “not saying those who have the diagnosis don’t have any problem”, he is adamant that “there is no robust evidence to demonstrate that what we call ADHD correlates with any known biological or neurological abnormality”. The hyperactivity, inattentiveness and lack of impulse control that are at the heart of an ADHD diagnosis are, according to Timimi, simply “a collection of behaviours”. Any psychiatrist who claims that a behaviour is being caused by ADHD is perpetrating a “philosophical tautology” – he is doing nothing more than telling you that hyperactivity is caused by an alternative name for hyperactivity. There is still no diagnostic test – no marker in the body – that can identify a person with ADHD. The results of more than 40 brain scan studies are described by Timimi as “consistently inconsistent”. No conclusive pattern in brain activity has been found to explain or identify ADHD. © independent.co.uk
Link ID: 21425 - Posted: 09.21.2015
By AMY HARMON Some neuroscientists believe it may be possible, within a century or so, for our minds to continue to function after death — in a computer or some other kind of simulation. Others say it’s theoretically impossible, or impossibly far off in the future. A lot of pieces have to fall into place before we can even begin to start thinking about testing the idea. But new high-tech efforts to understand the brain are also generating methods that make those pieces seem, if not exactly imminent, then at least a bit more plausible. Here’s a look at how close, and far, we are to some requirements for this version of “mind uploading.” The hope of mind uploading rests on the premise that much of the key information about who we are is stored in the unique pattern of connections between our neurons, the cells that carry electrical and chemical signals through living brains. You wouldn't know it from the outside, but there are more of those connections — individually called synapses, collectively known as the connectome — in a cubic centimeter of the human brain than there are stars in the Milky Way galaxy. The basic blueprint is dictated by our genes, but everything we do and experience alters it, creating a physical record of all the things that make us US — our habits, tastes, memories, and so on. It is exceedingly tricky to preserve that pattern of connections in a state where it is both safe from decay and verifiable as intact. But in recent months, two sets of scientists said they had devised separate ways to do that for the brains of smaller mammals. If either is scaled up to work for human brains — still a big if — then theoretically your brain could sit on a shelf or in a freezer for centuries while scientists work on the rest of these steps. © 2015 The New York Times Company
Neel V. Patel The concept of the insanity defense dates back to ancient Greece and the Roman Empire. The idea has always been the same: Protect individuals from being held accountable for behavior they couldn’t control. Yet there have been more than a few historical and recent instances of a judge or jury issuing a controversial “by reason of…” verdict. What was intended as a human rights effort has become a last-ditch way to save killers (though it didn’t work for James Holmes). The question that hangs in the air at these sorts of proceedings has always been the same: Is there a way to make determinations more scientific and less traditionally judicial? Adam Shniderman, a criminal justice researcher at Texas Christian University, has been studying the role of neuroscience in the court system for several years now. He explains that neurological data and explanations don’t easily translate into the world of lawyers and legal text. Inverse spoke with Shniderman to learn more about how neuroscience is used in today’s insanity defenses, and whether this is likely to change as the technology used to observe the brain gets better and better. Can you give me a quick overview of how the role of neuroscience in the courts has changed over the years? Especially in the last few decades with new advances in technology. Obviously, [neuroscientific evidence] has become more widely used as brain-scanning technology has gotten better. Some of the scanning technology we use now, like functional MRI that measures blood oxygenation as a proxy for neurological activity, is relatively new within the last 20 years or so. The nature of brain scanning has changed, but the knowledge that the brain influences someone’s actions is not new.
By Steve Mirsky It's nice to know that the great man we celebrate in this special issue had a warm sense of humor. For example, in 1943 Albert Einstein received a letter from a junior high school student who mentioned that her math class was challenging. He wrote back, “Do not worry about your difficulties in mathematics; I can assure you that mine are still greater.” Today we know that his sentiment could also have been directed at crows, which are better at math than those members of various congressional committees that deal with science who refuse to acknowledge that global temperatures keep getting higher. Studies show that crows can easily discriminate between a group of, say, three objects and another containing nine. They have more trouble telling apart groups that are almost the same size, but unlike the aforementioned committee members, at least they're trying. A study in the Proceedings of the National Academy of Sciences USA finds that the brain of a crow has nerve cells that specialize in determining numbers—a method quite similar to what goes on in our primate brain. Human and crow brains are substantially different in size and organization, but convergent evolution seems to have decided that this kind of neuron-controlled numeracy is a good system. (Crows are probably unaware of evolution, which is excusable. Some members of various congressional committees that deal with science pad their reactionary résumés by not accepting evolution, which is astonishing.) © 2015 Scientific American
Mo Costandi In an infamous set of experiments performed in the 1960s, psychologist Walter Mischel sat pre-school kids at a table, one by one, and placed a sweet treat – a small marshmallow, a biscuit, or a pretzel – in front of them. Each of the young participants was told that they would be left alone in the room, and that if they could resist the temptation to eat the sweet on the table in front of them, they would be rewarded with more sweets when the experimenter returned. The so-called Marshmallow Test was designed to test self-control and delayed gratification. Mischel and his colleagues tracked some of the children as they grew up, and then claimed that those who managed to hold out for longer in the original experiment performed better at school, and went on to become more successful in life, than those who couldn’t resist the temptation to eat the treat before the researcher returned to the room. The ability to exercise willpower and inhibit impulsive behaviours is considered to be a core feature of the brain’s executive functions, a set of neural processes - including attention, reasoning, and working memory - which regulate our behaviour and thoughts, and enable us to adapt them according to the changing demands of the task at hand. Executive function is a rather vague term, and we still don’t know much about its underlying brain mechanisms, or about how different components of this control system are related to one another. New research shows that self-control and memory share, and compete with each other for, the same brain mechanisms, such that exercising willpower saps these common resources and impairs our ability to encode memories. © 2015 Guardian News and Media Limited
Shankar Vedantam Girls often outperform boys in science and math at an early age but are less likely to choose tough courses in high school. An Israeli experiment demonstrates how biases of teachers affect students. RENEE MONTAGNE, HOST: At early ages, girls often outperform boys in math and science classes. Later, something changes. By the time they get into high school, girls are less likely than boys to take difficult math courses and less likely, again, to go into careers in science, technology, engineering or medicine. To learn more about this, David Greene spoke with NPR social science correspondent Shankar Vedantam. SHANKAR VEDANTAM, BYLINE: Well, the new study suggests, David, that some of these outcomes might be driven by the unconscious biases of elementary school teachers. What's remarkable about the new work is it doesn't just theorize about the gender gap, it actually has very hard evidence. Edith Sand at Tel Aviv University and her colleague, Victor Lavy, analyzed the math test scores of about 3,000 students in Tel Aviv. When the students were in sixth grade, the researchers got two sets of math test scores. One set of scores was given by the classroom teachers, who obviously knew the children whom they were grading. The second set of scores was from external teachers who did not know whether the children they were grading were boys or girls. So the external teachers were blind to the gender of the children. © 2015 NPR
By LISA FELDMAN BARRETT Boston — IS psychology in the midst of a research crisis? An initiative called the Reproducibility Project at the University of Virginia recently reran 100 psychology experiments and found that over 60 percent of them failed to replicate — that is, their findings did not hold up the second time around. The results, published last week in Science, have generated alarm (and in some cases, confirmed suspicions) that the field of psychology is in poor shape. But the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works. Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate. Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test. A number of years ago, for example, scientists conducted an experiment on fruit flies that appeared to identify the gene responsible for curly wings. The results looked solid in the tidy confines of the lab, but out in the messy reality of nature, where temperatures and humidity varied widely, the gene turned out not to reliably have this effect. In a simplistic sense, the experiment “failed to replicate.” But in a grander sense, as the evolutionary biologist Richard Lewontin has noted, “failures” like this helped teach biologists that a single gene produces different characteristics and behaviors, depending on the context. © 2015 The New York Times Company
By James Gallagher Health editor, BBC News website Close your eyes and imagine walking along a sandy beach and then gazing over the horizon as the Sun rises. How clear is the image that springs to mind? Most people can readily conjure images inside their head - known as their mind's eye. But this year scientists have described a condition, aphantasia, in which some people are unable to visualise mental images. Niel Kenmuir, from Lancaster, has always had a blind mind's eye. He knew he was different even in childhood. "My stepfather, when I couldn't sleep, told me to count sheep, and he explained what he meant, I tried to do it and I couldn't," he says. "I couldn't see any sheep jumping over fences, there was nothing to count." Our memories are often tied up in images: think back to a wedding or a first day at school. As a result, Niel admits, some aspects of his memory are "terrible", but he is very good at remembering facts. And, like others with aphantasia, he struggles to recognise faces. Yet he does not see aphantasia as a disability, but simply a different way of experiencing life. Take the aphantasia test It is impossible to see what someone else is picturing inside their head. Psychologists use the Vividness of Visual Imagery Questionnaire, which asks you to rate different mental images, to test the strength of the mind's eye. The University of Exeter has developed an abridged version that lets you see how your mind compares. © 2015 BBC.
By Elizabeth Kolbert C57BL/6J mice are black, with pink ears and long pink tails. Inbred for the purposes of experimentation, they exhibit a number of infelicitous traits, including a susceptibility to obesity, a taste for morphine, and a tendency to nibble off their cage mates’ hair. They’re also tipplers. Given access to ethanol, C57BL/6J mice routinely suck away until the point that, were they to get behind the wheel of a Stuart Little-size roadster, they’d get pulled over for D.U.I. Not long ago, a team of researchers at Temple University decided to take advantage of C57BL/6Js’ bad habits to test a hunch. They gathered eighty-six mice and placed them in Plexiglas cages, either singly or in groups of three. Then they spiked the water with ethanol and videotaped the results. Half of the test mice were four weeks old, which, in murine terms, qualifies them as adolescents. The other half were twelve-week-old adults. When the researchers watched the videos, they found that the youngsters had, on average, outdrunk their elders. More striking still was the pattern of consumption. Young male C57BL/6Js who were alone drank roughly the same amount as adult males. But adolescent males with cage mates went on a bender; they spent, on average, twice as much time drinking as solo boy mice and about thirty per cent more time than solo girls. The researchers published the results in the journal Developmental Science. In their paper, they noted that it was “not possible” to conduct a similar study on human adolescents, owing to the obvious ethical concerns. But, of course, similar experiments are performed all the time, under far less controlled circumstances. Just ask any college dean. Or ask a teen-ager.
By Simon Worrall, National Geographic How do we know we exist? What is the self? These are some of the questions science writer Anil Ananthaswamy asks in his thought-provoking new book, The Man Who Wasn’t There: Investigations Into the Strange New Science of the Self. The answers, he says, may lie in medical conditions like Cotard’s syndrome, Alzheimer’s or body integrity identity disorder, which causes some people to try to amputate their own limbs. Speaking from Berkeley, California, he explains why Antarctic explorer Ernest Shackleton fell victim to the doppelgänger effect; how neuroscience is rewriting our ideas about identity; and how a song by George Harrison of the Beatles offers a critique of the Western view of the self. You dedicate the book to “those of us who want to let go but wonder, who is letting go and of what?” Explain that statement. We always hear within popular culture that we have to “let go,” as a way of dealing with certain situations in our lives. And in some sense you have to wonder about that statement because the person or thing doing the letting go is also probably what has to be let go. In the book, I am trying to get behind the whole issue of what the self is that has to do the letting go; and what aspects of the self have to be let go of. You start your book with Alzheimer’s. Tell us about the origin of the condition and what it tells us about “the autobiographical self.” Alzheimer’s is a very severe condition, especially during the mid- to late stages, which starts robbing people of their ability to remember anything that’s happening to them. They also start forgetting the people they are close to. © 1996-2015 National Geographic Society
By NINA STROHMINGER and SHAUN NICHOLS WHEN does the deterioration of your brain rob you of your identity, and when does it not? Alzheimer’s, the neurodegenerative disease that erodes old memories and the ability to form new ones, has a reputation as a ruthless plunderer of selfhood. People with the disease may no longer seem like themselves. Neurodegenerative diseases that target the motor system, like amyotrophic lateral sclerosis, can lead to equally devastating consequences: difficulty moving, walking, speaking and eventually, swallowing and breathing. Yet they do not seem to threaten the fabric of selfhood in quite the same way. Memory, it seems, is central to identity. And indeed, many philosophers and psychologists have supposed as much. This idea is intuitive enough, for what captures our personal trajectory through life better than the vault of our recollections? But maybe this conventional wisdom is wrong. After all, the array of cognitive faculties affected by neurodegenerative diseases is vast: language, emotion, visual processing, personality, intelligence, moral behavior. Perhaps some of these play a role in securing a person’s identity. The challenge in trying to determine what parts of the mind contribute to personal identity is that each neurodegenerative disease can affect many cognitive systems, with the exact constellation of symptoms manifesting differently from one patient to the next. For instance, some Alzheimer’s patients experience only memory loss, whereas others also experience personality change or impaired visual recognition. The only way to tease apart which changes render someone unrecognizable is to compare all such symptoms, across multiple diseases. And that’s just what we did, in a study published this month in Psychological Science. © 2015 The New York Times Company
Helen Thomson Modafinil is the world’s first safe “smart drug”, researchers at Harvard and Oxford universities have said, after performing a comprehensive review of the drug. They concluded that the drug, which is prescribed for narcolepsy but is increasingly taken without prescription by healthy people, can improve decision-making, problem-solving and possibly even make people think more creatively. While acknowledging that there was limited information available on the effects of long-term use, the reviewers said that the drug appeared safe to take in the short term, with few side effects and no addictive qualities. Modafinil has become increasingly common in universities across Britain and the US. Prescribed in the UK as Provigil, it was licensed in 2002 for use as a treatment for narcolepsy - a brain disorder that can cause a person to suddenly fall asleep at inappropriate times or to experience chronic pervasive sleepiness and fatigue. Used without prescription, and bought through easy-to-find websites, modafinil is what is known as a smart drug - used primarily by people wanting to improve their focus before an exam. A poll of Nature journal readers suggested that one in five have used drugs to improve focus, with 44% stating modafinil as their drug of choice. But despite its increasing popularity, there has been little consensus on the extent of modafinil’s effects in healthy, non-sleep-disordered humans. A new review of 24 of the most recent modafinil studies suggests that the drug has many positive effects in healthy people, including enhancing attention, improving learning and memory and increasing something called “fluid intelligence” - essentially our capacity to solve problems and think creatively. © 2015 Guardian News and Media Limited
By PAUL GLIMCHER and MICHAEL A. LIVERMORE THE United States government recently announced an $18.7 billion settlement of claims against the oil giant BP in connection with the Deepwater Horizon oil rig explosion in April 2010, which dumped millions of barrels of oil into the Gulf of Mexico. Though some of the settlement funds are to compensate the region for economic harm, most will go to environmental restoration in affected states. Is BP getting off easy, or being unfairly penalized? This is not easy to answer. Assigning a monetary value to environmental harm is notoriously tricky. There is, after all, no market for intact ecosystems or endangered species. We don’t reveal how much we value these things in a consumer context, as goods or services for which we will or won’t pay a certain amount. Instead, we value them for their mere existence. And it is not obvious how to put a price tag on that. In an attempt to do so, economists and policy makers often rely on a technique called “contingent valuation,” which amounts to asking individuals survey questions about their willingness to pay to protect natural resources. The values generated by contingent valuation studies are frequently used to inform public policy and litigation. (If the government had gone to trial with BP, it most likely would have relied on such studies to argue for a large judgment against the company.) Contingent valuation has always aroused skepticism. Oil companies, unsurprisingly, have criticized the technique. But many economists have also been skeptical, worrying that hypothetical questions posed to ordinary citizens may not really capture their genuine sense of environmental value. Even the Obama administration seems to discount contingent valuation, choosing to exclude data from this technique in 2014 when issuing a new rule to reduce the number of fish killed by power plants. © 2015 The New York Times Company
By John Danaher Discoveries in neuroscience, and the science of behaviour more generally, pose a challenge to the existence of free will. But this all depends on what is meant by ‘free will’. The term means different things to different people. Philosophers focus on two conditions that seem to be necessary for free will: (i) the alternativism condition, according to which having free will requires the ability to do otherwise; and (ii) the sourcehood condition, according to which having free will requires that you (your ‘self’) be the source of your actions. A scientific and deterministic worldview is often said to threaten the first condition. Does it also threaten the second? That is what Christian List and Peter Menzies’s article “My brain made me do it: The exclusion argument against free will and what’s wrong with it” tries to figure out. As you might guess from the title, the authors think that the scientific worldview, in particular the advances in neuroscience, does not necessarily threaten the sourcehood condition. I discussed their main argument in the previous post. To briefly recap, they critiqued an argument from physicalism against free will. According to this argument, the mental states which constitute the self do not cause our behaviour because they are epiphenomenal: they supervene on the physical brain states that do all the causal work. List and Menzies disputed this by appealing to a difference-making account of causation. This allowed for the possibility of mental states causing behaviour (being the ‘difference makers’) even if they were supervenient upon underlying physical states.
April Dembosky Developers of a new video game for your brain say theirs is more than just another get-smarter-quick scheme. Akili, a Northern California startup, insists on taking the game through a full battery of clinical trials so it can get approval from the Food and Drug Administration — a process that will take lots of money and several years. So why would a game designer go to all that trouble when there's already a robust market of consumers ready to buy games that claim to make you smarter and improve your memory? Think about all the ads you've heard for brain games. Maybe you've even passed a store selling them. There's one at the mall in downtown San Francisco — just past the cream puff stand and across from Jamba Juice — staffed on my visit by a guy named Dominic Firpo. "I'm a brain coach here at Marbles: The Brain Store," he says. Brain coach? "Sounds better than sales person," Firpo explains. "We have to learn all 200 games in here and become great sales people so we can help enrich peoples' minds." He heads to the "Word and Memory" section of the store and points to one product that says it will improve your focus and reduce stress in just three minutes a day. "We sold out of it within the first month of when we got it," Firpo says. The market for these "brain fitness" games is worth about $1 billion and is expected to grow to $6 billion in the next five years. Game makers appeal to both the young and the older with the common claim that if you exercise your memory, you'll be able to think faster and be less forgetful. Maybe bump up your IQ a few points. "That's absurd," says psychology professor Randall Engle from the Georgia Institute of Technology. © 2015 NPR
By John Danaher Consider the following passage from Ian McEwan’s novel Atonement. It concerns one of the novel’s characters (Briony) as she philosophically reflects on the mystery of human action: She raised one hand and flexed its fingers and wondered, as she had sometimes done before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge. Is Briony’s quest forlorn? Will she ever find herself at the crest of the wave? The contemporary scientific understanding of human action seems to cast this into some doubt. A variety of studies in the neuroscience of action paint an increasingly mechanistic and subconscious picture of human behaviour. According to these studies, our behaviour is not the product of our intentions or desires or anything like that. It is the product of our neural networks and systems, a complex soup of electrochemical interactions, oftentimes operating beneath our conscious awareness. In other words, our brains control our actions; our selves (in the philosophically important sense of the word ‘self’) do not. This discovery — that our brains ‘make us do it’ and that ‘we’ don’t — is thought to have a number of significant social implications, particularly for our practices of blame and punishment.