Chapter 14. Attention and Consciousness
Anne Trafton | MIT News Office

Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in place while scanning the crowd, paying special attention to finding a match. A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science.

Scientists know much less about this type of attention, known as object-based attention, than spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper. “The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.”

In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.
Link ID: 19478 - Posted: 04.12.2014
In an op-ed in the Sunday edition of this newspaper, Barbara Ehrenreich, card-carrying liberal rationalist, writes about her own mystical experiences (the subject of her new book), and argues that the numinous deserves more cutting-edge scientific study: I appreciate the spirit (if you will) of this argument, but I am very doubtful as to its application. The trouble is that in its current state, cognitive science has a great deal of difficulty explaining “what happens” when “those wires connect” for non-numinous experience, which is why mysterian views of consciousness remain so potent even among thinkers whose fundamental commitments are atheistic and materialistic. (I’m going to link to the internet’s sharpest far-left scold for a good recent polemic on this front.) That is to say, even in contexts where it’s very easy to identify the physical correlative to a given mental state, and to get the kind of basic repeatability that the scientific method requires — show someone an apple, ask them to describe it; tell them to bite into it, ask them to describe the taste; etc. — there is no kind of scientific or philosophical agreement on what is actually happening to produce the conscious experience of the color “red,” the conscious experience of the crisp McIntosh taste, etc. So if we can’t say how this “normal” conscious experience works, even when we can easily identify the physical stimuli that produce it, it seems exponentially harder to scientifically investigate the invisible, maybe-they-exist and maybe-they-don’t stimuli — be they divine, alien, or panpsychic — that Ehrenreich hypothesizes might produce more exotic forms of conscious experience. © 2014 The New York Times Company
If you know only one thing about violins, it is probably this: A 300-year-old Stradivarius supposedly possesses mysterious tonal qualities unmatched by modern instruments. However, even elite violinists cannot tell a Stradivarius from a top-quality modern violin, a new double-blind study suggests. Like the sound of coughing during the delicate second movement of Beethoven's violin concerto, the finding seems sure to annoy some people, especially dealers who broker the million-dollar sales of rare old Italian fiddles. But it may come as a relief to the many violinists who cannot afford such prices. "There is nothing magical [about old Italian violins], there is nothing that is impossible to reproduce," says Olivier Charlier, a soloist who participated in the study and who plays a fiddle made by Carlo Bergonzi (1683 to 1747). However, Yi-Jia Susanne Hou, a soloist who participated in the study and who until recently played a violin by Bartolomeo Giuseppe Antonio Guarneri "del Gesù" (1698 to 1744), questions whether the test was fair. "Whereas I believe that [the researchers] assembled some of the finest contemporary instruments, I am quite certain that they didn't have some of the finest old instruments that exist," she says. The study marks the latest round in debate over the "secret of Stradivarius." Some violinists, violinmakers, and scientists have thought that Antonio Stradivari (1644 to 1737) and his contemporaries in Cremona, Italy, possessed some secret—perhaps in the varnish or the wood they used—that enabled them to make instruments of unparalleled quality. Yet, for decades researchers have failed to identify a single physical characteristic that distinguishes the old Italians from other top-notch violins. The varnish is varnish; the wood (spruce and maple) isn't unusual. Moreover, for decades tests have shown that listeners cannot tell an old Italian from a modern violin. © 2014 American Association for the Advancement of Science
By ANA GANTMAN and JAY VAN BAVEL

Take a close look at your breakfast. Is that Jesus staring out at you from your toast? Such apparitions can be as lucrative as they are seemingly miraculous. In 2004, a Florida woman named Diane Duyser sold a decade-old grilled cheese sandwich that bore a striking resemblance to the Virgin Mary. She got $28,000 for it on eBay. The psychological phenomenon of seeing something significant in an ambiguous stimulus is called pareidolia. Virgin Mary grilled cheese sandwiches and other pareidolia remind us that almost any object is open to multiple interpretations. Less understood, however, is what drives some interpretations over others. In a forthcoming paper in the journal Cognition, we hope to shed some light on that question.

In a series of experiments, we examined whether awareness of perceptually ambiguous stimuli was enhanced by the presence of moral content. We quickly flashed strings of letters on a computer screen and asked participants to indicate whether they believed each string formed a word or not. To ensure that the letter strings were perceptually ambiguous, we flashed them between approximately 40 and 70 milliseconds. (When they were presented for too long, people easily saw all the letter strings and demonstrated close to 100 percent accuracy. When they were presented too quickly, people were unable to see the words and performed “at chance,” around 50 percent accuracy.) Some of the strings of letters we flashed were words, others were not. Importantly, some of the words we flashed had moral content (virtue, steal, God) and others did not (virtual, steel, pet). Over the course of three experiments, we found that participants correctly identified strings of letters as words more often when they formed moral words (69 percent accuracy) than when they formed nonmoral words (65 percent accuracy). This suggested that moral content gave a “boost” to perceptually ambiguous stimuli — a shortcut to conscious awareness.
We call this phenomenon the “moral pop-out effect.” © 2014 The New York Times Company
Link ID: 19453 - Posted: 04.07.2014
By Karen Kaplan

There are lies, damn lies – and the lies that we tell for the sake of others when we are under the influence of oxytocin. Researchers found that after a squirt of the so-called love hormone, volunteers lied more readily about their results in a game in order to benefit their team. Compared with control subjects who were given a placebo, those on oxytocin told more extreme lies and told them with less hesitation, according to a study published Monday in Proceedings of the National Academy of Sciences. Oxytocin is a brain hormone that is probably best known for its role in helping mothers bond with their newborns. In recent years, scientists have been examining its role in monogamy and in strengthening trust and empathy in social groups. Sometimes, doing what’s good for the group requires lying. (Think of parents who fake their addresses to get their kids into a better school.) A pair of researchers from Ben-Gurion University of the Negev in Israel and the University of Amsterdam figured that oxytocin would play a role in this type of behavior, so they set up a series of experiments to test their hypothesis. The researchers designed a simple computer game that asked players to predict whether a virtual coin toss would wind up heads or tails. After seeing the outcome on a computer screen, players were asked to report whether their prediction was correct or not. In some cases, making the right prediction would earn a player’s team a small payment (the equivalent of about 40 cents). In other cases, a correct prediction would cost the team the same amount, and sometimes there was no payoff or cost. Los Angeles Times Copyright 2014
By BRAYDEN KING and JERRY KIM

This season Major League Baseball is allowing its officiating crews to use instant replay to review certain critical calls, including home runs, force plays and foul balls. But the calling of the strike zone — determining whether a pitch that is not swung at is a ball or a strike — will still be left completely to the discretion of the officials. This might seem an odd exception, since calling the strike zone may be the type of officiating decision most subject to human foible.

In research soon to be published in the journal Management Science, we studied umpires’ strike-zone calls using pitch-location data compiled by the high-speed cameras introduced by Major League Baseball several years ago in an effort to measure, monitor and reward umpires’ accuracy. After analyzing more than 700,000 pitches thrown during the 2008 and 2009 seasons, we found that umpires frequently made errors behind the plate — about 14 percent of non-swinging pitches were called erroneously. Some of those errors occurred in fairly predictable ways. We found, for example, that umpires tended to favor the home team by expanding the strike zone, calling a strike when the pitch was actually a ball 13.3 percent of the time for home team pitchers versus 12.7 percent of the time for visitors. Other errors were more surprising. Contrary to the expectation (or hope) that umpires would be more accurate in important situations, we found that they were, in fact, more likely to make mistakes when the game was on the line. For example, our analyses suggest that umpires were 13 percent more likely to miss an actual strike in the bottom of the ninth inning of a tie game than in the top of the first inning, on the first pitch. © 2014 The New York Times Company
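The core computation in a study like this one (comparing camera-tracked pitch locations against the umpire's call, then slicing the mismatches by game situation) can be sketched on toy data. Everything below is hypothetical: the record layout, the team labels, and the six example pitches are invented for illustration and are not the authors' data. The sketch just shows how an overall error rate and a per-team false-strike rate fall out of the same records.

```python
# Toy pitch records: (camera-tracked truth, umpire's call, pitching team).
# All values are hypothetical examples, not data from the study.
pitches = [
    ("strike", "strike", "home"),
    ("ball",   "strike", "home"),   # zone expanded for the home team
    ("strike", "strike", "away"),
    ("ball",   "ball",   "away"),
    ("strike", "ball",   "away"),   # a missed strike
    ("ball",   "ball",   "home"),
]

def error_rate(rows):
    # fraction of calls that disagree with the camera's ground truth
    errors = sum(truth != call for truth, call, _ in rows)
    return errors / len(rows)

def false_strike_rate(rows, team):
    # among true balls thrown by one team's pitchers, the share called strikes
    balls = [(t, c) for t, c, side in rows if side == team and t == "ball"]
    return sum(c == "strike" for _, c in balls) / len(balls)

print(f"overall error rate: {error_rate(pitches):.2f}")
print(f"home false-strike rate: {false_strike_rate(pitches, 'home'):.2f}")
print(f"away false-strike rate: {false_strike_rate(pitches, 'away'):.2f}")
```

On real data the same two functions, applied to 700,000-plus rows, would yield the roughly 14 percent overall error rate and the 13.3-versus-12.7 percent home/away asymmetry the authors report.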
Link ID: 19426 - Posted: 03.31.2014
Andreas von Bubnoff

People who are unable to recognize faces can still learn to distinguish between other types of very similar objects, researchers report. The finding provides fresh support for the idea that the brain mechanisms that process face images are specialized for that task. It also offers evidence against an 'expertise' hypothesis, in which the same mechanisms are responsible for recognition of faces and other highly similar objects we have learned to tell apart — the way bird watchers can recognize birds after years of training. Constantin Rezlescu, a psychologist at Harvard University in Cambridge, Massachusetts, and his colleagues worked with two volunteers nicknamed Florence and Herschel, who had acquired prosopagnosia, or face blindness, following brain damage. The condition renders people unable to recognize and distinguish between faces — in some cases, even those of their own family members. The team trained Florence and Herschel to recognize greebles, computer-generated objects that differ from one another in similar ways to faces. The two volunteers spent eight training sessions of up to one hour learning to recognize 20 different greebles. The objects belonged to five different ‘families’ that were easier to distinguish between than were the individual greebles, and initially the participants took longer to discriminate individual greebles within the same family than they did the different families. But by the end of the training, they could tell individual greebles apart just as quickly — a sign that they had become experts in recognizing them, just as dog trainers can recognize individual dogs as easily as different breeds of dog. The study appears in Proceedings of the National Academy of Sciences. © 2014 Nature Publishing Group
Link ID: 19408 - Posted: 03.25.2014
By BENOIT DENIZET-LEWIS

The traffic was bad, even by the warped standards of a Southern California commute. We were headed south from Los Angeles to San Diego on an overcast morning last spring, but we hadn’t moved in 10 minutes. I was sandwiched in the back seat of the car between John Sylla and Denise Penn, two board members of the Los Angeles-based American Institute of Bisexuality (A.I.B.), a deep-pocketed group partly responsible for a surge of academic and scientific research across the country about bisexuality. We were on our way to an A.I.B. board meeting, where members would decide which studies to fund and also brainstorm ways to increase bisexual visibility “in a world that still isn’t convinced that bisexuality — particularly male bisexuality — exists,” as Allen Rosenthal, a sex researcher at Northwestern University, told me. When someone suggested that we try another route, Sylla, A.I.B.’s friendly and unassuming 55-year-old president, opened the maps app on his iPhone. I met Sylla the previous day at A.I.B. headquarters, a modest two-room office on the first floor of a quiet courtyard in West Hollywood that’s also home to film-production companies and a therapist’s office. Tall and pale, with an easy smile, Sylla offered me books from A.I.B.’s bisexual-themed bookshelf and marveled at the unlikelihood of his bisexual activism. “For the longest time, I didn’t even realize I was bi,” Sylla said. “When I did, I assumed I’d probably just live a supposedly straight life in the suburbs somewhere.” In the back seat, Sylla lifted his eyes from his phone and suggested an alternate course. Then he shrugged his shoulders. “We could go either way, really,” he told us. He smiled at me. “Get it? Either way?” © 2014 The New York Times Company
By Nathan Collins

A car detects when a driver starts to nod off and gently pulls over. A tablet or laptop senses its user is confused and offers assistance. Such interventions seem futuristic, but in fact they may not require any technological breakthroughs: a recent study suggests that with the aid of a standard camera, a simple computer program can learn to read people's eye movements to determine what they are doing and perhaps how they are feeling. Psychologists at the University of South Carolina were curious if a computer could figure out what a person was up to based on their eye movements. They first had 12 people engage in four tasks, including reading lines of text and searching photographs for a specific printed letter. Each person repeated the tasks 35 to 50 times while a camera recorded how their eyes moved. Using a subset of those data, the team trained a simple computer program, called a naive Bayes classifier, to identify which of the four tasks each person was doing. In the remaining trials, the classifier correctly determined which task the person was working on 75 percent of the time, well above the 25 percent expected by chance. Because the computer program is based on a flexible algorithm that is simple but powerful, this set-up could most likely be used to identify emotions or mental states such as confusion or fatigue, the researchers suggest in the paper, which appeared in September 2013 in PLOS ONE. With only a brief training period, a car's onboard computer—existing models are more than powerful enough—could learn how a driver's gaze changed as he or she became more exhausted. Further work, the authors suggest, could lead to devices capable of identifying and aiding people in need of assistance in a variety of situations. © 2014 Scientific American
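The "simple but powerful" naive Bayes classifier mentioned above fits in a few dozen lines. The sketch below trains a Gaussian naive Bayes model on synthetic eye-movement features (mean fixation duration and saccade amplitude, hypothetical stand-ins for whatever features the study actually used) to separate a "reading" task from a "search" task. The feature distributions and all numbers are invented for illustration, not taken from the paper; the point is only the mechanics of the method: estimate per-class, per-feature means and variances, then classify by summed log-likelihood.

```python
import math
import random

random.seed(0)

def simulate(task, n):
    # Hypothetical per-task feature distributions:
    # (fixation duration ms mean/sd), (saccade amplitude deg mean/sd)
    params = {"reading": ((220, 30), (2.0, 0.5)),
              "search":  ((280, 40), (5.0, 1.0))}
    (fm, fs), (sm, ss) = params[task]
    return [(random.gauss(fm, fs), random.gauss(sm, ss)) for _ in range(n)]

def fit(data):
    # data: {label: [(feat1, feat2), ...]} -> per-class (mean, variance) pairs
    model = {}
    for label, rows in data.items():
        stats = []
        for i in range(len(rows[0])):
            col = [r[i] for r in rows]
            mu = sum(col) / len(col)
            var = sum((x - mu) ** 2 for x in col) / len(col)
            stats.append((mu, var))
        model[label] = stats
    return model

def log_gauss(x, mu, var):
    # log of the Gaussian density at x
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def predict(model, row):
    # "naive" step: features treated as independent, so log-likelihoods add
    return max(model, key=lambda c: sum(log_gauss(x, mu, var)
               for x, (mu, var) in zip(row, model[c])))

train = {t: simulate(t, 200) for t in ("reading", "search")}
model = fit(train)
test = [(t, row) for t in ("reading", "search") for row in simulate(t, 100)]
acc = sum(predict(model, r) == t for t, r in test) / len(test)
print(f"accuracy: {acc:.2f}")  # well above the 0.5 chance level here
```

With well-separated synthetic distributions the toy classifier is nearly perfect; the study's 75 percent on four real tasks reflects the much messier overlap of real gaze features.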
By Neuroskeptic

A neuroscience paper published before Christmas drew my eye with the expansive title: “How Thoughts Give Rise to Action.” Subtitled “Conscious Motor Intention Increases the Excitability of Target-Specific Motor Circuits”, the article’s abstract was no less bold, concluding that: These results indicate that conscious intentions govern motor function… until today, it was unclear whether conscious motor intention exists prior to movement, or whether the brain constructs such an intention after movement initiation. The authors, Zschorlich and Köhling of the University of Rostock, Germany, are weighing into a long-standing debate in philosophy, psychology, and neuroscience, concerning the role of consciousness in controlling our actions. To simplify, one school of thought holds that (at least some of the time), our intentions or plans control our actions. Many people would say that this is what common sense teaches us as well. But there’s an alternative view, in which our consciously-experienced intentions are not causes of our actions but are actually products of them, being generated after the action has already begun. This view is certainly counterintuitive, and many find it disturbing as it seems to undermine ‘free will’. That’s the background. Zschorlich and Köhling say that they’ve demonstrated that conscious intentions do exist, prior to motor actions, and that these intentions are accompanied by particular changes in brain activity. They claim to have done this using transcranial magnetic stimulation (TMS), a way of causing a localized modulation of brain electrical activity.
Link ID: 19370 - Posted: 03.17.2014
Linda Carroll, TODAY contributor

The stimulants used to treat ADHD might be making kids fat, a new study suggests. A study of more than 160,000 youngsters found that kids with attention deficit hyperactivity disorder who received stimulants were at increased risk of becoming obese as they hit their teens. In contrast, kids with ADHD who took non-stimulant medications or got no therapy were very comparable, in terms of weight gain, to kids who didn’t have the disorder. “Our data suggest that stimulant use during childhood might have lifelong effects,” said Dr. Brian Schwartz, the study’s lead author and a professor of environmental health sciences, epidemiology, and medicine at the Johns Hopkins Bloomberg School of Public Health and senior investigator at the Geisinger Center for Health Research. “They might reset all sorts of physical properties and appetite parameters.” The new research may have uncovered a growing public health issue, Schwartz said. “Our data would seem to offer a lot of cause for concern with respect to prescribing stimulants,” he explained. Schwartz and his colleagues started the study because they were perplexed by the apparent paradox of hyperactive kids being prone to obesity. They scrutinized 12 years’ worth of medical information from 163,820 Pennsylvania children, 13,427 of whom received an ADHD diagnosis.
Think women can’t do math? You’re wrong—but new research shows you might not change your mind, even if you get evidence to the contrary. A study of how both men and women perceive each other's mathematical ability finds that an unconscious bias against women could be skewing hiring decisions, widening the gender gap in mathematical professions like engineering. The inspiration for the experiment was a 2008 study published in Science that analyzed the results of a standardized test of math and verbal abilities taken by 15-year-olds around the world. The results challenged the pernicious stereotype that females are biologically inferior at mathematics. Although the female test-takers lagged behind males on the math portion of the test, the size of the gap closely tracked the degree of gender inequality in their countries, shrinking to nearly zero in emancipated countries like Sweden and Norway. That suggests that cultural biases rather than biology may be the better explanation for the math gender gap. To tease out the mechanism of discrimination, two of the authors of the 2008 study, Paola Sapienza and Luigi Zingales, economic researchers at Northwestern University’s Kellogg School of Management in Evanston, Illinois, and the University of Chicago Booth School of Business in Illinois, respectively, teamed up with Ernesto Reuben, an experimental psychologist at Columbia Business School in New York City, to design an experiment to test people's gender bias when it comes to judging mathematical ability. Study participants of both genders were divided into two groups: employers and job candidates. The job was simple: As accurately and quickly as possible, add up sets of two-digit numbers in a 4-minute math sprint. (The researchers did not tell the subjects, but it is already known that men and women perform equally well on this task.) © 2014 American Association for the Advancement of Science.
Think you’ll always pick chocolate over a bag of chips? Don’t be so sure. Researchers have found that if they can get people to pay more attention to a particular type of junk food, they will begin to prefer it—even weeks or months after the experiment. The finding suggests a new way to manipulate our decisions and perhaps even encourage us to pick healthy foods. “This paper is provocative and very well done,” says Antonio Rangel, a neuroeconomist at the California Institute of Technology in Pasadena, who was not involved in the new study. “It is exciting because it’s a proof of concept that a relatively simple intervention can have this long-lasting effect.” Economists who study decision-making had previously found that, when deciding between multiple items, people tend to let their gaze linger on the things that they end up choosing. This observation has motivated companies to pursue flashy packaging to attract consumers’ eyes. Tom Schonberg, a neuroscientist at the University of Texas, Austin, wondered whether people’s preferences could be changed before being faced with such a decision by training their brains to pay more attention to certain items. His first task was figuring out what kind of junk food people preferred. He and his colleagues recruited more than 200 university students and set up an auction-style program that asked them how much they were willing to pay for 60 different kinds of snacks, from M&M’s to Fritos. Then, the participants went through a 30- to 50-minute computer training program that showed photos of foods that the participants had already rated. When some treats appeared on the screen, a short tone would play and signal the subject to press a button as fast as possible. When other treats popped up, the computer remained silent and the subject refrained from pressing the button. © 2014 American Association for the Advancement of Science
By Jason G. Goldman

Most people don't spend much time pondering the diameter of their pupils. The fact is that we don't have much control over our pupils, the openings in the center of the irises that allow light into the eyes. Short of chemical interventions—such as the eyedrops ophthalmologists use to widen their patients' pupils for eye exams—the only way to dilate or shrink the pupils is by changing the amount of available light. Switch off the lamp, and your pupils will widen to take in more light. Step out into the sun, and your pupils will narrow. Mechanical though they may be, the workings of pupils are allowing researchers to explore the parallels between imagination and perception. In a recent series of experiments, University of Oslo cognitive neuroscientists Bruno Laeng and Unni Sulutvedt began by displaying triangles of varying brightness on a computer screen while monitoring the pupils of the study volunteers. The subjects' pupils widened for dark shapes and narrowed for bright ones, as expected. Next, participants were instructed to simply imagine the same triangles. Remarkably, their pupils constricted or dilated as if they had been staring at the actual shapes. Laeng and Sulutvedt saw the same pattern when they asked subjects to imagine more complex scenes, such as a sunny sky or a dark room. Imagination is usually thought of as “a private and subjective experience, which is not accompanied by strongly felt or visible physiological changes,” Laeng says. But the new findings, published in Psychological Science, challenge that idea. The study suggests that imagination and perception may rely on a similar set of neural processes: when you picture a dimly lit restaurant, your brain and body respond, at least to some degree, as if you were in that restaurant. © 2014 Scientific American
by Tom Siegfried

Max Planck, who shook the world with his discovery of quantum physics, also offered a warning. “One must be careful,” he said, “when using the word, real.” It was good advice. As physicists explored the quantum domain, they found that usual ideas about reality did not apply. Reality in the realm of atoms was nothing like the world of rocks and baseballs and planets, where Newton’s laws of motion ruled with rigor. Among atoms, the rules were more like Olympic ice skating judging, with unpredictable scores. Gradually physicists, engineers and even screenwriters became familiar with quantum weirdness and used it in lasers, computers and movie plots. Quantum reality might be crazy, but it’s our reality, and most scientists, anyway, have become more or less used to it. Nevertheless, Planck’s warning still applies. Perhaps the quantum picture of reality is another illusion, just like Newton’s was. Human insight into nature may not yet have penetrated reality’s ultimate veil. In other words, maybe reality always dresses itself up in Newtonian or Einsteinian or quantum clothing, and science hasn’t yet seen what reality looks like naked. And that might explain why nature has been able to protect so many of its mysteries from science’s prying eyes — mysteries like the identity of dark matter, the math describing quantum gravity, the mechanism underlying consciousness. And whether humans have free will. © Society for Science & the Public 2000 - 2013.
Link ID: 19319 - Posted: 03.04.2014
By STEPHEN P. HINSHAW and RICHARD M. SCHEFFLER

BERKELEY, Calif. — The writing is on the chalkboard. Over the next few years, America can count on a major expansion of early childhood education. We embrace this trend, but as health policy researchers, we want to raise a major caveat: Unless we’re careful, today’s preschool bandwagon could lead straight to an epidemic of 4- and 5-year-olds wrongfully being told that they have attention deficit hyperactivity disorder. Introducing millions of 3- to 5-year-olds to classrooms and preacademic demands means that many more distracted kids will undoubtedly catch the attention of their teachers. Sure, many children this age are already in preschool, but making the movement universal and embedding transitional-K programs in public schools is bound to increase the pressure. We’re all for high standards, but danger lurks. The American Academy of Pediatrics now endorses the idea that the diagnosis of A.D.H.D. can and should begin at age 4, before problems accumulate. In fact, Adderall and other stimulants are approved for treatment of attentional issues in children as young as 3. Early intervention for children with A.D.H.D. could provide great relief. Children who go untreated have major difficulties in school and with their peers, and they have higher-than-normal rates of accidents and physical injuries. The problem is that millions of American children have been labeled with A.D.H.D. when they don’t truly have it. Our research has revealed a worrisome parallel between our nation’s increasing push for academic achievement and increased school accountability — and skyrocketing A.D.H.D. diagnoses, particularly for the nation’s poorest children. © 2014 The New York Times Company
by Helen Thomson

People in a vegetative state showed signs of awareness after electric brain stimulation – and minimally conscious people were able to communicate again

Talk about an awakening. People who have been in a minimally conscious state for weeks or years have been temporarily roused using mild electrical stimulation. Soon after it was applied to their brains, 15 people with severe brain damage showed signs of consciousness, including moving their hands or following instructions using their eyes. Two people were even able to answer questions for 2 hours before drifting back into their previous uncommunicative state. "I don't want to give people false hope – these people weren't getting up and walking around – but it shows there is potential for the brain to recover functionality, even several years after damage," says Steven Laureys at the University of Liège in Belgium, who led the research. People with severe brain trauma often fall into a coma. If they "awaken", by showing signs of arousal but not awareness, they are said to be in a vegetative state. This can improve to a state of minimal consciousness, where they might show fluctuating signs of awareness, which come and go, but have no ability to communicate. External stimulation of the brain has been shown to increase arousal, awareness and aspects of cognition in healthy people. So Laureys and his colleagues wondered if it would do the same in people with severe brain damage. They used transcranial direct current stimulation (tDCS), which doesn't directly excite the brain, but uses low-level electrical stimulation to make neurons more or less likely to fire. © Copyright Reed Business Information Ltd.
Link ID: 19301 - Posted: 02.27.2014
by Tom Siegfried

If freedom is just another word for nothing left to lose, then “free will” is just another phrase for ability to choose. Bad, wasn’t it? But if free will is an illusion, as many scientists and philosophers have argued, then you shouldn’t blame me. On the other hand, I do blame myself. Because like most bloggers, and possibly even the several dozen humans who don’t blog, I think I decided for myself what to write. Besides, as many investigators of this issue have pointed out, it’s not so obvious that free will is illusory now that quantum mechanics has inserted some randomness into nature. Sadly, though, that reasoning doesn’t get you very far. There’s randomness in the quantum world, all right, just like the unpredictable sequence of winning numbers on a roulette wheel. But in the long run all the numbers turn up about equally often. Free will isn’t worth much if you can’t use it to beat a casino. And as MIT physicist Scott Aaronson points out, quantum math is similar: It gives the odds about what various possible things will happen, and those odds are always predicted precisely. The probability distribution of results is always just what the quantum math says it will be. Aaronson doesn’t see any free will there. Still, the free will question has elicited some sophisticated musing from quantum physicists who like to contemplate the interface of mentality and physical reality. It seems reasonable enough to reexamine such an old question in the light of the latest understanding of the universe. It may be that modern physics can offer a perspective giving hope for those who like to make up their own mind. © Society for Science & the Public 2000 - 2013.
Link ID: 19299 - Posted: 02.27.2014
Sara Reardon

Two monkeys sit at computer screens, eyeing one another as they wait for a promised reward: apple juice. Each has a choice — it can either select a symbol that results in juice being shared equally, or pick one that delivers most of the juice to itself. But being selfish is risky. If its partner also chooses not to share, neither gets much juice. This game, the ‘prisoner’s dilemma’, is a classic test of strategy that involves the simultaneous evaluation of an opponent’s thinking. Researchers have now discovered — and manipulated — specific brain circuits in rhesus macaques (Macaca mulatta) that seem to be involved in the animals’ choices, and in their assessments of their partners’ choices. Investigating the connections could shed light on how social context affects decision-making in humans, and how disorders that affect social skills, such as autism spectrum disorder, disrupt brain circuitry. “Once we have identified that there are particular neural signals necessary to drive the processes, we can begin to tinker,” says Michael Platt, a neurobiologist at Duke University in Durham, North Carolina. Neurobiologists Keren Haroush and Ziv Williams of Harvard Medical School in Boston, Massachusetts, zoomed in on neural circuits in rhesus macaques by implanting electrode arrays into a brain area called the dorsal anterior cingulate cortex (dACC), which is associated with rewards and decision-making. The arrays recorded the activity of hundreds of individual neurons. When the monkeys played the prisoner’s dilemma (see ‘A juicy experiment’) against a computer program, they rarely chose to cooperate. But when they played with another monkey that they could see, they were several times more likely to choose to share the juice. © 2014 Nature Publishing Group
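The payoff structure that makes the prisoner's dilemma a dilemma is easy to make concrete. The juice amounts below are invented for illustration (the article does not give the actual quantities used with the macaques); the sketch just verifies the game's defining property: keeping the juice is each player's best response no matter what the partner does, yet mutual sharing pays both players more than mutual defection.

```python
# Hypothetical juice payoffs in ml, keyed by (my choice, partner's choice);
# each value is (my payoff, partner's payoff). Amounts are illustrative only.
PAYOFF = {
    ("share", "share"): (4, 4),
    ("share", "keep"):  (1, 6),
    ("keep",  "share"): (6, 1),
    ("keep",  "keep"):  (2, 2),
}

def best_response(partner_choice):
    # the choice maximizing my payoff, holding the partner's choice fixed
    return max(("share", "keep"),
               key=lambda me: PAYOFF[(me, partner_choice)][0])

# "keep" dominates: it is the best response to either partner choice...
print(best_response("share"), best_response("keep"))  # keep keep

# ...yet mutual sharing beats mutual keeping for both players.
mutual_share = PAYOFF[("share", "share")][0]
mutual_keep = PAYOFF[("keep", "keep")][0]
print(mutual_share > mutual_keep)  # True
```

This is why the monkeys' shift toward cooperation when facing a visible partner is interesting: against a pure payoff-maximizer, defection is always the individually rational move.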
By Lila Stanners

Beauty seems mysterious and subjective. Scientists have long attempted to explain why the same object can strike some individuals as breathtaking and others as repulsive. Now a study finds that applying stimulation to a certain brain area enhances people's aesthetic appreciation of visual images. First, participants viewed 70 abstract paintings and sketches and 80 representational (realistic) paintings and photographs and rated how much they liked each one. Then they rated a similar set of images after receiving transcranial direct-current stimulation or sham stimulation. Transcranial direct-current stimulation sends small electrical impulses to the brain through electrodes attached to the head. The technique is noninvasive and cannot be felt, so subjects in the trials were not aware when they received real stimulation. The researchers aimed the impulses at the left dorsolateral prefrontal cortex, an area just behind the brow that is known to be a region critical for emotional processing. They found that the stimulation increased participants' appreciation of representational images, according to the study published online in October 2013 in Social Cognitive and Affective Neuroscience. The scientists believe the stimulation facilitated a shift from object recognition to aesthetic appraisal for the figurative images; the abstract art was probably being processed by a different area of the brain. This study is one of many recent successful attempts at subtly altering cognition with noninvasive brain stimulation. Some experiments have found that stimulating certain areas allows people to solve math problems or puzzles that formerly had them stumped. Other work suggests these techniques can enhance motor learning, helping athletes or musicians improve at a new sport or a new instrument more rapidly.
Experts are quick to point out, however, that these effects are modest enhancements at best—thought induction remains firmly in the realm of science fiction. © 2014 Scientific American