Chapter 18. Attention and Higher Cognition
By Kevin Loria Music was among the least of Mr. B's concerns. As a 59-year-old Dutch man living with extremely severe obsessive compulsive disorder for 46 years, he had other things on his mind. His OCD was so severe it led to moderate anxiety and mild depression. Not only was his condition extreme, but it was also resistant to traditional treatment. It got so bad that he opted to receive an implant to stimulate his brain constantly with electricity — a treatment, called deep brain stimulation (DBS), that has been shown to successfully treat OCD in the past. It worked, but had a very peculiar side effect. As researchers write in a study published in the journal Frontiers in Behavioral Neuroscience, it turned Mr. B. into a Johnny Cash fanatic, though he'd never really listened to The Man in Black before. Mr. B. had listened to the same music for decades, but was never a devout music lover. He was a Rolling Stones and Beatles fan (with a preference for the Stones), and listened to Dutch music as well. But just months after flying to Minneapolis and having two sets of electrodes tunneled into his brain for the stimulation therapy, he had a mind-blowing run-in with the song "Ring of Fire" playing on the radio. Something about Cash's deep bass-baritone voice resonated with him at that moment. His life had already changed. After the surgical implants and therapy, his OCD had gone from extremely severe to mild, and his depression and anxiety were at a level lower than mild. But when he heard Cash croon, another change began. Mr. B. bought all the Johnny Cash music he could find and stopped listening to anything else — no more Beatles, no more Stones, no more Nederpop. Instead, he played Cash all the time, and especially loved the songs from the '70s and '80s. "Folsom Prison Blues," "Ring of Fire," and "Sunday Mornin' Comin' Down" are his favorites. They make him feel like a hero, he told doctors. © 2014 Business Insider, Inc.
After a string of scandals involving accusations of misconduct and retracted papers, social psychology is engaged in intense self-examination—and the process is turning out to be painful. This week, a global network of nearly 100 researchers unveiled the results of an effort to replicate 27 well-known studies in the field. In more than half of the cases, the result was a partial or complete failure. As the replicators see it, the failed do-overs are a healthy corrective. “Replication helps us make sure what we think is true really is true,” says Brent Donnellan, a psychologist at Michigan State University in East Lansing who has undertaken three recent replications of studies from other groups—all of which came out negative. “We are moving forward as a science,” he says. But rather than a renaissance, some researchers on the receiving end of this organized replication effort see an inquisition. “I feel like a criminal suspect who has no right to a defense and there is no way to win,” says psychologist Simone Schnall of the University of Cambridge in the United Kingdom, who studies embodied cognition, the idea that the mind is unconsciously shaped by bodily movement and the surrounding environment. Schnall’s 2008 study finding that hand-washing reduced the severity of moral judgment was one of those Donnellan could not replicate. About half of the replications are the work of Many Labs, a network of about 50 psychologists around the world. The results of their first 13 replications, released online in November, were greeted with a collective sigh of relief: Only two failed. Meanwhile, Many Labs participant Brian Nosek, a psychologist at the University of Virginia in Charlottesville, put out a call for proposals for more replication studies. After 40 rolled in, he and Daniël Lakens, a psychologist at Eindhoven University of Technology in the Netherlands, chose another 14 to repeat. © 2014 American Association for the Advancement of Science.
By Isaac Bédard Very few animals have revealed an ability to consciously think about the future—behaviors such as storing food for the winter are often viewed as a function of instinct. Now a team of anthropologists at the University of Zurich has evidence that wild orangutans have the capacity to perceive the future, prepare for it and communicate those future plans to other orangutans. The researchers observed 15 dominant male orangutans in Sumatra for several years. These males roam through immense swaths of dense jungle, emitting loud yells every couple of hours so that the females they mate with and protect can locate and follow them. The shouts also warn away any lesser males that might be in the vicinity. These vocalizations had been observed by primatologists before, but the new data reveal that the apes' last daily call, an especially long howl, is aimed in the direction they will travel in the morning—and the other apes take note. The females stop moving when they hear this special 80-second call, bed down for the night, and in the morning begin traveling in the direction indicated the evening before. The scientists believe that the dominant apes are planning their route in advance and communicating it to other orangutans in the area. They acknowledge, however, that the dominant males might not intend their long calls to have such an effect on their followers. Karin Isler, a Zurich anthropologist who co-authored the study in PLOS ONE last fall, explains, “We don't know whether the apes are conscious. This planning does not have to be conscious. But it is also more and more difficult to argue that they [do not have] some sort of mind of their own.” © 2014 Scientific American
By BENEDICT CAREY SAN DIEGO – The last match of the tournament had all the elements of a classic showdown, pitting style versus stealth, quickness versus deliberation, and the world’s foremost card virtuoso against its premier numbers wizard. If not quite Ali-Frazier or Williams-Sharapova, the duel was all the audience of about 100 could ask for. They had come to the first Extreme Memory Tournament, or XMT, to see a fast-paced, digitally enhanced memory contest, and that’s what they got. The contest, an unusual collaboration between industry and academic scientists, featured one-minute matches between 16 world-class “memory athletes” from all over the world as they met in a World Cup-like elimination format. The grand prize was $20,000; the potential scientific payoff was large, too. One of the tournament’s sponsors, the company Dart NeuroScience, is working to develop drugs for improved cognition. The other, Washington University in St. Louis, sent a research team with a battery of cognitive tests to determine what, if anything, sets memory athletes apart. Previous research was sparse and inconclusive. Yet as the two finalists, both Germans, prepared to face off — Simon Reinhard, 35, a lawyer who holds the world record in card memorization (a deck in 21.19 seconds), and Johannes Mallow, 32, a teacher with the record for memorizing digits (501 in five minutes) — the Washington group had one preliminary finding that wasn’t obvious. “We found that one of the biggest differences between memory athletes and the rest of us,” said Henry L. Roediger III, the psychologist who led the research team, “is in a cognitive ability that’s not a direct measure of memory at all but of attention.” People have been performing feats of memory for ages, scrolling out pi to hundreds of digits, or phenomenally long verses, or word pairs. 
Most store the studied material in a so-called memory palace, associating the numbers, words or cards with specific images they have already memorized; then they mentally place the associated pairs in a familiar location, like the rooms of a childhood home or the stops on a subway line. The Greek poet Simonides of Ceos is credited with first describing the method, in the fifth century B.C., and it has been vividly described in popular books, most recently “Moonwalking With Einstein,” by Joshua Foer. © 2014 The New York Times Company
By NATASHA SINGER Joseph J. Atick cased the floor of the Ronald Reagan Building and International Trade Center in Washington as if he owned the place. In a way, he did. He was one of the organizers of the event, a conference and trade show for the biometrics security industry. Perhaps more to the point, a number of the wares on display, like an airport face-scanning checkpoint, could trace their lineage to his work. A physicist, Dr. Atick is one of the pioneer entrepreneurs of modern face recognition. Having helped advance the fundamental face-matching technology in the 1990s, he went into business and promoted the systems to government agencies looking to identify criminals or prevent identity fraud. “We saved lives,” he said during the conference in mid-March. “We have solved crimes.” Thanks in part to his boosterism, the global business of biometrics — using people’s unique physiological characteristics, like their fingerprint ridges and facial features, to learn or confirm their identity — is booming. It generated an estimated $7.2 billion in 2012, according to reports by Frost & Sullivan. Making his rounds at the trade show, Dr. Atick, a short, trim man with an indeterminate Mediterranean accent, warmly greeted industry representatives at their exhibition booths. Once he was safely out of earshot, however, he worried aloud about what he was seeing. What were those companies’ policies for retaining and reusing consumers’ facial data? Could they identify individuals without their explicit consent? Were they running face-matching queries for government agencies on the side? Now an industry consultant, Dr. Atick finds himself in a delicate position. While promoting and profiting from an industry that he helped foster, he also feels compelled to caution against its unfettered proliferation. 
He isn’t so much concerned about government agencies that use face recognition openly for specific purposes — for example, the many state motor vehicle departments that scan drivers’ faces as a way to prevent license duplications and fraud. Rather, what troubles him is the potential exploitation of face recognition to identify ordinary and unwitting citizens as they go about their lives in public. Online, we are all tracked. But to Dr. Atick, the street remains a haven, and he frets that he may have abetted a technology that could upend the social order. © 2014 The New York Times Company
By ALAN SCHWARZ ATLANTA — More than 10,000 American toddlers 2 or 3 years old are being medicated for attention deficit hyperactivity disorder outside established pediatric guidelines, according to data presented on Friday by an official at the Centers for Disease Control and Prevention. The report, which found that toddlers covered by Medicaid are particularly prone to be put on medication such as Ritalin and Adderall, is among the first efforts to gauge the diagnosis of A.D.H.D. in children below age 4. Doctors at the Georgia Mental Health Forum at the Carter Center in Atlanta, where the data was presented, as well as several outside experts strongly criticized the use of medication in so many children that young. The American Academy of Pediatrics standard practice guidelines for A.D.H.D. do not even address the diagnosis in children 3 and younger — let alone the use of such stimulant medications, because their safety and effectiveness have barely been explored in that age group. “It’s absolutely shocking, and it shouldn’t be happening,” said Anita Zervigon-Hakes, a children’s mental health consultant to the Carter Center. “People are just feeling around in the dark. We obviously don’t have our act together for little children.” Dr. Lawrence H. Diller, a behavioral pediatrician in Walnut Creek, Calif., said in a telephone interview: “People prescribing to 2-year-olds are just winging it. It is outside the standard of care, and they should be subject to malpractice if something goes wrong with a kid.” Friday’s report was the latest to raise concerns about A.D.H.D. diagnoses and medications for American children beyond what many experts consider medically justified. Last year, a nationwide C.D.C. survey found that 11 percent of children ages 4 to 17 have received a diagnosis of the disorder, and that about one in five boys will get one during childhood. 
A vast majority are put on medications such as methylphenidate (commonly known as Ritalin) or amphetamines like Adderall, which often calm a child’s hyperactivity and impulsivity but also carry risks for growth suppression, insomnia and hallucinations. Only Adderall is approved by the Food and Drug Administration for children below age 6. However, because off-label use of methylphenidate in preschool children had produced some encouraging results, the most recent American Academy of Pediatrics guidelines authorized it in 4- and 5-year-olds — but only after formal training for parents and teachers to improve the child’s environment proved unsuccessful. © 2014 The New York Times Company
By Andrea Anderson Our knack for language helps us structure our thinking. Yet the ability to wax poetic about trinkets, tools or traits may not be necessary to think about them abstractly, as was once suspected. A growing body of evidence suggests nonhuman animals can group living and inanimate things based on less than obvious shared traits, raising questions about how creatures accomplish this task. In a study published last fall in the journal PeerJ, for example, Oakland University psychology researcher Jennifer Vonk investigated how well four orangutans and a western lowland gorilla from the Toronto Zoo could pair photographs of animals from the same biological groups. Vonk presented the apes with a touch-screen computer and got them to tap an image of an animal—for instance, a snake—on the screen. Then she showed each ape two side-by-side animal pictures: one from the same category as the animal in the original image and one from another—for example, images of a different reptile and a bird. When they correctly matched animal pairs, they received a treat such as nuts or dried fruit. When they got it wrong, they saw a black screen before beginning the next trial. After hundreds of such trials, Vonk found that all five apes could categorize other animals better than expected by chance (although some individuals were better at it than others). The researchers were impressed that the apes could learn to classify animals with vastly different visual characteristics together—such as turtles and snakes—suggesting the apes had developed concepts for reptiles and other categories of animals based on something other than shared physical traits. Dogs, too, seem to have better than expected abstract-thinking abilities. They can reliably recognize pictures of other dogs, regardless of breed, as a study in the July 2013 Animal Cognition showed. 
The results surprised scientists not only because dog breeds vary so widely in appearance but also because it had been unclear whether dogs could routinely identify fellow canines without the advantage of smell and other senses. Other studies have found feats of categorization by chimpanzees, bears and pigeons, adding up to a spate of recent research that suggests the ability to sort things abstractly is far more widespread than previously thought. © 2014 Scientific American
By DANIEL GOLEMAN Which will it be — the berries or the chocolate dessert? Homework or the Xbox? Finish that memo, or roam Facebook? Such quotidian decisions test a mental ability called cognitive control, the capacity to maintain focus on an important choice while ignoring other impulses. Poor planning, wandering attention and trouble inhibiting impulses all signify lapses in cognitive control. Now a growing stream of research suggests that strengthening this mental muscle, usually with exercises in so-called mindfulness, may help children and adults cope with attention deficit hyperactivity disorder and its adult equivalent, attention deficit disorder. The studies come amid growing disenchantment with the first-line treatment for these conditions: drugs. In 2007, researchers at the University of California, Los Angeles, published a study finding that the incidence of A.D.H.D. among teenagers in Finland, along with difficulties in cognitive functioning and related emotional disorders like depression, was virtually identical to rates among teenagers in the United States. The real difference? Most adolescents with A.D.H.D. in the United States were taking medication; most in Finland were not. “It raises questions about using medication as a first line of treatment,” said Susan Smalley, a behavior geneticist at U.C.L.A. and the lead author. In a large study published last year in The Journal of the American Academy of Child & Adolescent Psychiatry, researchers reported that while most young people with A.D.H.D. benefit from medications in the first year, these effects generally wane by the third year, if not sooner. “There are no long-term, lasting benefits from taking A.D.H.D. medications,” said James M. Swanson, a psychologist at the University of California, Irvine, and an author of the study. “But mindfulness seems to be training the same areas of the brain that have reduced activity in A.D.H.D.” © 2014 The New York Times Company
By DOLLY CHUGH, KATHERINE L. MILKMAN and MODUPE AKINOLA IN the world of higher education, we professors like to believe that we are free from the racial and gender biases that afflict so many other people in society. But is this self-conception accurate? To find out, we conducted an experiment. A few years ago, we sent emails to more than 6,500 randomly selected professors from 259 American universities. Each email was from a (fictional) prospective out-of-town student whom the professor did not know, expressing interest in the professor’s Ph.D. program and seeking guidance. These emails were identical and written in impeccable English, varying only in the name of the student sender. The messages came from students with names like Meredith Roberts, Lamar Washington, Juanita Martinez, Raj Singh and Chang Huang, names that earlier research participants consistently perceived as belonging to either a white, black, Hispanic, Indian or Chinese student. In total, we used 20 different names in 10 different race-gender categories (e.g. white male, Hispanic female). On a Monday morning, the emails went out — one email per professor — and then we waited to see which professors would write back to which students. We understood, of course, that some professors would naturally be unavailable or uninterested in mentoring. But we also knew that the average treatment of any particular type of student should not differ from that of any other — unless professors were deciding (consciously or not) which students to help on the basis of their race and gender. (This “audit” methodology has long been used to study intentional and unintentional bias in real-world decision-making, as it allows researchers to standardize much about the decision environment.) What did we discover? First comes the fairly good news, which we reported in a paper in Psychological Science. 
Despite not knowing the students, 67 percent of the faculty members responded to the emails, and remarkably, 59 percent of the responders even agreed to meet on the proposed date with a student about whom they knew little and who did not even attend their university. (We immediately wrote back to cancel those meetings.) © 2014 The New York Times Company
by Helen Thomson If you liked Inception, you're going to love this. People have been given the ability to control their dreams after a quick zap to their head while they sleep. Lucid dreaming is an intriguing state of sleep in which a person becomes aware that they are dreaming. As a result, they gain some element of control over what happens in their dream – for example, the dreamer could make a threatening character disappear or decide to fly to an exotic location. Researchers are interested in lucid dreaming because it can help probe what happens when we switch between conscious states, going from little to full awareness. In 2010, Ursula Voss at the J.W. Goethe University in Frankfurt, Germany, and her colleagues trained volunteers to move their eyes in a specific pattern during a lucid dream. By scanning their brains while they slept, Voss was able to show that lucid dreams coincided with elevated gamma brainwaves. This kind of brainwave occurs when groups of neurons synchronise their activity, firing together about 40 times a second. The gamma waves occurred mainly in areas situated towards the front of the brain, called the frontal and temporal lobes.
Perchance to dream
The team wanted to see whether gamma brainwaves caused the lucid dreams, or whether both were side effects of some other change. So Voss and her colleagues began another study in which they stimulated the brains of 27 sleeping volunteers, using a non-invasive technique called transcranial alternating current stimulation. © Copyright Reed Business Information Ltd.
Sam Kean For most of recorded history, human beings situated the mind — and by extension the soul — not within the brain but within the heart. When preparing mummies for the afterlife, for instance, ancient Egyptian priests removed the heart in one piece and preserved it in a ceremonial jar; in contrast, they scraped out the brain through the nostrils with iron hooks, tossed it aside for animals, and filled the empty skull with sawdust or resin. (This wasn’t a snarky commentary on their politicians, either—they considered everyone’s brain useless.) Most Greek thinkers also elevated the heart to the body’s summa. Aristotle pointed out that the heart had thick vessels to shunt messages around, whereas the brain had wispy, effete wires. The heart furthermore sat in the body’s center, appropriate for a commander, while the brain sat in exile up top. The heart developed first in embryos, and it responded in sync with our emotions, pounding faster or slower, while the brain just sort of sat there. Ergo, the heart must house our highest faculties. Meanwhile, though, some physicians had always had a different perspective on where the mind came from. They’d simply seen too many patients get beaned in the head and lose some higher faculty to think it all a coincidence. Doctors therefore began to promote a brain-centric view of human nature. And despite some heated debates over the centuries—especially about whether the brain had specialized regions or not—by the 1600s most learned men had enthroned the mind within the brain. A few brave scientists even began to search for that anatomical El Dorado: the exact seat of the soul within the brain. One such explorer was Swedish philosopher Emanuel Swedenborg, one of the oddest ducks to ever waddle across the stage of history. © 2014 Salon Media Group, Inc.
By Indre Viskontas and Chris Mooney When the audio of Los Angeles Clippers owner Donald Sterling telling a female friend not to "bring black people" to his team's games hit the internet, the condemnations were immediate. It was clear to all that Sterling was a racist, and the punishment was swift: The NBA banned him for life. It was, you might say, a pretty straightforward case. When you take a look at the emerging science of what motivates people to behave in a racist or prejudiced way, though, matters quickly grow complicated. In fact, if there's one cornerstone finding when it comes to the psychological underpinnings of prejudice, it's that out-and-out or "explicit" racists—like Sterling—are just one part of the story. Perhaps far more common are cases of so-called "implicit" prejudice, where people harbor subconscious biases, of which they may not even be aware, but that come out in controlled psychology experiments. Much of the time, these are not the sort of people whom we would normally think of as racists. "They might say they think it's wrong to be prejudiced," explains New York University neuroscientist David Amodio, an expert on the psychology of intergroup bias. Amodio says that white participants in his studies "might write down on a questionnaire that they are positive in their attitudes towards black people…but when you give them a behavioral measure, of how they respond to pictures of black people, compared with white people, that's when we start to see the effects come out." You can listen to our interview with Amodio on the Inquiring Minds podcast below: Welcome to the world of implicit racial biases, which research suggests are all around us, and which can be very difficult for even the most well-intentioned person to control. ©2014 Mother Jones
By Diana Kwon Would you rather have $50 now or $100 two weeks from now? Even though the $100 is obviously the better choice, many people will opt for the $50. Both humans and animals show this tendency to place lower value on later rewards, a behavior known as temporal discounting. High rates of temporal discounting can lead to impulsive behavior, and at its worst, too much of this “now bias” is associated with pathological gambling, attention deficit hyperactivity disorder and drug addiction. What determines if you’ll be an impulsive decision-maker? New evidence suggests that for women, estrogen levels might be a factor. In a recent study published in the Journal of Neuroscience, Charlotte Boettiger and her team at the University of North Carolina revealed that greater increases in estrogen levels across the menstrual cycle led to less impulsive decision making. The researchers tested the “now bias” in 87 women between the ages of 18 and 40 at two different points in their menstrual cycle – in the menstrual phase when estrogen levels are low and the follicular phase when estrogen levels are high. Participants were given a delay-discounting task where they had to choose between two options: a certain sum of money at a later date or a discounted amount immediately (e.g. $100 in one week or $70 today). Subjects showed a greater bias toward the immediate choice during the menstrual phase of the cycle, when estrogen levels were low. Estrogen levels vary between women and can change with factors like stress and age. When the researchers measured amounts of estradiol (the dominant form of estrogen) from the saliva in a subset of the participants at the two points in their menstrual cycles, they found that not all of them showed a detectable increase. Only those with a measurable rise in estradiol showed a significant change in impulsive decision-making. © 2014 Scientific American
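The "$50 now or $100 later" trade-off in delay-discounting tasks like the one above is standardly modeled with a hyperbolic discounting function, in which a delayed reward's subjective value is V = A / (1 + kD) for amount A, delay D, and an impulsivity parameter k. The excerpt does not give the study's model, so this is a minimal illustrative sketch: the functional form is the conventional one from the discounting literature, and the k values are purely illustrative, not taken from the study.

```python
def discounted_value(amount, delay_days, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

def prefers_now(now_amount, later_amount, delay_days, k):
    """True when the immediate reward outvalues the discounted delayed one."""
    return now_amount > discounted_value(later_amount, delay_days, k)

# $50 now vs. $100 in 14 days:
# a patient chooser (small k) waits; an impulsive chooser (large k) takes the $50.
print(prefers_now(50, 100, 14, k=0.01))  # False: 100/(1+0.14) ~ 87.7 > 50, so wait
print(prefers_now(50, 100, 14, k=0.20))  # True:  100/(1+2.80) ~ 26.3 < 50, so take $50
```

A larger k makes value fall off faster with delay, which is why high temporal discounting rates correspond to the impulsive "now bias" described above.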
By Scott Barry Kaufman The latest neuroscience of aesthetics suggests that the experience of visual, musical, and moral beauty all recruit the same part of the “emotional brain”: field A1 of the medial orbitofrontal cortex (mOFC). But what about mathematics? Plato believed that mathematical beauty was the highest form of beauty since it is derived from the intellect alone and is concerned with universal truths. Similarly, the art critic Clive Bell noted: “Art transports us from the world of man’s activity to a world of aesthetic exaltation. For a moment we are shut off from human interests; our anticipations and memories are arrested; we are lifted above the stream of life. The pure mathematician rapt in his studies knows a state of mind which I take to be similar, if not identical. He feels an emotion for his speculations which arises from no perceived relation between them and the lives of men, but springs, inhuman or super-human, from the heart of an abstract science. I wonder, sometimes, whether the appreciators of art and of mathematical solutions are not even more closely allied.” A new study suggests that Bell might be right. Semir Zeki and colleagues recruited 16 mathematicians at the postgraduate or postdoctoral level as well as 12 non-mathematicians. All participants viewed a series of mathematical equations in the fMRI scanner and were asked to rate the beauty of the equations as well as their understanding of each equation. After they were out of the scanner, they filled out a questionnaire in which they reported their level of understanding of each equation as well as their emotional experience viewing the equations. © 2014 Scientific American
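For illustration (a detail widely reported from Zeki's study, though not stated in the excerpt above): the equation the mathematicians most consistently rated beautiful was Euler's identity, which ties together five fundamental constants in a single line:

```latex
e^{i\pi} + 1 = 0
```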
By Felicity Muth Imagine that you walk into a room, where three people are sitting, facing you. Their faces are oriented towards you, but all three of them have their eyes directed towards the left side of the room. You would probably follow their gaze to the point where they were looking (if you weren’t too unnerved to take your eyes off these odd people). As a social species, we are particularly cued in to social cues like following others’ gazes. However, we’re not the only animals that follow the gazes of members of our species: great apes, monkeys, lemurs, dogs, goats, birds and even tortoises follow each other’s gazes too. However, we don’t all follow gazes to the same extent. One species of macaque monkey (the stumptailed macaque) follows gazes a lot more than other macaque species, bonobos do it more than chimpanzees and human children follow gazes a lot more than other great ape species do. Species also differ in their understanding of what the other animal is looking at. For example, if we saw a person gazing at a point, and between them and this point was a barrier, whether the barrier was solid or transparent would affect how far we followed their gaze. This is because we imagine ourselves in their physical position and what they might be able to see. Bonobos and chimpanzees can also do this, but not the orang-utan. Like us, great apes and old world monkeys also will follow a gaze, but then look back at the individual gazing if they don’t see what the individual is gazing at (‘are you going crazy or am I just not seeing what you’re seeing?’). Capuchin and spider monkeys don’t seem to do this. So, even though a lot of animals are capable of following the gazes of others, there is a lot of variation in the extent and flexibility of this behaviour. A recent study looked to see whether chimpanzees, bonobos, orang-utans and humans would be more likely to follow their own species’ gazes than another species. © 2014 Scientific American
Brian Owens If you think you know what you just said, think again. People can be tricked into believing they have just said something they did not, researchers report this week. The dominant model of how speech works is that it is planned in advance — speakers begin with a conscious idea of exactly what they are going to say. But some researchers think that speech is not entirely planned, and that people know what they are saying in part through hearing themselves speak. So cognitive scientist Andreas Lind and his colleagues at Lund University in Sweden wanted to see what would happen if someone said one word, but heard themselves saying another. “If we use auditory feedback to compare what we say with a well-specified intention, then any mismatch should be quickly detected,” he says. “But if the feedback is instead a powerful factor in a dynamic, interpretative process, then the manipulation could go undetected.” In Lind’s experiment, participants took a Stroop test — in which a person is shown, for example, the word ‘red’ printed in blue and is asked to name the colour of the type (in this case, blue). During the test, participants heard their responses through headphones. The responses were recorded so that Lind could occasionally play back the wrong word, giving participants auditory feedback of their own voice saying something different from what they had just said. Lind chose the words ‘grey’ and ‘green’ (grå and grön in Swedish) to switch, as they sound similar but have different meanings. © 2014 Nature Publishing Group
by Bethany Brookshire When you are waiting with a friend to cross a busy intersection, car engines running, horns honking and the city humming all around you, your brain is busy processing all those sounds. Somehow, though, the human auditory system can filter out the extraneous noise and allow you to hear what your friend is telling you. But if you tried to ask your iPhone a question, Siri might have a tougher time. A new study shows how the mammalian brain can distinguish the signal from the noise. Brain cells in the primary auditory cortex can both turn down the noise and increase the gain on the signal. The results show how the brain processes sound in noisy environments, and might eventually help in the development of better voice recognition devices, including improvements to cochlear implants for those with hearing loss. Not to mention getting Siri to understand you on a chaotic street corner. Nima Mesgarani and colleagues at the University of Maryland in College Park were interested in how mammalian brains separate speech from background noise. Ferrets have an auditory system that is extremely similar to that of humans. So the researchers looked at the A1 area of the ferret cortex, which corresponds to the human primary auditory cortex. Equipped with carefully implanted electrodes, the alert ferrets listened to both ferret sounds and parts of human speech. The ferret sounds and speech were presented alone, against a background of white noise, against pink noise (noise with equal energy at all octaves, which sounds lower in pitch than white noise) and against reverberation. The researchers then took the neural signals recorded from the electrodes and used a computer simulation to reconstruct the sounds the animal was hearing. In results published April 21 in Proceedings of the National Academy of Sciences, the researchers show the ferret brain is quite good at detecting both ferret sounds and speech in all three noisy conditions. 
“We found that the noise is drastically decreased, as if the brain of the ferret filtered it out and recovered the cleaned speech,” Mesgarani says. © Society for Science & the Public 2000 - 2013.
Intelligence is hard to test, but one aspect of being smart is self-control, and a version of the old shell game that works for many species suggests that brain size is very important. When it comes to animal intelligence, says Evan MacLean, co-director of Duke University’s Canine Cognition Center, don’t ask which species is smarter. “Smarter at what?” is the right question. Many different tasks, requiring many different abilities, are given to animals to measure cognition. And narrowing the question takes on particular importance when the comparisons are across species. So Dr. MacLean, Brian Hare and Charles Nunn, also Duke scientists who study animal cognition, organized a worldwide effort by 58 scientists to test 36 species on a single ability: self-control. This capacity is thought to be part of thinking because it enables animals to override a strong, nonthinking impulse, and to solve a problem that requires some analysis of the situation in front of them. The testing program, which took several international meetings to arrange, and about seven years to complete, looked at two common tasks that are accepted ways to judge self-control. It then tried to correlate how well the animals did on the tests with other measures, like brain size, diet and the size of their normal social groups. Unsurprisingly, the great apes did very well. Dogs and baboons did pretty well. And squirrel monkeys, marmosets and some birds were among the worst performers. Surprisingly, absolute brain size turned out to be a much better predictor of success than relative brain size, which has been thought to be a good indication of intelligence. Social group size was not significant, but variety of diet was. The paper, published last week in the journal Proceedings of the National Academy of Sciences, is accompanied online by videos showing the animals doing what looks for all the world like the shell game in which a player has to guess where the pea is. © 2014 The New York Times Company
By Christof Koch Quantum physicist Wolfgang Pauli expressed disdain for sloppy, nonsensical theories by denigrating them as “not even wrong,” meaning they were just empty conjectures that could be quickly dismissed. Unfortunately, many remarkably popular theories of consciousness are of this ilk—the idea, for instance, that our experiences can somehow be explained by the quantum theory that Pauli himself helped to formulate in the early 20th century. An even more far-fetched idea holds that consciousness emerged only a few thousand years ago, when humans realized that the voices in their head came not from the gods but from their own internal spoken narratives. Not every theory of consciousness, however, can be dismissed as just so much intellectual flapdoodle. During the past several decades, two distinct frameworks for explaining what consciousness is and how the brain produces it have emerged, each compelling in its own way. Each framework seeks to explain a vast storehouse of observations from both neurological patients and sophisticated laboratory experiments. One of these—the Integrated Information Theory—devised by psychiatrist and neuroscientist Giulio Tononi, which I have described before in these pages [see “Ubiquitous Minds”; Scientific American Mind, January/February 2014], uses a mathematical expression to represent conscious experience and then derives predictions about which circuits in the brain are essential to produce these experiences. [Full disclosure: I have worked with Tononi on this theory.] In contrast, the Global Workspace Model of consciousness moves in the opposite direction. Its starting point is behavioral experiments that manipulate the conscious experience of people in a very controlled setting. It then seeks to identify the areas of the brain that underlie these experiences. © 2014 Scientific American
By Laurence Steinberg I’m not sure whether it’s a badge of honor or a mark of shame, but a paper I published a few years ago is now ranked No. 8 on a list of studies that other psychologists would most like to see replicated. Good news: People find the research interesting. Bad news: They don’t believe it. The paper in question, written with my former student Margo Gardner, appeared in the journal Developmental Psychology in July 2005. It described a study in which we randomly assigned subjects to play a video driving game, either alone or with two same-age friends watching them. The mere presence of peers made teenagers take more risks and crash more often, but no such effect was observed among adults. I find my colleagues’ skepticism surprising. Most people recall that as teenagers, they did far more reckless things when with their friends than when alone. Data from the Federal Bureau of Investigation indicate that many more juvenile crimes than adult crimes are committed in groups. And driving statistics conclusively show that having same-age passengers in the car substantially increases the risk of a teen driver’s crashing but has no similar impact when an adult is behind the wheel. Then again, I’m aware that our study challenged many psychologists’ beliefs about the nature of peer pressure, for it showed that the influence of peers on adolescent risk taking doesn’t rely solely on explicit encouragement to behave recklessly. Our findings also undercut the popular idea that the higher rate of real-world risk taking in adolescent peer groups is a result of reckless teenagers’ being more likely to surround themselves with like-minded others. My colleagues and I have replicated our original study of peer influences on adolescent risk taking several times since 2005. We have also shown that the reason teenagers take more chances when their peers are around is partly because of the impact of peers on the adolescent brain’s sensitivity to rewards. 
In a study of people playing our driving game, my colleague Jason Chein and I found that when teens were with people their own age, their brains’ reward centers became hyperactivated, which made them more easily aroused by the prospect of a potentially pleasurable experience. This, in turn, inclined teenagers to pay more attention to the possible benefits of a risky choice than to the likely costs, and to make risky decisions rather than play it safe. Peers had no such effect on adults’ reward centers, though. © 2014 The New York Times Company