Chapter 14. Attention and Consciousness
By Brian Owens

If you think you know what you just said, think again. People can be tricked into believing they have just said something they did not, researchers report this week. The dominant model of how speech works is that it is planned in advance — speakers begin with a conscious idea of exactly what they are going to say. But some researchers think that speech is not entirely planned, and that people know what they are saying in part through hearing themselves speak. So cognitive scientist Andreas Lind and his colleagues at Lund University in Sweden wanted to see what would happen if someone said one word, but heard themselves saying another. “If we use auditory feedback to compare what we say with a well-specified intention, then any mismatch should be quickly detected,” he says. “But if the feedback is instead a powerful factor in a dynamic, interpretative process, then the manipulation could go undetected.” In Lind’s experiment, participants took a Stroop test — in which a person is shown, for example, the word ‘red’ printed in blue and is asked to name the colour of the type (in this case, blue). During the test, participants heard their responses through headphones. The responses were recorded so that Lind could occasionally play back the wrong word, giving participants auditory feedback of their own voice saying something different from what they had just said. Lind chose the words ‘grey’ and ‘green’ (grå and grön in Swedish) to switch, as they sound similar but have different meanings. © 2014 Nature Publishing Group
by Bethany Brookshire

When you are waiting with a friend to cross a busy intersection, car engines running, horns honking and the city humming all around you, your brain is busy processing all those sounds. Somehow, though, the human auditory system can filter out the extraneous noise and allow you to hear what your friend is telling you. But if you tried to ask your iPhone a question, Siri might have a tougher time. A new study shows how the mammalian brain can distinguish the signal from the noise. Brain cells in the primary auditory cortex can both turn down the noise and increase the gain on the signal. The results show how the brain processes sound in noisy environments, and might eventually help in the development of better voice recognition devices, including improvements to cochlear implants for those with hearing loss. Not to mention getting Siri to understand you on a chaotic street corner. Nima Mesgarani and colleagues at the University of Maryland in College Park were interested in how mammalian brains separate speech from background noise. Ferrets have an auditory system that is extremely similar to that of humans. So the researchers looked at the A1 area of the ferret cortex, which corresponds to the human A1 region. Equipped with carefully implanted electrodes, the alert ferrets listened to both ferret sounds and parts of human speech. The ferret sounds and speech were presented alone, against a background of white noise, against pink noise (noise with equal energy at all octaves that sounds lower in pitch than white noise) and against reverberation. The researchers then took the neural signals recorded from the electrodes and used a computer simulation to reconstruct the sounds the animal was hearing. In results published April 21 in Proceedings of the National Academy of Sciences, the researchers show the ferret brain is quite good at detecting both ferret sounds and speech in all three noisy conditions.
“We found that the noise is drastically decreased, as if the brain of the ferret filtered it out and recovered the cleaned speech,” Mesgarani says. © Society for Science & the Public 2000 - 2013.
Intelligence is hard to test, but one aspect of being smart is self-control, and a version of the old shell game that works for many species suggests that brain size is very important. When it comes to animal intelligence, says Evan MacLean, co-director of Duke University’s Canine Cognition Center, don’t ask which species is smarter. “Smarter at what?” is the right question. Many different tasks, requiring many different abilities, are given to animals to measure cognition. And narrowing the question takes on particular importance when the comparisons are across species. So Dr. MacLean, Brian Hare and Charles Nunn, also Duke scientists who study animal cognition, organized a worldwide effort by 58 scientists to test 36 species on a single ability: self-control. This capacity is thought to be part of thinking because it enables animals to override a strong, nonthinking impulse, and to solve a problem that requires some analysis of the situation in front of them. The testing program, which took several international meetings to arrange, and about seven years to complete, looked at two common tasks that are accepted ways to judge self-control. It then tried to correlate how well the animals did on the tests with other measures, like brain size, diet and the size of their normal social groups. Unsurprisingly, the great apes did very well. Dogs and baboons did pretty well. And squirrel monkeys, marmosets and some birds were among the worst performers. Surprisingly, absolute brain size turned out to be a much better predictor of success than relative brain size, which has been thought to be a good indication of intelligence. Social group size was not significant, but variety of diet was. The paper, published last week in the journal Proceedings of the National Academy of Sciences, is accompanied online by videos showing the animals doing what looks for all the world like the shell game in which a player has to guess where the pea is. © 2014 The New York Times Company
By Christof Koch

Quantum physicist Wolfgang Pauli expressed disdain for sloppy, nonsensical theories by denigrating them as “not even wrong,” meaning they were just empty conjectures that could be quickly dismissed. Unfortunately, many remarkably popular theories of consciousness are of this ilk—the idea, for instance, that our experiences can somehow be explained by the quantum theory that Pauli himself helped to formulate in the early 20th century. An even more far-fetched idea holds that consciousness emerged only a few thousand years ago, when humans realized that the voices in their head came not from the gods but from their own internal spoken narratives. Not every theory of consciousness, however, can be dismissed as just so much intellectual flapdoodle. During the past several decades, two distinct frameworks for explaining what consciousness is and how the brain produces it have emerged, each compelling in its own way. Each framework seeks to explain a vast storehouse of observations from both neurological patients and sophisticated laboratory experiments. One of these—the Integrated Information Theory, devised by psychiatrist and neuroscientist Giulio Tononi, which I have described before in these pages [see “Ubiquitous Minds”; Scientific American Mind, January/February 2014]—uses a mathematical expression to represent conscious experience and then derives predictions about which circuits in the brain are essential to produce these experiences. [Full disclosure: I have worked with Tononi on this theory.] In contrast, the Global Workspace Model of consciousness moves in the opposite direction. Its starting point is behavioral experiments that manipulate the conscious experience of people in a very controlled setting. It then seeks to identify the areas of the brain that underlie these experiences. © 2014 Scientific American
By LAURENCE STEINBERG

I’m not sure whether it’s a badge of honor or a mark of shame, but a paper I published a few years ago is now ranked No. 8 on a list of studies that other psychologists would most like to see replicated. Good news: People find the research interesting. Bad news: They don’t believe it. The paper in question, written with my former student Margo Gardner, appeared in the journal Developmental Psychology in July 2005. It described a study in which we randomly assigned subjects to play a video driving game, either alone or with two same-age friends watching them. The mere presence of peers made teenagers take more risks and crash more often, but no such effect was observed among adults. I find my colleagues’ skepticism surprising. Most people recall that as teenagers, they did far more reckless things when with their friends than when alone. Data from the Federal Bureau of Investigation indicate that many more juvenile crimes than adult crimes are committed in groups. And driving statistics conclusively show that having same-age passengers in the car substantially increases the risk of a teen driver’s crashing but has no similar impact when an adult is behind the wheel. Then again, I’m aware that our study challenged many psychologists’ beliefs about the nature of peer pressure, for it showed that the influence of peers on adolescent risk taking doesn’t rely solely on explicit encouragement to behave recklessly. Our findings also undercut the popular idea that the higher rate of real-world risk taking in adolescent peer groups is a result of reckless teenagers’ being more likely to surround themselves with like-minded others. My colleagues and I have replicated our original study of peer influences on adolescent risk taking several times since 2005. We have also shown that teenagers take more chances when their peers are around partly because of the impact of peers on the adolescent brain’s sensitivity to rewards.
In a study of people playing our driving game, my colleague Jason Chein and I found that when teens were with people their own age, their brains’ reward centers became hyperactivated, which made them more easily aroused by the prospect of a potentially pleasurable experience. This, in turn, inclined teenagers to pay more attention to the possible benefits of a risky choice than to the likely costs, and to make risky decisions rather than play it safe. Peers had no such effect on adults’ reward centers, though. © 2014 The New York Times Company
Does reading faster mean reading better? That’s what speed-reading apps claim, promising to boost not just the number of words you read per minute, but also how well you understand a text. There’s just one problem: The same thing that speeds up reading actually gets in the way of comprehension, according to a new study. When you read at your natural pace, your eyes move back and forth across a sentence, rather than plowing straight through to the end. Apps like Spritz or the aptly named Speed Read are built around the idea that these eye movements, called saccades, are a redundant waste of time. It’s more efficient, their designers claim, to present words one at a time in a fixed spot on a screen, discouraging saccades and helping you get through a text more quickly. This method, called rapid serial visual presentation (RSVP), has been controversial since the 1980s, when tests showed it impaired comprehension, though researchers weren’t quite sure why. With a new crop of speed-reading products on the market, psychologists decided to dig a bit more and uncovered a simple explanation for RSVP’s flaw: Every so often, we need to scan backward and reread for a better grasp of the material. Researchers demonstrated that need by presenting 40 college students with ambiguous, unpunctuated sentences ("While the man drank the water that was clear and cold overflowed from the toilet”) while following their subjects’ gaze with an eye-tracking camera. Half the time, the team crossed out words participants had already read, preventing them from rereading (“xxxxx xxx xxx drank the water …”). Following up with basic yes-no questions about each sentence’s content, they found that comprehension dropped by about 25% in trials that blocked rereading versus those that didn’t, the researchers report online this month in Psychological Science. Crucially, the drop was about the same when subjects could, but simply hadn’t, reread parts of a sentence. 
Nor did the results differ much when using ambiguous sentences or their less confusing counterparts (“While the man slept the water …”). Turns out rereading isn’t a waste of time—it’s essential for understanding. © 2014 American Association for the Advancement of Science.
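The one-word-at-a-time scheme these apps rely on is simple to sketch: at a fixed reading rate, each word gets an equal slice of time at the same screen position, which is precisely what leaves no opportunity for the regressive saccades the study found are needed. The function below is an illustrative sketch only, not code from Spritz or Speed Read (real apps also adjust duration for word length and punctuation):

```python
def rsvp_schedule(text, wpm=300):
    """Return (word, duration_ms) pairs for a one-word-at-a-time display.

    Presenting words serially at a fixed point is what discourages the
    backward eye movements (rereading) that support comprehension.
    """
    per_word_ms = 60_000 / wpm  # 60,000 ms per minute spread over `wpm` words
    return [(word, per_word_ms) for word in text.split()]

# At 300 words per minute, each word is displayed for 200 ms.
schedule = rsvp_schedule("While the man drank the water overflowed", wpm=300)
```

A real RSVP display would then show each word for its scheduled duration; the fixed per-word budget makes clear why pausing to reread is impossible by design.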
By Stephen L. Macknik and Susana Martinez-Conde

The Best Illusion of the Year Contest brings scientific and popular attention to perceptual oddities. Anyone can submit an illusion to next year's contest; see http://illusionoftheyear.com/submission-instructions for the rules. Decked out in a mask, cape and black spandex, a fit young man leaps onto the stage, one hand raised high, and bellows, “I am Japaneeeese Bat-Maaaaaan!” in a thick accent. The performer is neither actor nor acrobat. He is a mathematician named Jun Ono, hailing from Meiji University in Japan. Ono's single bound, front and center, at the Philharmonic Center for the Arts in Naples, Fla. (now called Artis-Naples), was the opening act of the ninth Best Illusion of the Year Contest, held May 13, 2013. Four words into the event, we knew Ono had won. Aside from showcasing new science, the contest celebrates our brain's wonderful and mistaken sense that we can accurately see, smell, hear, taste and touch the world around us. In reality, accuracy is not the brain's forte, as the illusion creators competing each year will attest. Yes, there is a real world out there, and you do perceive (some of) the events that occur around you, but you have never actually lived in reality. Instead your brain gathers pieces of data from your sensory systems—some of which are quite subjective or frankly wrong—and builds a simulation of the world. This simulation, which some call consciousness, becomes the universe in which you live. It is the only thing you have ever perceived. Your brain uses incomplete and flawed information to build this mental model and relies on quirky neural algorithms to often—but not always—obviate the flaws. Let us take a spin through some of the world's top illusions and their contributions to the science of perception. (To see videos of these illusions, visit ScientificAmerican.com/may2014/illusions.) © 2014 Scientific American
It looks like a standardized test question: Is the sum of two numbers on the left or the single number on the right larger? Rhesus macaques that have been trained to associate numerical values with symbols can get the answer right, even if they haven’t passed a math class. The finding doesn’t just reveal a hidden talent of the animals—it also helps show how the mammalian brain encodes the values of numbers. Previous research has shown that chimpanzees can add single-digit numbers. But scientists haven’t explained exactly how, in the human or the monkey brain, numbers are being represented or this addition is being carried out. Now, a new study helps begin to answer those questions. Neurobiologist Margaret Livingstone of Harvard Medical School in Boston and her colleagues had already taught three rhesus macaques (Macaca mulatta) in the lab to associate the Arabic numbers 0 through 9 and 15 select letters with the values zero through 25. When given the choice between two symbols, monkeys reliably chose the larger to get a correspondingly larger number of droplets of water, apple juice, or orange soda as a reward. To test whether the monkeys could add these values, the researchers began giving them a choice between a sum and a single symbol rather than two single symbols. Within 4 months, the monkeys had learned how the task worked and were able to effectively add two symbols and compare the sum to a third, single symbol. To ensure that the monkeys hadn’t simply memorized every possible combination of symbols and associated a value with the combination—this wouldn’t be true addition—Livingstone’s team next taught the animals an entirely new set of symbols —Tetris-like blocks rather than letters and numbers. With the new symbols, the monkeys were again able to add—this time calculating the value of combinations they’d never seen before and confirming the ability to do basic addition, the team reports online today in the Proceedings of the National Academy of Sciences. 
© 2014 American Association for the Advancement of Science.
Forget cellphones; rambunctious friends may be the riskiest driver distraction for teens, according to a new study. Researchers installed video and G-force recorders in the vehicles of 52 newly licensed high school students for 6 months. They found that certain distractions, such as fiddling with the car’s controls and eating, were not strongly related to serious incidents, which included collisions and evasive maneuvers. However, when passengers in the car were engaged in loud conversation, teen drivers were six times more likely to have a serious incident. What’s more, horseplay increased risk by a factor of three whereas cellphone use only doubled it, the team reported online this week in the Journal of Adolescent Health. Forty-three states restrict newly licensed drivers from having more than one other teen in the car, and the study authors say their data suggest that's good policy. © 2014 American Association for the Advancement of Science.
By CLYDE HABERMAN

Her surname in Italian means “slave,” and is pronounced skee-AH-vo. Grim as it may be, the word could apply to Theresa Marie Schiavo, even with its Americanized pronunciation: SHY-vo. For 15 years, Terri Schiavo was effectively a slave — slave to an atrophied brain that made her a prisoner in her body, slave to bitter fighting between factions of her family, slave to seemingly endless rounds of court hearings, slave to politicians who injected themselves into her tragedy and turned her ordeal into a national morality play. To this day, the name Schiavo is virtually a synonym for epic questions about when life ends and who gets to make that determination. It would be nice to believe that since Ms. Schiavo’s death nine years ago, America has found clear answers. Of course it has not, as is evident in Retro Report’s exploration of the Schiavo case, the latest video documentary in a weekly series that examines major news stories from the past and their aftermath. Ms. Schiavo, a married woman living in St. Petersburg, Fla., was 26 years old when she collapsed on Feb. 25, 1990. While her potassium level was later found to be abnormally low, an autopsy drew no conclusion as to why she had lost consciousness. Whatever the cause, her brain was deprived of oxygen long enough to leave her in a “persistent vegetative state,” a condition that is not to be confused with brain death. She could breathe without mechanical assistance. But doctors concluded that she was incapable of thought or emotion. After her death on March 31, 2005, an autopsy determined that the brain damage was irreversible. Between her collapse — when she “departed this earth,” as her grave marker puts it — and her death — when she became “at peace” — the nation bore witness to an increasingly acrimonious battle between her husband, Michael Schiavo, and her parents, Robert and Mary Schindler. Mr. Schiavo wanted to detach the feeding tube that gave her nourishment.
Terri never would have wanted to be kept alive that way, he said. The Schindlers insisted that the tube be kept in place. That, they said, is what their daughter would have wanted. To Mr. Schiavo, the woman he had married was gone. To the Schindlers, a sentient human was still in that body. © 2014 The New York Times Company
By DENISE GRADY

People with severe brain injuries sometimes emerge from a coma awake but unresponsive, leaving families with painful questions. Are they aware? Can they think and feel? Do they have any chance of recovery? A new study has found that PET scans may help answer these wrenching questions. It found that a significant number of people labeled vegetative had received an incorrect diagnosis and actually had some degree of consciousness and the potential to improve. Previous studies using electroencephalogram machines and M.R.I. scanners have also found signs of consciousness in supposedly vegetative patients. “I think these patients are kind of neglected by both medicine and society,” said Dr. Steven Laureys, an author of the new study and the director of the Coma Science Group at the University of Liège in Belgium. “Many of them don’t even see a medical doctor or a specialist for years. So I think it’s very important to ask the question, are they unconscious?” In the United States, 100,000 to 300,000 people are thought to be minimally conscious, and an additional 25,000 are vegetative. In Belgium, the combined incidence of the two conditions is about 150 new cases per year, Dr. Laureys said. An article about the new research was published on Tuesday in The Lancet. Dr. Laureys and his colleagues studied 122 patients with brain injuries, including 41 who had been declared vegetative — awake but with no behavioral signs of awareness. People who are vegetative for a year are thought to have little or no chance of recovering, and the condition can become grounds for withdrawing medical treatment. Terri Schiavo, in a vegetative state for 15 years, died in 2005 in Florida after courts allowed the removal of her feeding tube. © 2014 The New York Times Company
by Bethany Brookshire

Every hipster knows that something is only cool before it becomes popular. There’s no point in liking a band once it hits the big time. That shirt is no good once it’s no longer ironic. And it’s certainly not enough to go clean shaven or grow a short beard — that’s much too mainstream. Recent years have seen a resurgence of moustaches, mutton chops and Fu Manchus. A style that really stands out sticks it to conformity. It turns out that when people buck the facial hair trend, they may end up making themselves more attractive. A new study published April 16 in Biology Letters shows that either clean-shaven or fully bearded looks become more attractive when they are rare in the population. The study suggests that humans may practice what’s called negative frequency-dependent selection — people rate rare looks as more attractive than they might otherwise. But when we try to figure out why, the interpretations can get pretty hairy. In every population, there is variation, both in genetics and in how individuals look. But at first blush, this variation doesn’t make a lot of sense. If one particular look is the most attractive and best for the population, sexual selection should make a species converge on a single, popular look. For example, if the best male guppies have stripes, soon all male guppies will have stripes, as females will only mate with stripey males. But in nature, this is clearly not the case. Guppies come in a wild variety of patterns, and so do humans. In guppies, this variation is a result of negative frequency-dependent selection: Female guppies prefer male guppies that look unusual compared to others, rather than guppies that share common features. This helps keep looks and genes variable, a distinct advantage for the species. So an individual guppy’s attractiveness doesn’t just depend on his shining character, it depends on how rare his looks are in relation to other guppies. © Society for Science & the Public 2000 - 2013
Simon Baron-Cohen, professor of developmental psychopathology at the University of Cambridge and director of the Autism Research Center, replies: Your mother is correct that the scientific evidence points to the brain of people with autism and Asperger's syndrome as being different but not necessarily “disordered.” Studies have shown that the brain in autism develops differently, in terms of both structure and function, compared with more typical patterns of development, and that certain parts of the brain are larger or smaller in people who have autism compared with those who have a more typical brain. One structural difference resides in the brain's corpus callosum, which connects the right and left hemispheres. Most studies show that the corpus callosum is smaller in certain sections in people with autism, which can limit connectivity among brain regions and help explain why people with autism have difficulty integrating complex ideas. An example of a functional difference is in the activity of the ventromedial prefrontal cortex, which is typically active in tasks involving theory of mind—the ability to imagine other people's thoughts and feelings—but is underactive when people with autism perform such tasks. The brain of those with autism also shows advantages. When some people with this condition are asked to complete detail-oriented tasks, such as finding a target shape in a design, they are quicker and more accurate. Additionally, those with autism generally exhibit less activity in the posterior parietal cortex, involved in visual and spatial perception, which suggests that their brain is performing the task more efficiently. © 2014 Scientific American
by Alix Spiegel

It was late, almost 9 at night, when Justin Holden pulled the icy pizza box from the refrigerator at the Brookville Supermarket in Washington, D.C. He stood in front of the open door, scanning the nutrition facts label. A close relative had recently had a heart attack, and in the back of his mind there was this idea stalking him: If he put too much salt in his body, it would eventually kill him. For this reason the information in the label wasn't exactly soothing: 1,110 milligrams of sodium seemed like a lot. But there was even worse-sounding stuff at the bottom of the label. Words like "diglyceride," with a string of letters that clearly had no business sitting next to each other. It suggested that something deeply unnatural was sitting inside the box. "Obviously it's not good for me," the 20ish Holden said. "But, hopefully, I can let it slide in." He tucked the pizza under his arm, and headed one aisle over for a sports drink. Who among us has not had a moment like this? That intimate tête-à-tête with the nutrition label, searching out salt, sugar, fat, trying to discern: How will you affect me? Are you good? Or are you bad? Here's the thing you probably haven't stopped to consider: how the label itself is affecting you. "Labels are not just labels; they evoke a set of beliefs," says Alia Crum, a clinical psychologist who does research at the Columbia Business School in New York. A couple of years ago, Crum found herself considering what seems like a pretty strange question. She wanted to know whether the information conveyed by a nutritional label could physically change what happens to you — "whether these labels get under the skin literally," she says, "and actually affect the body's physiological processing of the nutrients that are consumed." ©2014 NPR
By ALAN SCHWARZ

With more than six million American children having received a diagnosis of attention deficit hyperactivity disorder, concern has been rising that the condition is being significantly misdiagnosed and overtreated with prescription medications. Yet now some powerful figures in mental health are claiming to have identified a new disorder that could vastly expand the ranks of young people treated for attention problems. Called sluggish cognitive tempo, the condition is said to be characterized by lethargy, daydreaming and slow mental processing. By some researchers’ estimates, it is present in perhaps two million children. Experts pushing for more research into sluggish cognitive tempo say it is gaining momentum toward recognition as a legitimate disorder — and, as such, a candidate for pharmacological treatment. Some of the condition’s researchers have helped Eli Lilly investigate how its flagship A.D.H.D. drug might treat it. The Journal of Abnormal Child Psychology devoted 136 pages of its January issue to papers describing the illness, with the lead paper claiming that the question of its existence “seems to be laid to rest as of this issue.” The psychologist Russell Barkley of the Medical University of South Carolina, for 30 years one of A.D.H.D.’s most influential and visible proponents, has claimed in research papers and lectures that sluggish cognitive tempo “has become the new attention disorder.” In an interview, Keith McBurnett, a professor of psychiatry at the University of California, San Francisco, and co-author of several papers on sluggish cognitive tempo, said: “When you start talking about things like daydreaming, mind-wandering, those types of behaviors, someone who has a son or daughter who does this excessively says, ‘I know about this from my own experience.’ They know what you’re talking about.” © 2014 The New York Times Company
Anne Trafton | MIT News Office

Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in place while scanning the crowd, paying special attention to finding a match. A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science. Scientists know much less about this type of attention, known as object-based attention, than spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper. “The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.” In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.
In an op-ed in the Sunday edition of this newspaper, Barbara Ehrenreich, card-carrying liberal rationalist, writes about her own mystical experiences (the subject of her new book), and argues that the numinous deserves more cutting-edge scientific study: I appreciate the spirit (if you will) of this argument, but I am very doubtful as to its application. The trouble is that in its current state, cognitive science has a great deal of difficulty explaining “what happens” when “those wires connect” for non-numinous experience, which is why mysterian views of consciousness remain so potent even among thinkers whose fundamental commitments are atheistic and materialistic. (I’m going to link to the internet’s sharpest far-left scold for a good recent polemic on this front.) That is to say, even in contexts where it’s very easy to identify the physical correlative to a given mental state, and to get the kind of basic repeatability that the scientific method requires — show someone an apple, ask them to describe it; tell them to bite into it, ask them to describe the taste; etc. — there is no kind of scientific or philosophical agreement on what is actually happening to produce the conscious experience of the color “red,” the conscious experience of the crisp McIntosh taste, etc. So if we can’t say how this “normal” conscious experience works, even when we can easily identify the physical stimuli that produce it, it seems exponentially harder to scientifically investigate the invisible, maybe-they-exist and maybe-they-don’t stimuli — be they divine, alien, or panpsychic — that Ehrenreich hypothesizes might produce more exotic forms of conscious experience. © 2014 The New York Times Company
If you know only one thing about violins, it is probably this: A 300-year-old Stradivarius supposedly possesses mysterious tonal qualities unmatched by modern instruments. However, even elite violinists cannot tell a Stradivarius from a top-quality modern violin, a new double-blind study suggests. Like the sound of coughing during the delicate second movement of Beethoven's violin concerto, the finding seems sure to annoy some people, especially dealers who broker the million-dollar sales of rare old Italian fiddles. But it may come as a relief to the many violinists who cannot afford such prices. "There is nothing magical [about old Italian violins], there is nothing that is impossible to reproduce," says Olivier Charlier, a soloist who participated in the study and who plays a fiddle made by Carlo Bergonzi (1683 to 1747). However, Yi-Jia Susanne Hou, a soloist who participated in the study and who until recently played a violin by Bartolomeo Giuseppe Antonio Guarneri "del Gesù" (1698 to 1744), questions whether the test was fair. "Whereas I believe that [the researchers] assembled some of the finest contemporary instruments, I am quite certain that they didn't have some of the finest old instruments that exist," she says. The study marks the latest round in debate over the "secret of Stradivarius." Some violinists, violinmakers, and scientists have thought that Antonio Stradivari (1644 to 1737) and his contemporaries in Cremona, Italy, possessed some secret—perhaps in the varnish or the wood they used—that enabled them to make instruments of unparalleled quality. Yet, for decades researchers have failed to identify a single physical characteristic that distinguishes the old Italians from other top-notch violins. The varnish is varnish; the wood (spruce and maple) isn't unusual. Moreover, for decades tests have shown that listeners cannot tell an old Italian from a modern violin. © 2014 American Association for the Advancement of Science
By ANA GANTMAN and JAY VAN BAVEL TAKE a close look at your breakfast. Is that Jesus staring out at you from your toast? Such apparitions can be as lucrative as they are seemingly miraculous. In 2004, a Florida woman named Diane Duyser sold a decade-old grilled cheese sandwich that bore a striking resemblance to the Virgin Mary. She got $28,000 for it on eBay. The psychological phenomenon of seeing something significant in an ambiguous stimulus is called pareidolia. Virgin Mary grilled cheese sandwiches and other pareidolia remind us that almost any object is open to multiple interpretations. Less understood, however, is what drives some interpretations over others. In a forthcoming paper in the journal Cognition, we hope to shed some light on that question. In a series of experiments, we examined whether awareness of perceptually ambiguous stimuli was enhanced by the presence of moral content. We quickly flashed strings of letters on a computer screen and asked participants to indicate whether they believed each string formed a word or not. To ensure that the letter strings were perceptually ambiguous, we flashed them for approximately 40 to 70 milliseconds. (When they were presented for too long, people easily saw all the letter strings and demonstrated close to 100 percent accuracy. When they were presented too quickly, people were unable to see the words and performed “at chance,” around 50 percent accuracy.) Some of the strings of letters we flashed were words; others were not. Importantly, some of the words we flashed had moral content (virtue, steal, God) and others did not (virtual, steel, pet). Over the course of three experiments, we found that participants correctly identified strings of letters as words more often when they formed moral words (69 percent accuracy) than when they formed nonmoral words (65 percent accuracy). This suggested that moral content gave a “boost” to perceptually ambiguous stimuli — a shortcut to conscious awareness. 
We call this phenomenon the “moral pop-out effect.” © 2014 The New York Times Company
By Karen Kaplan There are lies, damn lies – and the lies that we tell for the sake of others when we are under the influence of oxytocin. Researchers found that after a squirt of the so-called love hormone, volunteers lied more readily about their results in a game in order to benefit their team. Compared with control subjects who were given a placebo, those on oxytocin told more extreme lies and told them with less hesitation, according to a study published Monday in Proceedings of the National Academy of Sciences. Oxytocin is a brain hormone that is probably best known for its role in helping mothers bond with their newborns. In recent years, scientists have been examining its role in monogamy and in strengthening trust and empathy in social groups. Sometimes, doing what’s good for the group requires lying. (Think of parents who fake their addresses to get their kids into a better school.) A pair of researchers from Ben-Gurion University of the Negev in Israel and the University of Amsterdam figured that oxytocin would play a role in this type of behavior, so they set up a series of experiments to test their hypothesis. The researchers designed a simple computer game that asked players to predict whether a virtual coin toss would wind up heads or tails. After seeing the outcome on a computer screen, players were asked to report whether their prediction was correct or not. In some cases, making the right prediction would earn a player’s team a small payment (the equivalent of about 40 cents). In other cases, a correct prediction would cost the team the same amount, and sometimes there was no payoff or cost. Los Angeles Times Copyright 2014