Chapter 18. Attention and Higher Cognition
By DANIEL GOLEMAN

Which will it be — the berries or the chocolate dessert? Homework or the Xbox? Finish that memo, or roam Facebook? Such quotidian decisions test a mental ability called cognitive control, the capacity to maintain focus on an important choice while ignoring other impulses. Poor planning, wandering attention and trouble inhibiting impulses all signify lapses in cognitive control.

Now a growing stream of research suggests that strengthening this mental muscle, usually with exercises in so-called mindfulness, may help children and adults cope with attention deficit hyperactivity disorder and its adult equivalent, attention deficit disorder.

The studies come amid growing disenchantment with the first-line treatment for these conditions: drugs. In 2007, researchers at the University of California, Los Angeles, published a study finding that the incidence of A.D.H.D. among teenagers in Finland, along with difficulties in cognitive functioning and related emotional disorders like depression, was virtually identical to the rate among teenagers in the United States. The real difference? Most adolescents with A.D.H.D. in the United States were taking medication; most in Finland were not. “It raises questions about using medication as a first line of treatment,” said Susan Smalley, a behavior geneticist at U.C.L.A. and the lead author.

In a large study published last year in The Journal of the American Academy of Child & Adolescent Psychiatry, researchers reported that while most young people with A.D.H.D. benefit from medications in the first year, these effects generally wane by the third year, if not sooner. “There are no long-term, lasting benefits from taking A.D.H.D. medications,” said James M. Swanson, a psychologist at the University of California, Irvine, and an author of the study. “But mindfulness seems to be training the same areas of the brain that have reduced activity in A.D.H.D.”

© 2014 The New York Times Company
By DOLLY CHUGH, KATHERINE L. MILKMAN and MODUPE AKINOLA

IN the world of higher education, we professors like to believe that we are free from the racial and gender biases that afflict so many other people in society. But is this self-conception accurate? To find out, we conducted an experiment.

A few years ago, we sent emails to more than 6,500 randomly selected professors from 259 American universities. Each email was from a (fictional) prospective out-of-town student whom the professor did not know, expressing interest in the professor’s Ph.D. program and seeking guidance. These emails were identical and written in impeccable English, varying only in the name of the student sender. The messages came from students with names like Meredith Roberts, Lamar Washington, Juanita Martinez, Raj Singh and Chang Huang, names that earlier research participants consistently perceived as belonging to a white, black, Hispanic, Indian or Chinese student. In total, we used 20 different names in 10 different race-gender categories (e.g., white male, Hispanic female).

On a Monday morning, the emails went out — one email per professor — and then we waited to see which professors would write back to which students. We understood, of course, that some professors would naturally be unavailable or uninterested in mentoring. But we also knew that the average treatment of any particular type of student should not differ from that of any other — unless professors were deciding (consciously or not) which students to help on the basis of their race and gender. (This “audit” methodology has long been used to study intentional and unintentional bias in real-world decision making, as it allows researchers to standardize much about the decision environment.)

What did we discover? First comes the fairly good news, which we reported in a paper in Psychological Science. Despite not knowing the students, 67 percent of the faculty members responded to the emails, and remarkably, 59 percent of the responders even agreed to meet on the proposed date with a student about whom they knew little and who did not even attend their university. (We immediately wrote back to cancel those meetings.)

© 2014 The New York Times Company
By Helen Thomson

If you liked Inception, you're going to love this. People have been given the ability to control their dreams after a quick zap to their head while they sleep.

Lucid dreaming is an intriguing state of sleep in which a person becomes aware that they are dreaming. As a result, they gain some element of control over what happens in their dream – for example, the dreamer could make a threatening character disappear or decide to fly to an exotic location. Researchers are interested in lucid dreaming because it can help probe what happens when we switch between conscious states, going from little to full awareness.

In 2010, Ursula Voss at the J.W. Goethe University in Frankfurt, Germany, and her colleagues trained volunteers to move their eyes in a specific pattern during a lucid dream. By scanning their brains while they slept, Voss was able to show that lucid dreams coincided with elevated gamma brainwaves. This kind of brainwave occurs when groups of neurons synchronise their activity, firing together about 40 times a second. The gamma waves occurred mainly in areas situated towards the front of the brain, called the frontal and temporal lobes.

Perchance to dream

The team wanted to see whether gamma brainwaves caused the lucid dreams, or whether both were side effects of some other change. So Voss and her colleagues began another study in which they stimulated the brains of 27 sleeping volunteers, using a non-invasive technique called transcranial alternating current stimulation.

© Copyright Reed Business Information Ltd.
By Sam Kean

For most of recorded history, human beings situated the mind — and by extension the soul — not within the brain but within the heart. When preparing mummies for the afterlife, for instance, ancient Egyptian priests removed the heart in one piece and preserved it in a ceremonial jar; in contrast, they scraped out the brain through the nostrils with iron hooks, tossed it aside for animals, and filled the empty skull with sawdust or resin. (This wasn’t a snarky commentary on their politicians, either—they considered everyone’s brain useless.)

Most Greek thinkers also elevated the heart to the body’s summa. Aristotle pointed out that the heart had thick vessels to shunt messages around, whereas the brain had wispy, effete wires. The heart furthermore sat in the body’s center, appropriate for a commander, while the brain sat in exile up top. The heart developed first in embryos, and it responded in sync with our emotions, pounding faster or slower, while the brain just sort of sat there. Ergo, the heart must house our highest faculties.

Meanwhile, though, some physicians had always had a different perspective on where the mind came from. They’d simply seen too many patients get beaned in the head and lose some higher faculty to think it all a coincidence. Doctors therefore began to promote a brain-centric view of human nature. And despite some heated debates over the centuries—especially about whether the brain had specialized regions or not—by the 1600s most learned men had enthroned the mind within the brain.

A few brave scientists even began to search for that anatomical El Dorado: the exact seat of the soul within the brain. One such explorer was Swedish philosopher Emanuel Swedenborg, one of the oddest ducks to ever waddle across the stage of history.

© 2014 Salon Media Group, Inc.
By Indre Viskontas and Chris Mooney

When the audio of Los Angeles Clippers owner Donald Sterling telling a female friend not to "bring black people" to his team's games hit the internet, the condemnations were immediate. It was clear to all that Sterling was a racist, and the punishment was swift: the NBA banned him for life. It was, you might say, a pretty straightforward case.

When you take a look at the emerging science of what motivates people to behave in a racist or prejudiced way, though, matters quickly grow complicated. In fact, if there's one cornerstone finding when it comes to the psychological underpinnings of prejudice, it's that out-and-out or "explicit" racists—like Sterling—are just one part of the story. Perhaps far more common are cases of so-called "implicit" prejudice, in which people harbor subconscious biases, of which they may not even be aware, that come out in controlled psychology experiments. Much of the time, these are not the sort of people we would normally think of as racists.

"They might say they think it's wrong to be prejudiced," explains New York University neuroscientist David Amodio, an expert on the psychology of intergroup bias. Amodio says that white participants in his studies "might write down on a questionnaire that they are positive in their attitudes towards black people…but when you give them a behavioral measure, of how they respond to pictures of black people, compared with white people, that's when we start to see the effects come out."

Welcome to the world of implicit racial biases, which research suggests are all around us, and which can be very difficult for even the most well-intentioned person to control.

©2014 Mother Jones
By Diana Kwon

Would you rather have $50 now or $100 two weeks from now? Even though the $100 is obviously the better choice, many people will opt for the $50. Both humans and animals show this tendency to place lower value on later rewards, a behavior known as temporal discounting. High rates of temporal discounting can lead to impulsive behavior, and at its worst, too much of this “now bias” is associated with pathological gambling, attention deficit hyperactivity disorder and drug addiction.

What determines whether you’ll be an impulsive decision-maker? New evidence suggests that for women, estrogen levels might be a factor. In a recent study published in the Journal of Neuroscience, Charlotte Boettiger and her team at the University of North Carolina revealed that greater increases in estrogen levels across the menstrual cycle led to less impulsive decision making.

The researchers tested the “now bias” in 87 women between the ages of 18 and 40 at two different points in their menstrual cycle – in the menstrual phase, when estrogen levels are low, and the follicular phase, when estrogen levels are high. Participants were given a delay-discounting task in which they had to choose between two options: a certain sum of money at a later date or a discounted amount immediately (e.g., $100 in one week or $70 today). Subjects showed a greater bias toward the immediate choice during the menstrual phase of the cycle, when estrogen levels were low.

Estrogen levels vary between women and can change with factors like stress and age. When the researchers measured amounts of estradiol (the dominant form of estrogen) from the saliva of a subset of the participants at the two points in their menstrual cycles, they found that not all of them showed a detectable increase. Only those with a measurable rise in estradiol showed a significant change in impulsive decision-making.

© 2014 Scientific American
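The trade-off in the delay-discounting task can be made concrete with a standard hyperbolic-discounting model. This is a minimal illustrative sketch, not the model or parameters used in the study: the discount rate `k` values below are hypothetical, chosen only to show how a steep discounter and a patient discounter diverge on the article's $50-now-versus-$100-later choice.

```python
# Hyperbolic discounting: present value V = A / (1 + k * D), where A is the
# reward amount, D the delay, and k an individual's discount rate.
# (Illustrative model; k values here are hypothetical, not from the study.)

def discounted_value(amount, delay_days, k):
    """Hyperbolic present value of a reward received after a delay."""
    return amount / (1 + k * delay_days)

def prefers_immediate(now_amount, later_amount, delay_days, k):
    """True if the immediate option beats the delayed one at discount rate k."""
    return now_amount > discounted_value(later_amount, delay_days, k)

# A steep discounter (k = 0.1/day) values $100 in 14 days at ~$41.67,
# so the $50 now wins:
print(prefers_immediate(50, 100, 14, k=0.1))   # True
# A patient discounter (k = 0.01/day) values it at ~$87.72 and waits:
print(prefers_immediate(50, 100, 14, k=0.01))  # False
```

A higher "now bias" corresponds to a larger `k`, so the article's finding amounts to saying that a measurable rise in estradiol was associated with behavior consistent with a smaller `k`.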
By Scott Barry Kaufman

The latest neuroscience of aesthetics suggests that the experience of visual, musical, and moral beauty all recruit the same part of the “emotional brain”: field A1 of the medial orbitofrontal cortex (mOFC). But what about mathematics? Plato believed that mathematical beauty was the highest form of beauty, since it is derived from the intellect alone and is concerned with universal truths. Similarly, the art critic Clive Bell noted:

“Art transports us from the world of man’s activity to a world of aesthetic exaltation. For a moment we are shut off from human interests; our anticipations and memories are arrested; we are lifted above the stream of life. The pure mathematician rapt in his studies knows a state of mind which I take to be similar, if not identical. He feels an emotion for his speculations which arises from no perceived relation between them and the lives of men, but springs, inhuman or super-human, from the heart of an abstract science. I wonder, sometimes, whether the appreciators of art and of mathematical solutions are not even more closely allied.”

A new study suggests that Bell might be right. Semir Zeki and colleagues recruited 16 mathematicians at the postgraduate or postdoctoral level as well as 12 non-mathematicians. All participants viewed a series of mathematical equations in the fMRI scanner and were asked to rate the beauty of the equations as well as their understanding of each equation. After they were out of the scanner, they filled out a questionnaire in which they reported their level of understanding of each equation as well as their emotional experience viewing the equations.

© 2014 Scientific American
By Felicity Muth

Imagine that you walk into a room where three people are sitting, facing you. Their faces are oriented towards you, but all three of them have their eyes directed towards the left side of the room. You would probably follow their gaze to the point where they were looking (if you weren’t too unnerved to take your eyes off these odd people). As a social species, we are particularly attuned to social cues like following others’ gazes.

However, we’re not the only animals that follow the gazes of members of our species: great apes, monkeys, lemurs, dogs, goats, birds and even tortoises follow each other’s gazes too. But we don’t all follow gazes to the same extent. One species of macaque (the stumptailed macaque) follows gazes a lot more than other macaque species, bonobos do it more than chimpanzees, and human children follow gazes a lot more than the other great apes do.

Species also differ in their understanding of what the other animal is looking at. For example, if we saw a person gazing at a point, and between them and this point was a barrier, whether the barrier was solid or transparent would affect how far we followed their gaze. This is because we imagine ourselves in their physical position and consider what they might be able to see. Bonobos and chimpanzees can also do this, but orang-utans cannot. Like us, great apes and Old World monkeys will also follow a gaze but then look back at the gazer if they don’t see what that individual is gazing at (‘are you going crazy, or am I just not seeing what you’re seeing?’). Capuchin and spider monkeys don’t seem to do this.

So, even though a lot of animals are capable of following the gazes of others, there is a lot of variation in the extent and flexibility of this behaviour. A recent study looked to see whether chimpanzees, bonobos, orang-utans and humans would be more likely to follow their own species’ gazes than those of another species.

© 2014 Scientific American
By Brian Owens

If you think you know what you just said, think again. People can be tricked into believing they have just said something they did not, researchers report this week.

The dominant model of how speech works is that it is planned in advance — speakers begin with a conscious idea of exactly what they are going to say. But some researchers think that speech is not entirely planned, and that people know what they are saying in part through hearing themselves speak. So cognitive scientist Andreas Lind and his colleagues at Lund University in Sweden wanted to see what would happen if someone said one word but heard themselves saying another. “If we use auditory feedback to compare what we say with a well-specified intention, then any mismatch should be quickly detected,” he says. “But if the feedback is instead a powerful factor in a dynamic, interpretative process, then the manipulation could go undetected.”

In Lind’s experiment, participants took a Stroop test — in which a person is shown, for example, the word ‘red’ printed in blue and is asked to name the colour of the type (in this case, blue). During the test, participants heard their responses through headphones. The responses were recorded so that Lind could occasionally play back the wrong word, giving participants auditory feedback of their own voice saying something different from what they had just said. Lind chose the words ‘grey’ and ‘green’ (grå and grön in Swedish) to switch, as they sound similar but have different meanings.

© 2014 Nature Publishing Group
By Bethany Brookshire

When you are waiting with a friend to cross a busy intersection, car engines running, horns honking and the city humming all around you, your brain is busy processing all those sounds. Somehow, though, the human auditory system can filter out the extraneous noise and allow you to hear what your friend is telling you. But if you tried to ask your iPhone a question, Siri might have a tougher time.

A new study shows how the mammalian brain can distinguish the signal from the noise. Brain cells in the primary auditory cortex can both turn down the noise and increase the gain on the signal. The results show how the brain processes sound in noisy environments, and might eventually help in the development of better voice recognition devices, including improvements to cochlear implants for those with hearing loss. Not to mention getting Siri to understand you on a chaotic street corner.

Nima Mesgarani and colleagues at the University of Maryland in College Park were interested in how mammalian brains separate speech from background noise. Ferrets have an auditory system that is extremely similar to ours, so the researchers looked at the A1 area of the ferret cortex, which corresponds to our primary auditory region. Equipped with carefully implanted electrodes, the alert ferrets listened to both ferret sounds and parts of human speech. The ferret sounds and speech were presented alone, against a background of white noise, against pink noise (noise with equal energy at all octaves, which sounds lower in pitch than white noise) and against reverberation. The researchers then took the neural signals recorded from the electrodes and used a computer simulation to reconstruct the sounds the animal was hearing.

In results published April 21 in Proceedings of the National Academy of Sciences, the researchers show the ferret brain is quite good at detecting both ferret sounds and speech in all three noisy conditions. “We found that the noise is drastically decreased, as if the brain of the ferret filtered it out and recovered the cleaned speech,” Mesgarani says.

© Society for Science & the Public 2000 - 2013.
Intelligence is hard to test, but one aspect of being smart is self-control, and a version of the old shell game that works for many species suggests that brain size is very important.

When it comes to animal intelligence, says Evan MacLean, co-director of Duke University’s Canine Cognition Center, don’t ask which species is smarter. “Smarter at what?” is the right question. Many different tasks, requiring many different abilities, are given to animals to measure cognition, and narrowing the question takes on particular importance when the comparisons are across species.

So Dr. MacLean, Brian Hare and Charles Nunn, also Duke scientists who study animal cognition, organized a worldwide effort by 58 scientists to test 36 species on a single ability: self-control. This capacity is thought to be part of thinking because it enables animals to override a strong, nonthinking impulse and to solve a problem that requires some analysis of the situation in front of them.

The testing program, which took several international meetings to arrange and about seven years to complete, looked at two common tasks that are accepted ways to judge self-control. It then tried to correlate how well the animals did on the tests with other measures, like brain size, diet and the size of their normal social groups.

Unsurprisingly, the great apes did very well. Dogs and baboons did pretty well. And squirrel monkeys, marmosets and some birds were among the worst performers. Surprisingly, absolute brain size turned out to be a much better predictor of success than relative brain size, which has been thought to be a good indication of intelligence. Social group size was not significant, but variety of diet was.

The paper, published last week in the journal Proceedings of the National Academy of Sciences, is accompanied online by videos showing the animals doing what looks for all the world like the shell game, in which a player has to guess where the pea is.

© 2014 The New York Times Company
By Christof Koch

Quantum physicist Wolfgang Pauli expressed disdain for sloppy, nonsensical theories by denigrating them as “not even wrong,” meaning they were just empty conjectures that could be quickly dismissed. Unfortunately, many remarkably popular theories of consciousness are of this ilk—the idea, for instance, that our experiences can somehow be explained by the quantum theory that Pauli himself helped to formulate in the early 20th century. An even more far-fetched idea holds that consciousness emerged only a few thousand years ago, when humans realized that the voices in their head came not from the gods but from their own internal spoken narratives.

Not every theory of consciousness, however, can be dismissed as just so much intellectual flapdoodle. During the past several decades, two distinct frameworks for explaining what consciousness is and how the brain produces it have emerged, each compelling in its own way. Each framework seeks to explain a vast storehouse of observations from both neurological patients and sophisticated laboratory experiments.

One of these, the Integrated Information Theory devised by psychiatrist and neuroscientist Giulio Tononi, which I have described before in these pages [see “Ubiquitous Minds”; Scientific American Mind, January/February 2014], uses a mathematical expression to represent conscious experience and then derives predictions about which circuits in the brain are essential to produce these experiences. [Full disclosure: I have worked with Tononi on this theory.] In contrast, the Global Workspace Model of consciousness moves in the opposite direction. Its starting point is behavioral experiments that manipulate the conscious experience of people in a very controlled setting. It then seeks to identify the areas of the brain that underlie these experiences.

© 2014 Scientific American
By LAURENCE STEINBERG

I’M not sure whether it’s a badge of honor or a mark of shame, but a paper I published a few years ago is now ranked No. 8 on a list of studies that other psychologists would most like to see replicated. Good news: People find the research interesting. Bad news: They don’t believe it.

The paper in question, written with my former student Margo Gardner, appeared in the journal Developmental Psychology in July 2005. It described a study in which we randomly assigned subjects to play a video driving game, either alone or with two same-age friends watching them. The mere presence of peers made teenagers take more risks and crash more often, but no such effect was observed among adults.

I find my colleagues’ skepticism surprising. Most people recall that as teenagers, they did far more reckless things when with their friends than when alone. Data from the Federal Bureau of Investigation indicate that many more juvenile crimes than adult crimes are committed in groups. And driving statistics conclusively show that having same-age passengers in the car substantially increases the risk of a teen driver’s crashing but has no similar impact when an adult is behind the wheel.

Then again, I’m aware that our study challenged many psychologists’ beliefs about the nature of peer pressure, for it showed that the influence of peers on adolescent risk taking doesn’t rely solely on explicit encouragement to behave recklessly. Our findings also undercut the popular idea that the higher rate of real-world risk taking in adolescent peer groups is a result of reckless teenagers’ being more likely to surround themselves with like-minded others.

My colleagues and I have replicated our original study of peer influences on adolescent risk taking several times since 2005. We have also shown that the reason teenagers take more chances when their peers are around is partly because of the impact of peers on the adolescent brain’s sensitivity to rewards. In a study of people playing our driving game, my colleague Jason Chein and I found that when teens were with people their own age, their brains’ reward centers became hyperactivated, which made them more easily aroused by the prospect of a potentially pleasurable experience. This, in turn, inclined teenagers to pay more attention to the possible benefits of a risky choice than to the likely costs, and to make risky decisions rather than play it safe. Peers had no such effect on adults’ reward centers, though.

© 2014 The New York Times Company
Does reading faster mean reading better? That’s what speed-reading apps claim, promising to boost not just the number of words you read per minute, but also how well you understand a text. There’s just one problem: The same thing that speeds up reading actually gets in the way of comprehension, according to a new study.

When you read at your natural pace, your eyes move back and forth across a sentence, rather than plowing straight through to the end. Apps like Spritz or the aptly named Speed Read are built around the idea that these eye movements, called saccades, are a redundant waste of time. It’s more efficient, their designers claim, to present words one at a time in a fixed spot on a screen, discouraging saccades and helping you get through a text more quickly. This method, called rapid serial visual presentation (RSVP), has been controversial since the 1980s, when tests showed it impaired comprehension, though researchers weren’t quite sure why.

With a new crop of speed-reading products on the market, psychologists decided to dig a bit deeper and uncovered a simple explanation for RSVP’s flaw: Every so often, we need to scan backward and reread for a better grasp of the material. Researchers demonstrated that need by presenting 40 college students with ambiguous, unpunctuated sentences (“While the man drank the water that was clear and cold overflowed from the toilet”) while following their subjects’ gaze with an eye-tracking camera. Half the time, the team crossed out words participants had already read, preventing them from rereading (“xxxxx xxx xxx drank the water …”).

Following up with basic yes-no questions about each sentence’s content, they found that comprehension dropped by about 25% in trials that blocked rereading versus those that didn’t, the researchers report online this month in Psychological Science. Crucially, the drop was about the same when subjects could have, but simply hadn’t, reread parts of a sentence. Nor did the results differ much between the ambiguous sentences and their less confusing counterparts (“While the man slept the water …”). Turns out rereading isn’t a waste of time—it’s essential for understanding.

© 2014 American Association for the Advancement of Science.
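The RSVP method the apps use is mechanically simple: split the text into words and flash each one at a fixed point for a fixed interval, leaving no earlier word on screen to saccade back to. Here is a minimal sketch of that loop; the function name `rsvp` and the default rate are illustrative choices, not taken from any particular app, and the `show`/`sleep` parameters exist only so the timing and display can be swapped out.

```python
# Minimal rapid serial visual presentation (RSVP) loop: one word at a time,
# at a fixed rate, so the reader cannot look back at earlier words.
# (Illustrative sketch; names and defaults are not from any real app.)
import time

def rsvp(text, wpm=300, show=print, sleep=time.sleep):
    """Present each word of `text` alone, at `wpm` words per minute."""
    interval = 60.0 / wpm              # seconds each word stays on screen
    for word in text.split():
        show(word)                     # a real app would redraw a fixed spot
        sleep(interval)

# Usage: rsvp("While the man drank the water overflowed", wpm=600)
```

The study's point is visible in the structure itself: once `show(word)` replaces the previous word, the reader has no way to perform the backward glances that the eye-tracking experiment found are essential for comprehension.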
By Stephen L. Macknik and Susana Martinez-Conde

The Best Illusion of the Year Contest brings scientific and popular attention to perceptual oddities. Anyone can submit an illusion to next year's contest; see http://illusionoftheyear.com/submission-instructions for the rules.

Decked out in a mask, cape and black spandex, a fit young man leaps onto the stage, one hand raised high, and bellows, “I am Japaneeeese Bat-Maaaaaan!” in a thick accent. The performer is neither actor nor acrobat. He is a mathematician named Jun Ono, hailing from Meiji University in Japan. Ono's single bound, front and center, at the Philharmonic Center for the Arts in Naples, Fla. (now called Artis-Naples), was the opening act of the ninth Best Illusion of the Year Contest, held May 13, 2013. Four words into the event, we knew Ono had won.

Aside from showcasing new science, the contest celebrates our brain's wonderful and mistaken sense that we can accurately see, smell, hear, taste and touch the world around us. In reality, accuracy is not the brain's forte, as the illusion creators competing each year will attest. Yes, there is a real world out there, and you do perceive (some of) the events that occur around you, but you have never actually lived in reality. Instead your brain gathers pieces of data from your sensory systems—some of which are quite subjective or frankly wrong—and builds a simulation of the world. This simulation, which some call consciousness, becomes the universe in which you live. It is the only thing you have ever perceived. Your brain uses incomplete and flawed information to build this mental model and relies on quirky neural algorithms to often—but not always—obviate the flaws.

Let us take a spin through some of the world's top illusions and their contributions to the science of perception. (To see videos of these illusions, see ScientificAmerican.com/may2014/illusions.)

© 2014 Scientific American
It looks like a standardized test question: Is the sum of the two numbers on the left or the single number on the right larger? Rhesus macaques that have been trained to associate numerical values with symbols can get the answer right, even if they haven’t passed a math class. The finding doesn’t just reveal a hidden talent of the animals—it also helps show how the mammalian brain encodes the values of numbers.

Previous research has shown that chimpanzees can add single-digit numbers. But scientists haven’t explained exactly how, in the human or the monkey brain, numbers are represented or this addition is carried out. Now, a new study helps begin to answer those questions.

Neurobiologist Margaret Livingstone of Harvard Medical School in Boston and her colleagues had already taught three rhesus macaques (Macaca mulatta) in the lab to associate the Arabic numerals 0 through 9 and 15 select letters with the values zero through 25. When given the choice between two symbols, the monkeys reliably chose the larger to get a correspondingly larger number of droplets of water, apple juice, or orange soda as a reward.

To test whether the monkeys could add these values, the researchers began giving them a choice between a sum and a single symbol rather than two single symbols. Within 4 months, the monkeys had learned how the task worked and were able to effectively add two symbols and compare the sum to a third, single symbol. To ensure that the monkeys hadn’t simply memorized every possible combination of symbols and associated a value with each combination—which wouldn’t be true addition—Livingstone’s team next taught the animals an entirely new set of symbols, Tetris-like blocks rather than letters and numbers. With the new symbols, the monkeys were again able to add, this time calculating the value of combinations they’d never seen before and confirming the ability to do basic addition, the team reports online today in the Proceedings of the National Academy of Sciences.

© 2014 American Association for the Advancement of Science.
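The structure of the monkeys' task can be sketched as a lookup-and-compare problem. This is only an illustration of the task's logic: the article says numerals 0 through 9 and 15 letters covered values up to 25, but the exact letter-to-value pairing is not given, so the letter values below (and the helper names) are hypothetical.

```python
# Sketch of the sum-versus-single-symbol choice task.
# (Letter-value assignments below are hypothetical; the study's actual
# mapping of 15 letters to values up to 25 is not specified in the article.)

SYMBOL_VALUES = {str(d): d for d in range(10)}     # numerals '0'..'9'
SYMBOL_VALUES.update({"X": 10, "Y": 15, "Z": 25})  # hypothetical letters

def side_value(symbols):
    """Total value of the symbols shown on one side of the display."""
    return sum(SYMBOL_VALUES[s] for s in symbols)

def better_side(left, right):
    """Side a reward-maximizing chooser should pick (ties go right)."""
    return "left" if side_value(left) > side_value(right) else "right"

# Is 4 + 8 on the left larger than the single symbol 'X' (worth 10)?
print(better_side(["4", "8"], ["X"]))  # left
```

Memorizing every pairing of symbols would amount to a giant lookup table over combinations; swapping in an unfamiliar symbol set, as the researchers did, rules that out and shows the animals were computing something like `side_value` over the new symbols.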
Forget cellphones; rambunctious friends may be the riskiest driver distraction for teens, according to a new study. Researchers installed video and G-force recorders in the vehicles of 52 newly licensed high school students for 6 months. They found that certain distractions, such as fiddling with the car’s controls and eating, were not strongly related to serious incidents, which included collisions and evasive maneuvers. However, when passengers in the car were engaged in loud conversation, teen drivers were six times more likely to have a serious incident. What’s more, horseplay increased risk by a factor of three whereas cellphone use only doubled it, the team reported online this week in the Journal of Adolescent Health. Forty-three states restrict newly licensed drivers from having more than one other teen in the car, and the study authors say their data suggest that's good policy. © 2014 American Association for the Advancement of Science.
By CLYDE HABERMAN Her surname in Italian means “slave,” and is pronounced skee-AH-vo. Grim as it may be, the word could apply to Theresa Marie Schiavo, even with its Americanized pronunciation: SHY-vo. For 15 years, Terri Schiavo was effectively a slave — slave to an atrophied brain that made her a prisoner in her body, slave to bitter fighting between factions of her family, slave to seemingly endless rounds of court hearings, slave to politicians who injected themselves into her tragedy and turned her ordeal into a national morality play.

To this day, the name Schiavo is virtually a synonym for epic questions about when life ends and who gets to make that determination. It would be nice to believe that since Ms. Schiavo’s death nine years ago, America has found clear answers. Of course it has not, as is evident in Retro Report’s exploration of the Schiavo case, the latest video documentary in a weekly series that examines major news stories from the past and their aftermath.

Ms. Schiavo, a married woman living in St. Petersburg, Fla., was 26 years old when she collapsed on Feb. 25, 1990. While her potassium level was later found to be abnormally low, an autopsy drew no conclusion as to why she had lost consciousness. Whatever the cause, her brain was deprived of oxygen long enough to leave her in a “persistent vegetative state,” a condition that is not to be confused with brain death. She could breathe without mechanical assistance. But doctors concluded that she was incapable of thought or emotion. After her death on March 31, 2005, an autopsy determined that the brain damage was irreversible.

Between her collapse — when she “departed this earth,” as her grave marker puts it — and her death — when she became “at peace” — the nation bore witness to an increasingly acrimonious battle between her husband, Michael Schiavo, and her parents, Robert and Mary Schindler. Mr. Schiavo wanted to detach the feeding tube that gave her nourishment. Terri never would have wanted to be kept alive that way, he said. The Schindlers insisted that the tube be kept in place. That, they said, is what their daughter would have wanted. To Mr. Schiavo, the woman he had married was gone. To the Schindlers, a sentient human was still in that body. © 2014 The New York Times Company
Link ID: 19512 - Posted: 04.21.2014
By DENISE GRADY People with severe brain injuries sometimes emerge from a coma awake but unresponsive, leaving families with painful questions. Are they aware? Can they think and feel? Do they have any chance of recovery? A new study has found that PET scans may help answer these wrenching questions. It found that a significant number of people labeled vegetative had received an incorrect diagnosis and actually had some degree of consciousness and the potential to improve. Previous studies using electroencephalogram machines and M.R.I. scanners have also found signs of consciousness in supposedly vegetative patients.

“I think these patients are kind of neglected by both medicine and society,” said Dr. Steven Laureys, an author of the new study and the director of the Coma Science Group at the University of Liège in Belgium. “Many of them don’t even see a medical doctor or a specialist for years. So I think it’s very important to ask the question, are they unconscious?”

In the United States, 100,000 to 300,000 people are thought to be minimally conscious, and an additional 25,000 are vegetative. In Belgium, the combined incidence of the two conditions is about 150 new cases per year, Dr. Laureys said. An article about the new research was published on Tuesday in The Lancet.

Dr. Laureys and his colleagues studied 122 patients with brain injuries, including 41 who had been declared vegetative — awake but with no behavioral signs of awareness. People who are vegetative for a year are thought to have little or no chance of recovering, and the condition can become grounds for withdrawing medical treatment. Terri Schiavo, in a vegetative state for 15 years, died in 2005 in Florida after courts allowed the removal of her feeding tube. © 2014 The New York Times Company
by Bethany Brookshire Every hipster knows that something is only cool before it becomes popular. There’s no point in liking a band once it hits the big time. That shirt is no good once it’s no longer ironic. And it’s certainly not enough to go clean shaven or grow a short beard — that’s much too mainstream. Recent years have seen a resurgence of moustaches, mutton chops and Fu Manchus. A style that really stands out sticks it to conformity.

It turns out that when people buck the facial hair trend, they may end up making themselves more attractive. A new study published April 16 in Biology Letters shows that either clean-shaven or fully bearded looks become more attractive when they are rare in the population. The study suggests that humans may practice what’s called negative frequency-dependent selection — people rate rare looks as more attractive than they might otherwise. But when we try to figure out why, the interpretations can get pretty hairy.

In every population, there is variation, both in genetics and in how individuals look. But at first blush, this variation doesn’t make a lot of sense. If one particular look is the most attractive and best for the population, sexual selection should make a species converge on a single, popular look. For example, if the best male guppies have stripes, soon all male guppies will have stripes, as females will only mate with stripey males. But in nature, this is clearly not the case.

Guppies come in a wild variety of patterns, and so do humans. In guppies, this variation is a result of negative frequency-dependent selection: Female guppies prefer male guppies that look unusual compared to others, rather than guppies that share common features. This helps keep looks and genes variable, a distinct advantage for the species. So an individual guppy’s attractiveness doesn’t just depend on his shining character, it depends on how rare his looks are in relation to other guppies. © Society for Science & the Public 2000 - 2013
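The guppy logic above can be illustrated with a toy simulation of negative frequency-dependent selection. The trait names, population size, and fitness rule below are illustrative assumptions, not the study's model; the point is only that when rare looks are preferred, neither look can take over, so variation persists.

```python
import random

def attractiveness(trait, population):
    """Rarer looks score higher: attractiveness falls with frequency."""
    freq = population.count(trait) / len(population)
    return 1.0 - freq

def next_generation(population, rng):
    # Parents are sampled in proportion to attractiveness, so rare
    # traits are over-represented among offspring.
    weights = [attractiveness(t, population) for t in population]
    return rng.choices(population, weights=weights, k=len(population))

rng = random.Random(0)
pop = ["beard"] * 90 + ["clean"] * 10   # start with a common majority
for _ in range(50):
    pop = next_generation(pop, rng)
print(pop.count("beard"), pop.count("clean"))
```

Run it and the 90/10 starting split does not march toward 100/0, as it would if the majority look were simply the most attractive; instead both looks persist, hovering near a balance, because whichever look becomes rare gains the advantage.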