Chapter 15. Language and Our Divided Brain
By BRETT MICHAEL DYKES JEFFERSON, La. — “He liked to hit people,” Carlene Dempsey said flatly. “He didn’t care if he got his bell rung.” She was referring to her Falstaffian husband, Tom Dempsey, the former N.F.L. kicker born without toes on his right foot who in November 1970 — after a long night of drinking and debauchery in the French Quarter of New Orleans — set the league record for the longest field goal in a regular-season game. The 63-yard kick lifted the New Orleans Saints to a 19-17 victory over the Detroit Lions, and in the process helped transform Dempsey into a folk hero in the city hosting the Super Bowl on Sunday, the rare Saints player to hold a prominent N.F.L. record before the Sean Payton era. Now 66, Dempsey sat recently with his wife at the dining room table in the modest 1,500-square-foot home they share with their daughter, Ashley, and their grandson, Dylan, in this New Orleans suburb. It quickly became apparent that when reflecting upon his football career, Dempsey seemed to take more delight discussing the hits he had delivered than the kicks he had made. He wistfully recalled how, in high school and college, if his coaches wanted someone on the opposing team knocked out, they usually called on him to deliver a teeth-rattling hit. And his eyes twinkled with glee when he talked about how the coaches he played for over the course of his 10-year N.F.L. career with the Saints, the Eagles, the Rams, the Oilers and the Bills would sometimes call on him to be the wedge buster — football’s version of a kamikaze pilot — on kickoffs. “I would hit anybody,” Dempsey boasted, echoing the sentiment of Carlene, his wife of more than 40 years. “I didn’t care.” © 2013 The New York Times Company
Keyword: Brain Injury/Concussion
Link ID: 17721 - Posted: 01.28.2013
by Tracy Staedter In this sweet video, a wild bottlenose dolphin slowly approaches a diver, who is with a group that’s watching manta rays near Kona, Hawaii. The dolphin rolls to one side, apparently showing the diver, named Keller Laros, that it’s tangled in fishing net and has a hook stuck in its fin. According to Yahoo News, the dolphin surfaced once for a breath of air during the procedure and then returned to the diver, who finished the job of cutting away the net and removing the hook. Once the dolphin was free, it swam away. © 2013 Discovery Communications, LLC
By Stephen Ornes New babies eat, sleep, cry, poop — and listen. But their eavesdropping begins before birth and may include language lessons, says a new study. Scientists believe such early learning may help babies quickly understand their parents. Christine Moon is a psychologist at Pacific Lutheran University in Tacoma, Wash. She led the new study, to be published in February. “It seems that there is some prenatal learning of speech sounds, but we do not yet know how much,” she told Science News. A prenatal event happens before birth. Scientists have known that about 10 weeks before birth, a fetus can hear sounds outside the womb. Those sounds include the volume and rhythm of a person’s voice. But Moon found evidence that fetuses may also be starting to learn language itself. Moon and her coworkers tested whether newborns could detect differences in vowel sounds. These sounds are the loudest in human speech. Her team reports that newborns responded one way when they heard sounds like those from their parents’ language. And the newborns responded another way when they heard sounds like those from a foreign language. This was true among U.S. and Swedish babies who listened to sounds similar to English vowels and Swedish vowels. These responses show that shortly after birth, babies can group together familiar speech sounds, Moon told Science News. © 2013 Copyright Science News for Kids
Some animals are more eloquent than previously thought and have a communication structure similar to the vowel and consonant system of humans, according to new research. Studying the abbreviated call of the mongoose, researchers at the University of Zurich have found they are the first animals known to communicate with sound units that are even smaller than syllables and yet still contain information about who is calling and why. Usually, animals can only produce a limited number of distinguishable sounds and calls due to their anatomy. While whale and bird songs are a little more complex than most animal sounds — in that they are repeatedly combined with new arrangements — they don’t pattern themselves after human syllables with their combination of vowels and consonants. Studying wild banded mongooses in Uganda, behavioural biologists discovered that the calls of the animals are structured and contain different information — a sound structure that has some similarities to the vowel and consonant system of human speech. Banded mongooses live in savannah regions south of the Sahara. They are small predators that live in groups of around 20 and are related to the meerkat. The scientists recorded calls of the mongoose and made acoustic analyses of them. The calls, which last between 50 and 150 milliseconds, could be compared to one "syllable," the researchers found. © CBC 2013
By MARY PILON and KEN BELSON The former N.F.L. linebacker Junior Seau had a degenerative brain disease linked to repeated head trauma when he committed suicide in the spring, the National Institutes of Health said Thursday. The findings were consistent with chronic traumatic encephalopathy, a degenerative brain disease widely connected to athletes who have absorbed frequent blows to the head, the N.I.H. said in a statement. Seau is the latest and most prominent player to be associated with the disease, which has bedeviled football in recent years as a proliferation of studies has exposed the possible long-term cognitive impact of head injuries sustained on the field. “The type of findings seen in Mr. Seau’s brain have been recently reported in autopsies of individuals with exposure to repetitive head injury,” the N.I.H. said, “including professional and amateur athletes who played contact sports, individuals with multiple concussions, and veterans exposed to blast injury and other trauma.” Since C.T.E. was diagnosed in the brain of the former Eagles defensive back Andre Waters after his suicide in 2006, the disease has been found in nearly every former player whose brain was examined posthumously. (C.T.E. can be diagnosed only posthumously.) Researchers at Boston University, who pioneered the study of C.T.E., have found it in 33 of the 34 brains of former N.F.L. players they have examined. The N.I.H. began its examination of Seau’s brain tissue in July. In addition to being reviewed by two federal neuropathologists, Seau’s brain was reviewed by three outside neuropathology experts who did not have knowledge of the source of the tissue. Upon initial examination “the brain looked normal,” according to the N.I.H. It was not until doctors looked under the microscope and used staining techniques that the C.T.E. abnormalities were seen. © 2013 The New York Times Company
Keyword: Brain Injury/Concussion
Link ID: 17668 - Posted: 01.12.2013
By Bruce Bower Babies may start to learn their mother tongues even before seeing their mothers’ faces. Newborns react differently to native and foreign vowel sounds, suggesting that language learning begins in the womb, researchers say. Infants tested seven to 75 hours after birth treated spoken variants of a vowel sound in their home language as similar, evidence that newborns regard these sounds as members of a common category, say psychologist Christine Moon of Pacific Lutheran University in Tacoma, Wash., and her colleagues. Newborns deemed different versions of a foreign vowel sound to be dissimilar and unfamiliar, the scientists report in an upcoming Acta Paediatrica. “It seems that there is some prenatal learning of speech sounds, but we do not yet know how much,” Moon says. Fetuses can hear outside sounds by about 10 weeks before birth. Until now, evidence suggested that prenatal learning was restricted to the melody, rhythm and loudness of voices (SN: 12/5/09, p. 14). Earlier investigations established that 6-month-olds group native but not foreign vowel sounds into categories. Moon and colleagues propose that, in the last couple months of gestation, babies monitor at least some vowels — the loudest and most expressive speech sounds — uttered by their mothers. © Society for Science & the Public 2000 - 2013
By Breanna Draxler Infants are known for their impressive ability to learn language, which most scientists say kicks in somewhere around the six-month mark. But a new study indicates that language recognition may begin even earlier, while the baby is still in the womb. Using a creative means of measurement, researchers found that babies could already recognize their mother tongue by the time they left their mothers’ bodies. The researchers tested American and Swedish newborns between seven hours and three days old. Each baby was given a pacifier hooked up to a computer. When the baby sucked on the pacifier, it triggered the computer to produce a vowel sound—sometimes in English and sometimes in Swedish. The vowel sound was repeated until the baby stopped sucking. When the baby resumed sucking, a new vowel sound would start. The sucking was used as a metric to determine the babies’ interest in each vowel sound. More interest meant more sucks, according to the study soon to be published in Acta Paediatrica. In both countries, babies sucked on the pacifier longer when they heard foreign vowel sounds as compared to those of their mom’s native language. The researchers suggest that this is because the babies already recognize the vowels from their mothers and were keen to learn new ones. Hearing develops in a baby’s brain at around the 30th week of pregnancy, which leaves the last 10 weeks of gestation for babies to put that newfound ability to work. Baby brains are quick to learn, so a better understanding of these mechanisms may help researchers figure out how to improve the learning process for the rest of us.
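The contingent-sucking procedure described above is, in effect, a simple event loop: each suck plays the current vowel sound, a pause switches playback to the next vowel, and the tally of sucks per vowel serves as the interest measure. A minimal sketch of that logic (the event stream, function name, and vowel labels here are hypothetical, invented for illustration; the study's actual apparatus was a pressure-sensing pacifier driving audio playback):

```python
def tally_interest(events, vowels):
    """events: sequence of 'suck' / 'pause' markers in time order.
    Each suck is credited to the currently playing vowel; each pause
    advances playback to the next vowel in the list (cycling)."""
    counts = {v: 0 for v in vowels}
    idx = 0
    for e in events:
        if e == "suck":
            counts[vowels[idx % len(vowels)]] += 1
        elif e == "pause":
            idx += 1
    return counts

# Five sucks to the first vowel, a pause, then nine to the second:
events = ["suck"] * 5 + ["pause"] + ["suck"] * 9
print(tally_interest(events, ["English /i/", "Swedish /y/"]))
```

In this toy run the second (foreign) vowel collects more sucks, which is the direction of the effect the study reports for both American and Swedish newborns.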
By Ben Thomas They called him “Diogenes the Cynic,” because “cynic” meant “dog-like,” and he had a habit of basking naked on the lawn while his fellow philosophers talked on the porch. While they debated the mysteries of the cosmos, Diogenes preferred to soak up some rays – some have called him the Jimmy Buffett of ancient Greece. Anyway, one morning, the great philosopher Plato had a stroke of insight. He caught everyone’s attention, gathered a crowd around him, and announced his deduction: “Man is defined as a hairless, featherless, two-legged animal!” Whereupon Diogenes abruptly leaped up from the lawn, dashed off to the marketplace, and burst back onto the porch carrying a plucked chicken – which he held aloft as he shouted, “Behold: I give you… Man!” I’m sure Plato was less than thrilled at this stunt, but the story reminds us that these early philosophers were still hammering out the most basic tenets of the science we now know as taxonomy: the grouping of objects from the world into abstract categories. This technique of chopping up reality wasn’t invented in ancient Greece, though. In fact, as a recent study shows, it’s fundamental to the way our brains work. At the most basic level, we don’t really perceive separate objects at all – we perceive our nervous systems’ responses to a boundless flow of electromagnetic waves and biochemical reactions. Our brains slot certain neural response patterns into sensory pathways we call “sight,” “smell” and so on – but abilities like synesthesia and echolocation show that even the boundaries between our senses can be blurry. © 2012 Scientific American
By Daisy Yuhas At least 1 in 4000 infants is born without a corpus callosum. This powerful body of connective white matter serves as the primary bridge between the brain’s hemispheres, allowing us to rapidly integrate complex information. “It’s a hidden disability,” says California Institute of Technology psychologist Lynn Paul. Many born without this structure go undiagnosed for years—only neuroimaging can confirm the agenesis, or failed development, of this brain area. Instead people are diagnosed with disorders such as autism, depression, or ADHD. People born without a corpus callosum face many challenges. Some have other brain malformations as well—and as a result individuals can exhibit a range of behavioral and cognitive outcomes, from severe cognitive deficits to mild learning delays. Paul is also the founding president of the National Organization for Disorders of the Corpus Callosum, a non-profit that offers resources and support to those affected and their families. She believes psychologists and neuroscientists can learn much from this disorder, including how varied biological problems can result in the same behavioral outcomes. But what may be most remarkable is how the acallosal brain adapts to its limitations and finds new connective routes. Precisely how the brain does this is a biological mystery, but there are several possible routes of compensation, which effectively re-route the brain’s connections in novel ways. Similarly, each individual born with this condition must find his or her own way to overcome unique challenges. As is clear from their stories, individuals often find strength in one another and in sharing their experiences. © 2012 Scientific American
By KEN BELSON The growing evidence of a link between head trauma and long-term, degenerative brain disease was amplified in an extensive study of athletes, military veterans and others who absorbed repeated hits to the head, according to new findings published in the scientific journal Brain. The study, which included brain samples taken posthumously from 85 people who had histories of repeated mild traumatic brain injury, added to the mounting body of research revealing the possible consequences of routine hits to the head in sports like football and hockey. The possibility that such mild head trauma could result in long-term cognitive impairment has come to vex sports officials, team doctors, athletes and parents in recent years. Of the group of 85 people, 80 percent (68 men) — nearly all of whom played sports — showed evidence of chronic traumatic encephalopathy, or C.T.E., a degenerative and incurable disease whose symptoms can include memory loss, depression and dementia. Among the group found to have C.T.E., 50 were football players, including 33 who played in the N.F.L. Among them were stars like Dave Duerson, Cookie Gilchrist and John Mackey. Many of the players were linemen and running backs, positions that tend to have more contact with opponents. Six high school football players, nine college football players, seven pro boxers and four N.H.L. players, including Derek Boogaard, the former hockey enforcer who died from an accidental overdose of alcohol and painkillers, also showed signs of C.T.E. The study also included 21 veterans, most of whom were also athletes, who showed signs of C.T.E. © 2012 The New York Times Company
Keyword: Brain Injury/Concussion
Link ID: 17565 - Posted: 12.03.2012
Philip Ball Learning to read Chinese might seem daunting to Westerners used to an alphabetic script, but brain scans of French and Chinese native speakers show that people harness the same brain centres for reading across cultures. The findings are published today in the Proceedings of the National Academy of Sciences1. Reading involves two neural systems: one that recognizes the shape of the word and a second that assesses the physical movements used to make the marks on a page, says study leader Stanislas Dehaene, a cognitive neuroscientist at the National Institute of Health and Medical Research in Gif-sur-Yvette, France. But it has been unclear whether the brain networks responsible for reading are universal or culturally distinct. Previous studies have suggested that alphabetic writing systems (such as French) and logographic ones (such as Chinese, in which single characters represent entire words) might engage different networks in the brain. To explore this question, Dehaene and his colleagues used functional magnetic resonance imaging to examine brain activity in Chinese and French people while they read their native languages. The researchers found that both Chinese and French people use the visual and gestural systems while reading their native language, but with different emphases that reflect the different demands of each language. © 2012 Nature Publishing Group
Link ID: 17540 - Posted: 11.27.2012
By Bruce Bower MINNEAPOLIS — Baboons use the order of regularly appearing letter pairs to tell words from nonwords, new evidence suggests. Psychologist Jonathan Grainger of the University of Aix-Marseille reported earlier this year that baboons can learn to tell real four-letter words from nonsense words (SN: 5/5/12, p. 5). But whether these animals detect signature letter combinations that enable their impressive word feats has been tough to demonstrate. Monkeys that previously learned to excel on this task are more likely to mistake nonwords created by reversing two letters of a word they already recognize as real, much as literate people do, Grainger reported November 16 at the Psychonomics Society annual meeting. “Letters played a role in baboons’ word knowledge,” Grainger concluded. “This is a starting point for determining how they discriminate words from nonwords.” Grainger’s team tested the six baboons in their original investigation. Some of the monkeys had previously learned to recognize many more words than others. In new trials, the best word identifiers made more errors than their less successful peers when shown nonwords that differed from known words by a reversed letter combination, such as WSAP instead of WASP and KTIE instead of KITE. Grainger’s team fed the same series of words and nonwords into a computer simulation of the experiment. The computer model best reproduced the animals’ learning curves when endowed with a capacity for tracking letter combinations. © Society for Science & the Public 2000 - 2012
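The capacity the simulation was endowed with, tracking which letter pairs (bigrams) regularly co-occur in real words, can be sketched in a few lines. This toy version is not Grainger's actual model: the word list and the scoring function are invented for illustration. It does show why a transposed nonword such as WSAP looks less word-like than WASP even though it contains the same letters: the transposition destroys the familiar bigrams.

```python
from collections import Counter

def bigrams(word):
    """All adjacent letter pairs in a word, e.g. WASP -> WA, AS, SP."""
    return [word[i:i + 2] for i in range(len(word) - 1)]

# Tiny illustrative training lexicon (hypothetical; the study trained
# baboons on thousands of real four-letter English words).
lexicon = ["WASP", "KITE", "DONE", "LAND", "TELL", "VAST", "WISP", "KIND"]

# Count how often each bigram appears across the lexicon.
counts = Counter(bg for w in lexicon for bg in bigrams(w))

def familiarity(s):
    """Sum of the lexicon frequencies of s's bigrams: a crude word-likeness score."""
    return sum(counts[bg] for bg in bigrams(s))

# WASP keeps its familiar bigrams; the transposed WSAP loses all of them.
print(familiarity("WASP"), familiarity("WSAP"))
```

A model scoring strings this way would, like the baboons, be most prone to misclassify nonwords whose bigrams overlap heavily with known words, which is the error pattern Grainger reported.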
by Douglas Heaven MEANINGS of words can be hard to locate when they are on the tip of your tongue, let alone in the brain. Now, for the first time, patterns of brain activity have been matched with the meanings of specific words. The discovery is a step forward in our attempts to read thoughts from brain activity alone, and could help doctors identify awareness in people with brain damage. Machines can already eavesdrop on our brains to distinguish which words we are listening to, but Joao Correia at Maastricht University in the Netherlands wanted to get beyond the brain's representation of the words themselves and identify the activity that underlies their meaning. Somewhere in the brain, he hypothesised, written and spoken representations of words are integrated and meaning is processed. "We wanted to find the hub," he says. To begin the hunt, Correia and his colleagues used an fMRI scanner to study the brain activity of eight bilingual volunteers as they listened to the names of four animals, bull, horse, shark and duck, spoken in English. The team monitored patterns of neural activity in the left anterior temporal cortex - known to be involved in a range of semantic tasks - and trained an algorithm to identify which word a participant had heard based on the pattern of activity. Since the team wanted to pinpoint activity related to meaning, they picked words that were as similar as possible - all four contain one syllable and belong to the concept of animals. They also chose words that would have been learned at roughly the same time of life and took a similar time for the brain to process. © Copyright Reed Business Information Ltd.
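The decoding step described above, training an algorithm to label a word from a pattern of voxel activity, can be illustrated with a toy nearest-centroid classifier on simulated data. Everything in this sketch is an assumption for illustration only (the simulated "voxel" patterns, the noise model, and the classifier choice); the study used real fMRI data and its own analysis pipeline.

```python
import random

random.seed(0)

WORDS = ["bull", "horse", "shark", "duck"]
N_VOXELS = 20

# Hypothetical stand-in for fMRI data: each word evokes a noisy
# version of a word-specific "template" activity pattern across voxels.
templates = {w: [random.gauss(0, 1) for _ in range(N_VOXELS)] for w in WORDS}

def noisy_trial(word, noise=0.5):
    """One simulated trial: the word's template plus measurement noise."""
    return [v + random.gauss(0, noise) for v in templates[word]]

def train_centroids(trials_per_word=10):
    """Average several training trials per word into one centroid pattern."""
    return {
        w: [sum(vals) / trials_per_word
            for vals in zip(*(noisy_trial(w) for _ in range(trials_per_word)))]
        for w in WORDS
    }

def classify(pattern, centroids):
    """Label a pattern with the word whose centroid is nearest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda w: dist(pattern, centroids[w]))

centroids = train_centroids()
print(classify(noisy_trial("shark"), centroids))
```

The core idea carries over to the real experiment: if held-out trials are classified above chance, the monitored region carries word-specific information.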
Daniel Cressey Rappers making up rhymes on the fly while in a brain scanner have provided an insight into the creative process. Freestyle rapping — in which a performer improvises a song by stringing together unrehearsed lyrics — is a highly prized skill in hip hop. But instead of watching a performance in a club, Siyuan Liu and Allen Braun, neuroscientists at the US National Institute on Deafness and Other Communication Disorders in Bethesda, Maryland, and their colleagues had 12 rappers freestyle in a functional magnetic resonance imaging (fMRI) machine. The artists also recited a set of memorized lyrics chosen by the researchers. By comparing the brain scans from rappers taken during freestyling to those taken during the rote recitation, they were able to see which areas of the brain are used during improvisation. The study is published today in Scientific Reports1. The results parallel previous imaging studies in which Braun and Charles Limb, a doctor and musician at Johns Hopkins University in Baltimore, Maryland, looked at fMRI scans from jazz musicians2. Both sets of artists showed lower activity in part of their frontal lobes called the dorsolateral prefrontal cortex during improvisation, and increased activity in another area, called the medial prefrontal cortex. The areas that were found to be ‘deactivated’ are associated with regulating other brain functions. © 2012 Nature Publishing Group
By Meghan Rosen Michael McAlpine’s shiny circuit doesn’t look like something you would stick in your mouth. It’s dashed with gold, has a coiled antenna and is glued to a stiff rectangle. But the antenna flexes, and the rectangle is actually silk, its stiffness melting away under water. And if you paste the device on your tooth, it could keep you healthy. The electronic gizmo is designed to detect dangerous bacteria and send out warning signals, alerting its bearer to microbes slipping past the lips. Recently, McAlpine, of Princeton University, and his colleagues spotted a single E. coli bacterium skittering across the surface of the gadget’s sensor. The sensor also picked out ulcer-causing H. pylori amid the molecular medley of human saliva, the team reported earlier this year in Nature Communications. At about the size of a standard postage stamp, the dental device is still too big to fit comfortably in a human mouth. “We had to use a cow tooth,” McAlpine says, describing test experiments. But his team plans to shrink the gadget so it can nestle against human enamel. McAlpine is convinced that one day, perhaps five to 10 years from now, everyone will wear some sort of electronic device. “It’s not just teeth,” he says. “People are going to be bionic.” McAlpine belongs to a growing pack of tech-savvy scientists figuring out how to merge the rigid, brittle materials of conventional electronics with the soft, curving surfaces of human tissues. Their goal: To create products that have the high performance of silicon wafers — the crystalline material used in computer chips — while still moving with the body. © Society for Science & the Public 2000 - 2012
Link ID: 17455 - Posted: 11.05.2012
By Maureen McCarthy October 30th marked the five-year anniversary of the death of my friend Washoe. Washoe was a wonderful friend. She was confident and self-assured. She was a matriarch, a mother figure not only to her adopted son but to others as well. She was kind and caring, but she didn’t suffer fools. Washoe also happened to be known around the world as the first nonhuman to acquire aspects of a human language, American Sign Language. You see, my friend Washoe was a chimpanzee. Washoe was born somewhere in West Africa around September 1965. Much like the chimpanzees I study here in Uganda, Washoe’s mother cared for her during infancy, nursing her, carrying her, and sharing her sleeping nests with her. That changed when her mother was killed so baby Washoe could be taken from her forest home, then bought by the US Air Force for use in biomedical testing. Washoe was not used in this sort of testing, however. Instead, Drs. Allen and Beatrix Gardner of the University of Nevada chose her among the young chimpanzees at Holloman Aeromedical Laboratory to be cross-fostered. Cross-fostering occurs when a youngster of one species is reared by adults of a different species. In this case, humans raised Washoe exactly as if she were a deaf human child. She learned to brush her teeth, drink from cups, and dress herself, in the same way a human child learns these behaviors. She was also exposed to humans using sign language around her. In fact, humans used only American Sign Language (ASL) to communicate in Washoe’s presence, avoiding spoken English so as to replicate as accurately as possible the learning environment of a young human exposed to sign language. © 2012 Scientific American
SAM KIM, Associated Press SEOUL, South Korea (AP) — An elephant in a South Korean zoo is using his trunk to pick up not only food, but also human vocabulary. An international team of scientists confirmed Friday what the Everland Zoo has been saying for years: Their 5.5-ton tusker Koshik has an unusual and possibly unprecedented talent. The 22-year-old Asian elephant can reproduce five Korean words by tucking his trunk inside his mouth to modulate sound, the scientists said in a joint paper published online in Current Biology. They said he may have started imitating human speech because he was lonely. Koshik can reproduce "annyeong" (hello), "anja" (sit down), "aniya" (no), "nuwo" (lie down) and "joa" (good), the paper says. One of the researchers said there is no conclusive evidence that Koshik understands the sounds he makes, although the elephant does respond to words like "anja." Everland Zoo officials in the city of Yongin said Koshik also can imitate "ajik" (not yet), but the researchers haven't confirmed the accomplishment. Koshik is particularly good with vowels, with a rate of similarity of 67 percent, the researchers said. For consonants he scores only 21 percent. Researchers said the clearest scientific evidence that Koshik is deliberately imitating human speech is that the sound frequency of his words matches that of his trainers. © 2012 Hearst Communications Inc.
A screening test for children starting school that could accurately detect early signs of a persistent stutter is a step closer, experts say. The Wellcome Trust team says a specific speech test accurately predicts whose stutter will persist into their teens. About one in 20 develops a stutter before age five - but just one in 100 stutter as a teen and identifying these children has so far been difficult. Campaigners said it was key for children to be diagnosed early. Stuttering tends to start at about three years old. Four out of five will recover without intervention, often within a couple of years. But for one in five, their stutter will persist and early therapy can be of significant benefit. The researchers, based at University College London, used a test developed in the US called SSI-3 (stuttering severity instrument). In earlier work, they followed eight-year-olds with a stutter into their teens. They found that the SSI-3 test was a reliable indicator of who would still have a stutter and who would recover - while other indicators such as family history, which have been used, were less so. BBC © 2012
Ewen Callaway “Who told me to get out?” asked a diver, surfacing from a tank in which a whale named NOC lived. The beluga’s caretakers had heard what sounded like garbled phrases emanating from the enclosure before, and it suddenly dawned on them that the whale might be imitating the voices of his human handlers. The outbursts — described today in Current Biology1 and originally at a 1985 conference — began in 1984 and lasted for about four years, until NOC hit sexual maturity, says Sam Ridgway, a marine biologist at the National Marine Mammal Foundation in San Diego, California. He believes that NOC learned to imitate humans by listening to them speak underwater and on the surface. A few animals, including various marine mammals, songbirds and humans, routinely learn and imitate the songs and sounds of others. And Ridgway’s wasn’t the first observation of vocal mimicry in whales. In the 1940s, scientists heard wild belugas (Delphinapterus leucas) making calls that sounded like “children shouting in the distance”2. Decades later, keepers at the Vancouver Aquarium in Canada described a beluga that seemed to utter his name, Lagosi. Ridgway’s team recorded NOC, who is named after the tiny midges colloquially known as no-see-ums found near where he was legally caught by Inuit hunters in Manitoba, Canada, in the late 1970s. His human-like calls are several octaves lower than normal whale calls, a similar pitch to human speech. After training NOC to 'speak' on command, Ridgway’s team determined that he makes the sounds by increasing the pressure of the air that courses through his nasal cavities. They think that he then modifies the sounds by manipulating the shape of his phonic lips, small vibrating structures that sit above each nasal cavity. © 2012 Nature Publishing Group
Young people who sustain brain injuries are more likely to commit crimes and end up in prison, research suggests. The University of Exeter study says such injuries can lead maturing brains to "misfire", affecting judgement and the ability to control impulses. It calls for greater monitoring and treatment to prevent later problems. The findings echo a separate report by the Children's Commissioner for England on the impact of injuries on maturing brains and the social consequences. In the report, Repairing Shattered Lives, Professor Huw Williams from the University of Exeter's Centre for Clinical Neuropsychology Research, describes traumatic brain injury as a "silent epidemic". It is said to occur most frequently among children and young people who have fallen over or been playing sport, as well as those involved in fights or road accidents. The consequences can include loss of memory, with the report citing international research which indicates the level of brain injuries among offenders is much higher than in the general population. A survey of 200 adult male prisoners in Britain found 60% claimed to have suffered a head injury, it notes. The report acknowledges there may be underlying risk factors for brain injury and offending behaviour but says improving treatment and introducing screening for young offenders would deliver significant benefits in terms of reducing crime and saving public money. BBC © 2012