Links for Keyword: Language

Links 61 - 80 of 491

By Maureen McCarthy October 30th marked the fifth anniversary of the death of my friend Washoe. Washoe was a wonderful friend. She was confident and self-assured. She was a matriarch, a mother figure not only to her adopted son but to others as well. She was kind and caring, but she didn’t suffer fools. Washoe also happened to be known around the world as the first nonhuman to acquire aspects of a human language, American Sign Language. You see, my friend Washoe was a chimpanzee. Washoe was born somewhere in West Africa around September 1965. Much like the chimpanzees I study here in Uganda, Washoe’s mother cared for her during infancy, nursing her, carrying her, and sharing her sleeping nests with her. That changed when her mother was killed so baby Washoe could be taken from her forest home, then bought by the US Air Force for use in biomedical testing. Washoe was not used in this sort of testing, however. Instead, Drs. Allen and Beatrix Gardner of the University of Nevada chose her from among the young chimpanzees at Holloman Aeromedical Laboratory to be cross-fostered. Cross-fostering occurs when a youngster of one species is reared by adults of a different species. In this case, humans raised Washoe exactly as if she were a deaf human child. She learned to brush her teeth, drink from cups, and dress herself, in the same way a human child learns these behaviors. She was also exposed to humans using sign language around her. In fact, humans used only American Sign Language (ASL) to communicate in Washoe’s presence, avoiding spoken English so as to replicate as accurately as possible the learning environment of a young human exposed to sign language. © 2012 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17451 - Posted: 11.05.2012

SAM KIM, Associated Press SEOUL, South Korea (AP) — An elephant in a South Korean zoo is using his trunk to pick up not only food, but also human vocabulary. An international team of scientists confirmed Friday what the Everland Zoo has been saying for years: Their 5.5-ton tusker Koshik has an unusual and possibly unprecedented talent. The 22-year-old Asian elephant can reproduce five Korean words by tucking his trunk inside his mouth to modulate sound, the scientists said in a joint paper published online in Current Biology. They said he may have started imitating human speech because he was lonely. Koshik can reproduce "annyeong" (hello), "anja" (sit down), "aniya" (no), "nuwo" (lie down) and "joa" (good), the paper says. One of the researchers said there is no conclusive evidence that Koshik understands the sounds he makes, although the elephant does respond to words like "anja." Everland Zoo officials in the city of Yongin said Koshik also can imitate "ajik" (not yet), but the researchers haven't confirmed the accomplishment. Koshik is particularly good with vowels, which he matches with 67 percent similarity, the researchers said. For consonants he scores only 21 percent. Researchers said the clearest scientific evidence that Koshik is deliberately imitating human speech is that the sound frequency of his words matches that of his trainers. © 2012 Hearst Communications Inc.
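
The paper's key acoustic claim, that the sound frequency of Koshik's words matches that of his trainers, rests on estimating and comparing frequencies across the two sets of recordings. Below is a minimal sketch of that kind of comparison, not the paper's actual analysis: it uses synthetic stand-in signals and a crude autocorrelation pitch estimator, and every value in it is invented.

```python
import numpy as np

def estimate_f0(signal, sr, fmin=60.0, fmax=400.0):
    """Crude autocorrelation-based fundamental-frequency estimate."""
    signal = signal - signal.mean()
    # autocorrelation at non-negative lags
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(corr[lo:hi])   # best lag in the plausible pitch range
    return sr / lag

sr = 8000
t = np.arange(sr // 2) / sr                  # half a second of audio
koshik = np.sin(2 * np.pi * 220 * t)         # stand-in for an elephant "word"
trainer = np.sin(2 * np.pi * 210 * t)        # stand-in for a trainer's utterance

print(f"Koshik ~{estimate_f0(koshik, sr):.0f} Hz, "
      f"trainer ~{estimate_f0(trainer, sr):.0f} Hz")   # similar pitches
```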

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17446 - Posted: 11.03.2012

A screening test for children starting school that could accurately detect early signs of a persistent stutter is a step closer, experts say. The Wellcome Trust team says a specific speech test accurately predicts whose stutter will persist into their teens. About one in 20 develops a stutter before age five - but just one in 100 stutters as a teen, and identifying these children has so far been difficult. Campaigners said it was key for children to be diagnosed early. Stuttering tends to start at about three years old. Four out of five will recover without intervention, often within a couple of years. But for one in five, their stutter will persist and early therapy can be of significant benefit. The researchers, based at University College London, used a test developed in the US called the SSI-3 (Stuttering Severity Instrument). In earlier work, they followed eight-year-olds with a stutter into their teens. They found that the SSI-3 test was a reliable indicator of who would still have a stutter and who would recover - while other indicators such as family history, which have been used, were less so. BBC © 2012

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17432 - Posted: 10.29.2012

Ewen Callaway “Who told me to get out?” asked a diver, surfacing from a tank in which a whale named NOC lived. The beluga’s caretakers had heard what sounded like garbled phrases emanating from the enclosure before, and it suddenly dawned on them that the whale might be imitating the voices of his human handlers. The outbursts — described today in Current Biology and originally at a 1985 conference — began in 1984 and lasted for about four years, until NOC hit sexual maturity, says Sam Ridgway, a marine biologist at the National Marine Mammal Foundation in San Diego, California. He believes that NOC learned to imitate humans by listening to them speak underwater and on the surface. A few animals, including various marine mammals, songbirds and humans, routinely learn and imitate the songs and sounds of others. And Ridgway’s wasn’t the first observation of vocal mimicry in whales. In the 1940s, scientists heard wild belugas (Delphinapterus leucas) making calls that sounded like “children shouting in the distance”. Decades later, keepers at the Vancouver Aquarium in Canada described a beluga that seemed to utter his name, Lagosi. Ridgway’s team recorded NOC, who is named after the tiny midges colloquially known as no-see-ums found near where he was legally caught by Inuit hunters in Manitoba, Canada, in the late 1970s. His human-like calls are several octaves lower than normal whale calls, a similar pitch to human speech. After training NOC to 'speak' on command, Ridgway’s team determined that he makes the sounds by increasing the pressure of the air that courses through his nasal cavities. They think that he then modifies the sounds by manipulating the shape of his phonic lips, small vibrating structures that sit above each nasal cavity. © 2012 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17407 - Posted: 10.23.2012

Mo Costandi The growth pattern of long-range connections in the brain predicts how a child’s reading skills will develop, according to research published today in Proceedings of the National Academy of Sciences. Literacy requires the integration of activity in brain areas involved in vision, hearing and language. These areas are distributed throughout the brain, so efficient communication between them is essential for proficient reading. Jason Yeatman, a neuroscientist at Stanford University in California, and his colleagues studied how the development of reading ability relates to growth in the brain’s white-matter tracts, the bundles of nerve fibres that connect distant regions of the brain. They tested how the reading skills of 55 children aged between 7 and 12 developed over a three-year period. There were big differences in reading ability between the children, and these differences persisted — the children who were weak readers relative to their peers at the beginning of the study were still weak three years later. The researchers also scanned the brains of 39 of the children at least three times during the same period, to visualize the growth of two major white-matter tracts: the arcuate fasciculus, which connects the brain's language centres, and the inferior longitudinal fasciculus, which links the language centres with the parts of the brain that process visual information. © 2012 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17350 - Posted: 10.09.2012

By Frances Stead Sellers Carolyn McCaskill remembers exactly when she discovered that she couldn’t understand white people. It was 1968, she was 15 years old, and she and nine other deaf black students had just enrolled in an integrated school for the deaf in Talladega, Ala. When the teacher got up to address the class, McCaskill was lost. “I was dumbfounded,” McCaskill recalls through an interpreter. “I was like, ‘What in the world is going on?’ ” The teacher’s quicksilver hand movements looked little like the sign language McCaskill had grown up using at home with her two deaf siblings and had practiced at the Alabama School for the Negro Deaf and Blind, just a few miles away. It wasn’t a simple matter of people at the new school using unfamiliar vocabulary; they made hand movements for everyday words that looked foreign to McCaskill and her fellow black students. So, McCaskill says, “I put my signs aside.” She learned entirely new signs for such common nouns as “shoe” and “school.” She began to communicate words such as “why” and “don’t know” with one hand instead of two as she and her black friends had always done. She copied the white students who lowered their hands to make the signs for “what for” and “know” closer to their chins than to their foreheads. And she imitated the way white students mouthed words at the same time as they made manual signs for them. Whenever she went home, McCaskill carefully switched back to her old way of communicating. © 1996-2012 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17271 - Posted: 09.18.2012

By Bruce Bower Indo-European languages range throughout Europe and South Asia and even into Iran, yet the roots of this widespread family of tongues have long been controversial. A new study adds support to the proposal that the language family expanded out of Anatolia — what’s now Turkey — between 8,000 and 9,500 years ago, as early farmers sought new land to cultivate. A team led by psychologist Quentin Atkinson of the University of Auckland in New Zealand came to that conclusion by using a mathematical method to calculate the most likely starting point and pattern of geographic spread for a large set of Indo-European languages. The new investigation, published in the Aug. 24 Science, rejects a decades-old idea that Kurgan warriors riding horses and driving chariots out of West Asia’s steppes 5,000 to 6,000 years ago triggered the rise of Indo-European speakers. “Our analysis finds decisive support for an Anatolian origin over a steppe origin of Indo-European languages,” Atkinson says. He and his colleagues generated likely family trees for Indo-European languages, much as geneticists use DNA from different individuals to reconstruct humankind’s genetic evolution. Many linguists, who compare various features of languages to establish their historical connections, consider Atkinson’s statistical approach unreliable (SN: 11/19/11, p. 22). © Society for Science & the Public 2000 - 2012

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17200 - Posted: 08.25.2012

by Douglas Heaven Watch where you look – it can be used to predict what you'll say. A new study shows that it is possible to guess what sentences people will use to describe a scene by tracking their eye movements. Moreno Coco and Frank Keller at the University of Edinburgh, UK, presented 24 volunteers with a series of photo-realistic images depicting indoor scenes such as a hotel reception. They then tracked the sequence of objects that each volunteer looked at after being asked to describe what they saw. Other than being prompted with a keyword, such as "man" or "suitcase", participants were free to describe the scene however they liked. Some typical sentences included "the man is standing in the reception of a hotel" or "the suitcase is on the floor". The order in which a participant's gaze settled on objects in each scene tended to mirror the order of nouns in the sentence used to describe it. "We were surprised there was such a close correlation," says Keller. Given that multiple cognitive processes are involved in sentence formation, Coco says "it is remarkable to find evidence of similarity between speech and visual attention". Word prediction The team used the discovery to see if they could predict what sentences would be used to describe a scene based on eye movement alone. They developed an algorithm that was able to use the eye gazes recorded from the previous experiment to predict the correct sentence from a choice of 576 descriptions. © Copyright Reed Business Information Ltd.
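
The article doesn't spell out Coco and Keller's algorithm, but the core finding, that gaze order mirrors noun order, suggests a simple scoring scheme. The toy sketch below (object labels and candidate sentences invented; the real model was certainly richer) picks the candidate description whose noun order best agrees with the order in which objects were fixated.

```python
def order_agreement(gaze_seq, nouns):
    """Fraction of noun pairs whose order matches the gaze order."""
    pos = {}
    for i, obj in enumerate(gaze_seq):
        pos.setdefault(obj, i)          # first fixation wins
    mentioned = [n for n in nouns if n in pos]
    if len(mentioned) < 2:
        return 0.0
    pairs = [(a, b) for i, a in enumerate(mentioned) for b in mentioned[i + 1:]]
    return sum(pos[a] <= pos[b] for a, b in pairs) / len(pairs)

def predict_sentence(gaze_seq, candidates):
    """candidates: list of (sentence, ordered noun list) pairs."""
    return max(candidates, key=lambda c: order_agreement(gaze_seq, c[1]))[0]

gaze = ["man", "reception", "suitcase"]      # fixation order in one trial
candidates = [
    ("the man is standing in the reception of a hotel",
     ["man", "reception", "hotel"]),
    ("the suitcase is on the floor", ["suitcase", "floor"]),
]
print(predict_sentence(gaze, candidates))    # the man is standing in ...
```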

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17087 - Posted: 07.25.2012

By Brian Palmer, A friend recently asked me whether black bears in Appalachia have Southern accents and whether they have trouble understanding black bears raised in Canada or Alaska. Taken literally, those are notions more fit for a Disney movie than a scientist. In a more abstract sense, however, it’s a profound inquiry that fascinates zoologists and psychologists alike. Is communication learned or innate in nonhuman animals? Can geographically distant groups of the same species develop local culture: unique ways of eating, playing and talking to each other? I posed those questions to Darcy Kelley, a Columbia University professor who studies animal communications. “In most species, communication appears to have a genetic basis,” she said. “Regional accents can only develop in the small number of species that learn their vocalizations from others.” Research suggests that the overwhelming majority of animals are born knowing how to speak their species’s language. It doesn’t really matter where those animals are born or raised, because their speech seems to be mostly imprinted in their genetic code. University of Pennsylvania psychologist Bob Seyfarth and biologist Dorothy Cheney conducted a classic experiment on this question. They switched a pair of rhesus macaques and a pair of Japanese macaques shortly after birth, so that the Japanese macaque parents raised the rhesus macaque babies, and the rhesus macaque parents raised the Japanese macaque babies. © 1996-2012 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16973 - Posted: 06.27.2012

Ewen Callaway Ten years ago, psychiatrist David Skuse met a smart, cheery five-year-old boy whose mother was worried because her son had trouble following conversations with other kids at school. He struggled to remember names and often couldn’t summon the words for simple things such as toothpaste. Skuse is an expert on language development at the Institute of Child Health at University College London, but he had never encountered anything like the boy’s condition. His scientific curiosity was piqued when the mother, who is bilingual, mentioned her own difficulties remembering words in English, her native tongue. Her mother, too, had trouble recounting what had happened in television shows she had just seen. “The family history of this word-finding problem needs further investigation,” Skuse noted at the time. About half the members of this family, dubbed JR, share similar language deficits and brain abnormalities. These deficits seem to be inherited across at least four generations, Skuse and his colleagues report today in Proceedings of the Royal Society B. Identifying the genetic basis of the family’s unique trait — which they call the ‘family problem’ — could help to explain how our brains link words to objects, concepts and ideas. “It’s like that tip-of-the-tongue moment; you’re struggling to find a word,” says Josie Briscoe, a cognitive psychologist at the University of Bristol, UK, and a study co-author. The researchers tested eight JR family members on a number of language and memory tasks to better understand their deficits. © 2012 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16943 - Posted: 06.20.2012

By Morgen E. Peck Like a musician tuning a guitar, adults subconsciously listen to their own voice to tune the pitch, volume and pronunciation of their speech. Young children just learning how to talk, however, do not, a new study suggests. The result offers clues about how kids learn language—and how parents can help. Past studies have shown that adults use aural feedback to tweak their pronunciation. Ewen MacDonald, a professor at the Center for Applied Hearing Research at the Technical University of Denmark, decided to see if toddlers could do this as well. He had adults and children play a video game in which they guided the actions of a robot by repeating the word “bed.” Through headphones, the players heard their own voice every time they spoke—but with the frequency spectrum shifted so they heard “bad” instead of “bed.” MacDonald found that adults and four-year-old kids tried to compensate for the error by pronouncing the word more like “bid,” but two-year-olds never budged from “bed,” suggesting that they were not using auditory feedback to monitor their speech. Although the toddlers may have been suppressing the feedback mechanism, MacDonald thinks they might not start listening to themselves until they are older. If that is the case, they may rely heavily on feedback from adults to gauge how they sound. Indeed, most parents and caregivers naturally repeat the words toddlers say, as praise and encouragement. “I think the real take-home message is that social interaction is important for the development of speech,” MacDonald says. “The general act of talking and interacting with the child in a normal way is the key.” © 2012 Scientific American
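
The experimental trick here was a real-time shift of the speech spectrum so that "bed" came back sounding like "bad"; in practice this is done by shifting formants with vocoder-style processing. As a rough offline illustration only, and emphatically not the apparatus used in the study, a whole-spectrum shift can be sketched with an FFT:

```python
import numpy as np

def crude_spectral_shift(x, sr, shift_hz):
    """Shift ALL frequency content up by shift_hz. Real feedback-perturbation
    rigs shift individual formants in real time; this is just a toy."""
    X = np.fft.rfft(x)
    k = int(round(shift_hz * len(x) / sr))   # shift expressed in FFT bins
    Y = np.zeros_like(X)
    Y[k:] = X[:len(X) - k]
    return np.fft.irfft(Y, n=len(x))

sr = 8000
t = np.arange(sr) / sr
vowel = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 600 * t)
shifted = crude_spectral_shift(vowel, sr, 100.0)  # 600 Hz energy moves to ~700 Hz
```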

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 16871 - Posted: 06.05.2012

Analysis by Jennifer Viegas Monkeys smack their lips during friendly face-to-face encounters, and now a new study says that this seemingly simple behavior may be tied to human speech. Previously experts thought the evolutionary origins of human speech came from primate vocalizations, such as chimpanzee hoots or monkey coos. But now scientists suspect that rapid, controlled movements of the tongue, lips and jaw -- all of which are needed for lip smacking -- were more important to the emergence of speech. For the study, published in the latest Current Biology, W. Tecumseh Fitch and colleagues used x-ray movies to investigate lip-smacking gestures in macaque monkeys. Mother monkeys do this a lot with their infants, so it seems to be kind of an endearing thing, perhaps like humans going goo-goo-goo in a baby's face while playing. (Monkeys will also vibrate their lips to make a raspberry sound.) Monkey lip-smacking, however, makes a quiet sound, similar to "p p p p". It's not accompanied by phonation, meaning sound produced by vocal cord vibration in the larynx. Fitch, who is head of the Department of Cognitive Biology at the University of Vienna, and his team determined that lip-smacking is a complex behavior that requires rapid, coordinated movements of the lips, jaw, tongue and the hyoid bone (which provides the supporting skeleton for the larynx and tongue). © 2012 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16869 - Posted: 06.02.2012

by Catherine de Lange WHEN I was just a newborn baby, my mother gazed down at me in her hospital bed and did something that was to permanently change the way my brain developed. Something that would make me better at learning, multitasking and solving problems. Eventually, it might even protect my brain against the ravages of old age. Her trick? She started speaking to me in French. At the time, my mother had no idea that her actions would give me a cognitive boost. She is French and my father English, so they simply felt it made sense to raise me and my brothers as bilingual. Yet as I've grown up, a mass of research has emerged to suggest that speaking two languages may have profoundly affected the way I think. Cognitive enhancement is just the start. According to some studies, my memories, values, even my personality, may change depending on which language I happen to be speaking. It is almost as if the bilingual brain houses two separate minds. All of which highlights the fundamental role of language in human thought. "Bilingualism is quite an extraordinary microscope into the human brain," says neuroscientist Laura Ann Petitto of Gallaudet University in Washington DC. The view of bilingualism has not always been this rosy. For many parents like mine, the decision to raise children speaking two languages was controversial. Since at least the 19th century, educators warned that it would confuse the child, making them unable to learn either language properly. At best, they thought the child would become a jack-of-all-trades and master of none. At worst, they suspected it might hinder other aspects of development, resulting in a lower IQ. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 16765 - Posted: 05.08.2012

By JAMES GORMAN First things first: The hyrax is not the Lorax. And it does not speak for the trees. It sings, on its own behalf. The hyrax is a bit Seussian, however. It looks something like a rabbit, something like a woodchuck. Its closest living relatives are elephants, manatees and dugongs. And male rock hyraxes have complex songs like those of birds, in the sense that males will go on for 5 or 10 minutes at a stretch, apparently advertising themselves. One might have expected that the hyrax would have some unusual qualities — the animals’ feet, if you know how to look at them, resemble elephants’ toes, the experts say. And their visible front teeth are actually very small tusks. But Arik Kershenbaum and colleagues at the University of Haifa and Tel Aviv University have found something more surprising. Hyraxes’ songs have something rarely found in mammals: syntax that varies according to where the hyraxes live, geographical dialects in how they put their songs together. The research was published online Wednesday in The Proceedings of the Royal Society B. Bird songs show syntax, this ordering of song components in different ways, but very few mammals make such orderly, arranged sounds. Whales, bats and some primates show syntax in their vocalizations, but nobody really expected such sophistication from the hyrax, and it was thought that the selection of sounds in the songs was relatively random. © 2012 The New York Times Company
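
"Syntax" here means that the ordering of song elements is non-random and differs by region. A common way to quantify that, offered only as an assumption about how such an analysis might look rather than the paper's method, is to estimate transition probabilities between song elements for each region and compare them. The element names below echo the wails, chucks and snorts used to describe hyrax song, but the sequences are invented.

```python
from collections import defaultdict

def transition_matrix(songs):
    """songs: list of element sequences, e.g. ["wail", "chuck", "snort"]."""
    counts = defaultdict(lambda: defaultdict(int))
    for song in songs:
        for a, b in zip(song, song[1:]):
            counts[a][b] += 1
    matrix = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        matrix[a] = {b: n / total for b, n in nxt.items()}
    return matrix

# Invented sequences standing in for songs recorded at two sites.
north = [["wail", "chuck", "snort"], ["wail", "chuck", "chuck", "snort"]]
south = [["chuck", "wail", "snort"], ["chuck", "wail", "wail", "snort"]]

print(transition_matrix(north))   # "wail" is always followed by "chuck"
print(transition_matrix(south))   # here "chuck" is always followed by "wail"
```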

Related chapters from BP7e: Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 8: Hormones and Sex; Chapter 15: Language and Our Divided Brain
Link ID: 16685 - Posted: 04.21.2012

by Erin Loury Monkeys banging on typewriters might never reproduce the works of Shakespeare, but they may be closer to reading Hamlet than we thought. Scientists have trained baboons to distinguish English words from similar-looking nonsense words by recognizing common arrangements of letters. The findings indicate that visual word recognition, the most basic step of reading, can be learned without any knowledge of spoken language. The study builds on the idea that when humans read, our brains first have to recognize individual letters, as well as their order. "We're actually reading words much like we identify any kind of visual object, like we identify chairs and tables," says study author Jonathan Grainger, a cognitive psychologist at France's National Center for Scientific Research, and Aix-Marseille University in Marseille, France. Our brains construct words from an assembly of letters like they recognize tables as a surface connected to four legs, Grainger says. Much of the current reading research has stressed that readers first need to have familiarity with spoken language, so they can connect sounds (or hand signs for the hearing-impaired) with the letters they see. Grainger and his colleagues wanted to test whether it's possible to learn the letter patterns of words without any idea of what they mean or how they sound—that is, whether a monkey could do it. The scientists used a unique testing facility, consisting of a trailer with computers set up next to a baboon enclosure, which the animals could enter at will and perform trials on the touch-screen computers for as long as they pleased. The computers cued up the appropriate test for each of the six study baboons using microchips in their arms. When letters appeared on the monitor, the baboons got wheat rewards for touching the correct shape on the screen: an oval on the right of the screen if the word was real, and a cross on the left if it was nonsense. © 2010 American Association for the Advancement of Science.
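
The "common arrangements of letters" the baboons learned can be modeled as bigram statistics. The sketch below is a toy version of that idea (the animals were trained on thousands of real four-letter English words, not this invented mini-lexicon): score a string by the average training frequency of its letter pairs, so that word-like strings outscore random ones.

```python
from collections import Counter

def bigrams(s):
    return [s[i:i + 2] for i in range(len(s) - 1)]

# Tiny stand-in lexicon; the study used a far larger set of real words.
lexicon = ["done", "land", "them", "vast", "kite", "then", "sand"]
counts = Counter(b for w in lexicon for b in bigrams(w))
total = sum(counts.values())

def bigram_score(s):
    """Mean training frequency of the string's letter bigrams."""
    bs = bigrams(s)
    return sum(counts[b] / total for b in bs) / len(bs)

# A word-like nonword reuses frequent bigrams; a random string does not.
for s in ["dane", "tzlk"]:
    print(s, round(bigram_score(s), 3))      # dane 0.048, tzlk 0.0
```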

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16647 - Posted: 04.14.2012

A personality profile marked by overly gregarious yet anxious behavior is rooted in abnormal development of a circuit hub buried deep in the front center of the brain, say scientists at the National Institutes of Health. They used three different types of brain imaging to pinpoint the suspect brain area in people with Williams syndrome, a rare genetic disorder characterized by these behaviors. Matching the scans to scores on a personality rating scale revealed that the more an individual with Williams syndrome showed these personality/temperament traits, the more abnormalities there were in the brain structure, called the insula. "Scans of the brain's tissue composition, wiring, and activity produced converging evidence of genetically-caused abnormalities in the structure and function of the front part of the insula and in its connectivity to other brain areas in the circuit," explained Karen Berman, M.D., of the NIH's National Institute of Mental Health (NIMH). Berman, Drs. Mbemba Jabbi and Shane Kippenhan, and colleagues report on their imaging study in Williams syndrome online in the journal Proceedings of the National Academy of Sciences. Williams syndrome is caused by the deletion of some 28 genes, many involved in brain development and behavior, in a particular section of chromosome 7. Among deficits characteristic of the syndrome are a lack of visual-spatial ability, such as is required to assemble a puzzle, and a tendency to be overly friendly with people, while overly anxious about non-social matters, such as spiders or heights. Many people with the disorder are also mentally challenged and learning disabled, but some have normal IQs.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 16569 - Posted: 03.24.2012

By Emily Sohn In his spare time, an otherwise ordinary 16-year-old boy from New York taught himself Hebrew, Arabic, Russian, Swahili, and a dozen other languages, the New York Times reported last week. And even though it's not entirely clear how close to fluent Timothy Doner is in any of his studied languages, the high school sophomore -- along with other polyglots like him -- is certainly different from most Americans, who speak one or maybe two languages. That raises the question: Is there something unique about certain brains that allows some people to speak and understand so many more languages than the rest of us? The answer, experts say, seems to be yes, no and it's complicated. For some people, genes may prime the brain to be good at language learning, according to some new research. And studies are just starting to pinpoint a few brain regions that are extra-large or extra-efficient in people who excel at languages. For others, though, it's more a matter of being determined and motivated enough to put in the hours and hard work necessary to learn new ways of communicating. "Kids do well in what they like," said Michael Paradis, a neurolinguist at McGill University in Montreal, who compared language learning to piano, sports or anything else that requires discipline. "Kids who love math do well in math. He loves languages and is doing well in languages." © 2012 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 16547 - Posted: 03.20.2012

By YUDHIJIT BHATTACHARJEE SPEAKING two languages rather than just one has obvious practical benefits in an increasingly globalized world. But in recent years, scientists have begun to show that the advantages of bilingualism are even more fundamental than being able to converse with a wider range of people. Being bilingual, it turns out, makes you smarter. It can have a profound effect on your brain, improving cognitive skills not related to language and even shielding against dementia in old age. This view of bilingualism is remarkably different from the understanding of bilingualism through much of the 20th century. Researchers, educators and policy makers long considered a second language to be an interference, cognitively speaking, that hindered a child’s academic and intellectual development. They were not wrong about the interference: there is ample evidence that in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve internal conflict, giving the mind a workout that strengthens its cognitive muscles. Bilinguals, for instance, seem to be more adept than monolinguals at solving certain kinds of mental puzzles. In a 2004 study by the psychologists Ellen Bialystok and Michelle Martin-Rhee, bilingual and monolingual preschoolers were asked to sort blue circles and red squares presented on a computer screen into two digital bins — one marked with a blue square and the other marked with a red circle. © 2012 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 16542 - Posted: 03.19.2012

By ANNIE MURPHY PAUL Brain scans are revealing what happens in our heads when we read a detailed description, an evocative metaphor or an emotional exchange between characters. Stories, this research is showing, stimulate the brain and even change how we act in life. Researchers have long known that the “classical” language regions, like Broca’s area and Wernicke’s area, are involved in how the brain interprets written words. What scientists have come to realize in the last few years is that narratives activate many other parts of our brains as well, suggesting why the experience of reading can feel so alive. Words like “lavender,” “cinnamon” and “soap,” for example, elicit a response not only from the language-processing areas of our brains, but also those devoted to dealing with smells. In a 2006 study published in the journal NeuroImage, researchers in Spain asked participants to read words with strong odor associations, along with neutral words, while their brains were being scanned by a functional magnetic resonance imaging (fMRI) machine. When subjects looked at the Spanish words for “perfume” and “coffee,” their primary olfactory cortex lit up; when they saw the words that mean “chair” and “key,” this region remained dark. The way the brain handles metaphors has also received extensive study; some scientists have contended that figures of speech like “a rough day” are so familiar that they are treated simply as words and no more. Last month, however, a team of researchers from Emory University reported in Brain & Language that when subjects in their laboratory read a metaphor involving texture, the sensory cortex, responsible for perceiving texture through touch, became active. Metaphors like “The singer had a velvet voice” and “He had leathery hands” roused the sensory cortex, while phrases matched for meaning, like “The singer had a pleasing voice” and “He had strong hands,” did not. © 2012 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 14: Attention and Consciousness
Link ID: 16536 - Posted: 03.19.2012

By Karen Weintraub Marjorie Nicholas, associate chairwoman of the department of communication sciences and disorders at the MGH Institute of Health Professions, is an expert in the language disorder aphasia, and has been treating former Arizona Representative Gabrielle Giffords, who has the condition. Q. What is aphasia and how do people get it? A. Aphasia affects your ability to speak, to understand language, to read and to write. It’s extremely variable. Some might have a severe problem in expression but really pretty good understanding of spoken language, and somebody else might have a very different profile. Typically, people get aphasia by having a stroke that damages parts of the left side of the brain, which is dominant for language. People can also get aphasia from other types of injuries like head injuries, or in Gabby’s case, a gunshot wound to the head that damages that same language area of the brain. It is more common than people realize. Q. How does Giffords fit into the spectrum of symptoms you’ve described? A. Her understanding of spoken language is really very good. Her difficulties are more in the expression. Q. You obviously can’t violate her privacy, but what can you say about your work with Giffords? A. I worked with her for two weeks last fall, and [colleague Nancy Helm-Estabrooks of the University of North Carolina] and I are planning to work with her again for a week this spring. We’ll need to see where she is again. I’m assuming she will have continued to improve and we’ll want to keep her going on that track. © 2012 NY Times Co

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16444 - Posted: 02.28.2012