Links for Keyword: Language



Links 61 - 80 of 496

By Breanna Draxler Infants are known for their impressive ability to learn language, which most scientists say kicks in somewhere around the six-month mark. But a new study indicates that language recognition may begin even earlier, while the baby is still in the womb. Using a creative means of measurement, researchers found that babies could already recognize their mother tongue by the time they left their mothers’ bodies. The researchers tested American and Swedish newborns between seven hours and three days old. Each baby was given a pacifier hooked up to a computer. When the baby sucked on the pacifier, it triggered the computer to produce a vowel sound—sometimes in English and sometimes in Swedish. The vowel sound was repeated until the baby stopped sucking. When the baby resumed sucking, a new vowel sound would start. The sucking was used as a metric to determine the babies’ interest in each vowel sound. More interest meant more sucks, according to the study soon to be published in Acta Paediatrica. In both countries, babies sucked on the pacifier longer when they heard foreign vowel sounds than when they heard vowels from their mothers’ native language. The researchers suggest that this is because the babies already recognized the vowels from their mothers and were keen to learn new ones. Hearing develops in a baby’s brain at around the 30th week of pregnancy, which leaves the last 10 weeks of gestation for babies to put that newfound ability to work. Baby brains are quick to learn, so a better understanding of these mechanisms may help researchers figure out how to improve the learning process for the rest of us.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17648 - Posted: 01.05.2013

By Ben Thomas They called him “Diogenes the Cynic,” because “cynic” meant “dog-like,” and he had a habit of basking naked on the lawn while his fellow philosophers talked on the porch. While they debated the mysteries of the cosmos, Diogenes preferred to soak up some rays – some have called him the Jimmy Buffett of ancient Greece. Anyway, one morning, the great philosopher Plato had a stroke of insight. He caught everyone’s attention, gathered a crowd around him, and announced his deduction: “Man is defined as a hairless, featherless, two-legged animal!” Whereupon Diogenes abruptly leaped up from the lawn, dashed off to the marketplace, and burst back onto the porch carrying a plucked chicken – which he held aloft as he shouted, “Behold: I give you… Man!” I’m sure Plato was less than thrilled at this stunt, but the story reminds us that these early philosophers were still hammering out the most basic tenets of the science we now know as taxonomy: the grouping of objects from the world into abstract categories. This technique of chopping up reality wasn’t invented in ancient Greece, though. In fact, as a recent study shows, it’s fundamental to the way our brains work. At the most basic level, we don’t really perceive separate objects at all – we perceive our nervous systems’ responses to a boundless flow of electromagnetic waves and biochemical reactions. Our brains slot certain neural response patterns into sensory pathways we call “sight,” “smell” and so on – but abilities like synesthesia and echolocation show that even the boundaries between our senses can be blurry. © 2012 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 14: Attention and Consciousness
Link ID: 17640 - Posted: 12.27.2012

Philip Ball Learning to read Chinese might seem daunting to Westerners used to an alphabetic script, but brain scans of French and Chinese native speakers show that people harness the same brain centres for reading across cultures. The findings are published today in the Proceedings of the National Academy of Sciences. Reading involves two neural systems: one that recognizes the shape of the word and a second that assesses the physical movements used to make the marks on a page, says study leader Stanislas Dehaene, a cognitive neuroscientist at the National Institute of Health and Medical Research in Gif-sur-Yvette, France. But it has been unclear whether the brain networks responsible for reading are universal or culturally distinct. Previous studies have suggested that alphabetic writing systems (such as French) and logographic ones (such as Chinese, in which single characters represent entire words) might engage different networks in the brain. To explore this question, Dehaene and his colleagues used functional magnetic resonance imaging to examine brain activity in Chinese and French people while they read their native languages. The researchers found that both Chinese and French people use the visual and gestural systems while reading their native language, but with different emphases that reflect the different demands of each language. © 2012 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17540 - Posted: 11.27.2012

By Bruce Bower MINNEAPOLIS — Baboons use the order of regularly appearing letter pairs to tell words from nonwords, new evidence suggests. Psychologist Jonathan Grainger of the University of Aix-Marseille reported earlier this year that baboons can learn to tell real four-letter words from nonsense words (SN: 5/5/12, p. 5). But whether these animals detect signature letter combinations that enable their impressive word feats has been tough to demonstrate. Monkeys that previously learned to excel at this task are more likely to mistake for real words those nonwords created by reversing two letters of a word they already recognize, much as literate people do, Grainger reported November 16 at the Psychonomic Society annual meeting. “Letters played a role in baboons’ word knowledge,” Grainger concluded. “This is a starting point for determining how they discriminate words from nonwords.” Grainger’s team tested the six baboons from their original investigation. Some of the monkeys had previously learned to recognize many more words than others. In new trials, the best word identifiers made more errors than their less successful peers when shown nonwords that differed from known words by a reversed letter combination, such as WSAP instead of WASP and KTIE instead of KITE. Grainger’s team fed the same series of words and nonwords into a computer simulation of the experiment. The computer model best reproduced the animals’ learning curves when endowed with a capacity for tracking letter combinations. © Society for Science & the Public 2000 - 2012
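The “capacity for tracking letter combinations” in the simulation can be illustrated with a toy sketch along the lines of the open-bigram coding Grainger has advocated in his reading research. This is only an illustrative reconstruction, not the published model: the training vocabulary, the scoring rule and the test items below are invented.

```python
from collections import Counter
from itertools import combinations, chain

def open_bigrams(word):
    """Ordered letter pairs, not necessarily adjacent: WASP -> WA, WS, WP, AS, AP, SP."""
    return [a + b for a, b in combinations(word, 2)]

def train(words):
    """Count how often each ordered letter pair occurs across the known words."""
    return Counter(chain.from_iterable(open_bigrams(w) for w in words))

def wordlikeness(item, counts):
    """Fraction of the item's ordered letter pairs seen during training."""
    pairs = open_bigrams(item)
    return sum(1 for p in pairs if counts[p] > 0) / len(pairs)

# Hypothetical training vocabulary standing in for words a baboon has learned.
known = ["WASP", "KITE", "DONE", "LAND", "VAST", "WISE"]
counts = train(known)

# A transposed nonword (WSAP) keeps most ordered pairs of the real word (WASP),
# so a learner that only tracks letter pairs rates it far more word-like than a
# scrambled string -- the kind of reversal error the best-performing baboons made.
for item in ["WASP", "WSAP", "KTIE", "PQZV"]:
    print(item, round(wordlikeness(item, counts), 2))
```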

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17511 - Posted: 11.20.2012

by Douglas Heaven MEANINGS of words can be hard to locate when they are on the tip of your tongue, let alone in the brain. Now, for the first time, patterns of brain activity have been matched with the meanings of specific words. The discovery is a step forward in our attempts to read thoughts from brain activity alone, and could help doctors identify awareness in people with brain damage. Machines can already eavesdrop on our brains to distinguish which words we are listening to, but Joao Correia at Maastricht University in the Netherlands wanted to get beyond the brain's representation of the words themselves and identify the activity that underlies their meaning. Somewhere in the brain, he hypothesised, written and spoken representations of words are integrated and meaning is processed. "We wanted to find the hub," he says. To begin the hunt, Correia and his colleagues used an fMRI scanner to study the brain activity of eight bilingual volunteers as they listened to the names of four animals, bull, horse, shark and duck, spoken in English. The team monitored patterns of neural activity in the left anterior temporal cortex - known to be involved in a range of semantic tasks - and trained an algorithm to identify which word a participant had heard based on the pattern of activity. Since the team wanted to pinpoint activity related to meaning, they picked words that were as similar as possible - all four contain one syllable and belong to the concept of animals. They also chose words that would have been learned at roughly the same time of life and took a similar time for the brain to process. © Copyright Reed Business Information Ltd.
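The decoding step described above — training an algorithm to identify which of four animal words was heard from the pattern of activity — can be sketched as a standard pattern classifier over voxel values. The snippet below is a generic illustration under assumed data shapes, not the pipeline Correia's group used; the array sizes, the injected signal and the choice of a linear support-vector classifier are all placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: one row per trial, one column per voxel in the region of
# interest (the left anterior temporal cortex would be the real ROI).
n_trials_per_word, n_voxels = 20, 150
words = ["bull", "horse", "shark", "duck"]

X = rng.normal(size=(n_trials_per_word * len(words), n_voxels))
y = np.repeat(words, n_trials_per_word)

# Inject a weak word-specific signal into a few voxels so the toy example has
# something to decode; a real study relies on genuine stimulus-evoked patterns.
for i, w in enumerate(words):
    X[y == w, i * 5:(i + 1) * 5] += 0.8

# Cross-validated linear classifier: accuracy reliably above chance (0.25 here)
# is the kind of evidence used to argue a region carries word-specific information.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)
print("mean decoding accuracy:", scores.mean().round(2), "(chance = 0.25)")
```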

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 17502 - Posted: 11.17.2012

By Maureen McCarthy October 30th marked the five-year anniversary of the death of my friend Washoe. Washoe was a wonderful friend. She was confident and self-assured. She was a matriarch, a mother figure not only to her adopted son but to others as well. She was kind and caring, but she didn’t suffer fools. Washoe also happened to be known around the world as the first nonhuman to acquire aspects of a human language, American Sign Language. You see, my friend Washoe was a chimpanzee. Washoe was born somewhere in West Africa around September 1965. Much like the chimpanzees I study here in Uganda, Washoe’s mother cared for her during infancy, nursing her, carrying her, and sharing her sleeping nests with her. That changed when her mother was killed so baby Washoe could be taken from her forest home, then bought by the US Air Force for use in biomedical testing. Washoe was not used in this sort of testing, however. Instead, Drs. Allen and Beatrix Gardner of the University of Nevada chose her among the young chimpanzees at Holloman Aeromedical Laboratory to be cross-fostered. Cross-fostering occurs when a youngster of one species is reared by adults of a different species. In this case, humans raised Washoe exactly as if she were a deaf human child. She learned to brush her teeth, drink from cups, and dress herself, in the same way a human child learns these behaviors. She was also exposed to humans using sign language around her. In fact, humans used only American Sign Language (ASL) to communicate in Washoe’s presence, avoiding spoken English so as to replicate as accurately as possible the learning environment of a young human exposed to sign language. © 2012 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17451 - Posted: 11.05.2012

SAM KIM, Associated Press SEOUL, South Korea (AP) — An elephant in a South Korean zoo is using his trunk to pick up not only food, but also human vocabulary. An international team of scientists confirmed Friday what the Everland Zoo has been saying for years: Their 5.5-ton tusker Koshik has an unusual and possibly unprecedented talent. The 22-year-old Asian elephant can reproduce five Korean words by tucking his trunk inside his mouth to modulate sound, the scientists said in a joint paper published online in Current Biology. They said he may have started imitating human speech because he was lonely. Koshik can reproduce "annyeong" (hello), "anja" (sit down), "aniya" (no), "nuwo" (lie down) and "joa" (good), the paper says. One of the researchers said there is no conclusive evidence that Koshik understands the sounds he makes, although the elephant does respond to words like "anja." Everland Zoo officials in the city of Yongin said Koshik also can imitate "ajik" (not yet), but the researchers haven't confirmed the accomplishment. Koshik is particularly good with vowels, with a rate of similarity of 67 percent, the researchers said. For consonants he scores only 21 percent. Researchers said the clearest scientific evidence that Koshik is deliberately imitating human speech is that the sound frequency of his words matches that of his trainers. © 2012 Hearst Communications Inc.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17446 - Posted: 11.03.2012

A screening test for children starting school that could accurately detect early signs of a persistent stutter is a step closer, experts say. The Wellcome Trust team says a specific speech test accurately predicts whose stutter will persist into their teens. About one in 20 develops a stutter before age five - but just one in 100 stutters as a teen, and identifying these children has so far been difficult. Campaigners said it was key for children to be diagnosed early. Stuttering tends to start at about three years old. Four out of five will recover without intervention, often within a couple of years. But for one in five, their stutter will persist, and early therapy can be of significant benefit. The researchers, based at University College London, used a test developed in the US called SSI-3 (stuttering severity instrument). In earlier work, they followed eight-year-olds with a stutter into their teens. They found that the SSI-3 test was a reliable indicator of who would still have a stutter and who would recover - while other indicators such as family history, which have been used, were less so. BBC © 2012

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17432 - Posted: 10.29.2012

Ewen Callaway “Who told me to get out?” asked a diver, surfacing from a tank in which a whale named NOC lived. The beluga’s caretakers had heard what sounded like garbled phrases emanating from the enclosure before, and it suddenly dawned on them that the whale might be imitating the voices of his human handlers. The outbursts — described today in Current Biology and originally at a 1985 conference — began in 1984 and lasted for about four years, until NOC hit sexual maturity, says Sam Ridgway, a marine biologist at the National Marine Mammal Foundation in San Diego, California. He believes that NOC learned to imitate humans by listening to them speak underwater and on the surface. A few animals, including various marine mammals, songbirds and humans, routinely learn and imitate the songs and sounds of others. And Ridgway’s wasn’t the first observation of vocal mimicry in whales. In the 1940s, scientists heard wild belugas (Delphinapterus leucas) making calls that sounded like “children shouting in the distance”. Decades later, keepers at the Vancouver Aquarium in Canada described a beluga that seemed to utter his name, Lagosi. Ridgway’s team recorded NOC, who is named after the tiny midges colloquially known as no-see-ums found near where he was legally caught by Inuit hunters in Manitoba, Canada, in the late 1970s. His human-like calls are several octaves lower than normal whale calls, a similar pitch to human speech. After training NOC to 'speak' on command, Ridgway’s team determined that he makes the sounds by increasing the pressure of the air that courses through his nasal cavities. They think that he then modifies the sounds by manipulating the shape of his phonic lips, small vibrating structures that sit above each nasal cavity. © 2012 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17407 - Posted: 10.23.2012

Mo Costandi The growth pattern of long-range connections in the brain predicts how a child’s reading skills will develop, according to research published today in Proceedings of the National Academy of Sciences. Literacy requires the integration of activity in brain areas involved in vision, hearing and language. These areas are distributed throughout the brain, so efficient communication between them is essential for proficient reading. Jason Yeatman, a neuroscientist at Stanford University in California, and his colleagues studied how the development of reading ability relates to growth in the brain’s white-matter tracts, the bundles of nerve fibres that connect distant regions of the brain. They tested how the reading skills of 55 children aged between 7 and 12 developed over a three-year period. There were big differences in reading ability between the children, and these differences persisted — the children who were weak readers relative to their peers at the beginning of the study were still weak three years later. The researchers also scanned the brains of 39 of the children at least three times during the same period, to visualize the growth of two major white-matter tracts: the arcuate fasciculus, which connects the brain's language centres, and the inferior longitudinal fasciculus, which links the language centres with the parts of the brain that process visual information. © 2012 Nature Publishing Group,

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17350 - Posted: 10.09.2012

By Frances Stead Sellers Carolyn McCaskill remembers exactly when she discovered that she couldn’t understand white people. It was 1968, she was 15 years old, and she and nine other deaf black students had just enrolled in an integrated school for the deaf in Talladega, Ala. When the teacher got up to address the class, McCaskill was lost. “I was dumbfounded,” McCaskill recalls through an interpreter. “I was like, ‘What in the world is going on?’ ” The teacher’s quicksilver hand movements looked little like the sign language McCaskill had grown up using at home with her two deaf siblings and had practiced at the Alabama School for the Negro Deaf and Blind, just a few miles away. It wasn’t a simple matter of people at the new school using unfamiliar vocabulary; they made hand movements for everyday words that looked foreign to McCaskill and her fellow black students. So, McCaskill says, “I put my signs aside.” She learned entirely new signs for such common nouns as “shoe” and “school.” She began to communicate words such as “why” and “don’t know” with one hand instead of two as she and her black friends had always done. She copied the white students who lowered their hands to make the signs for “what for” and “know” closer to their chins than to their foreheads. And she imitated the way white students mouthed words at the same time as they made manual signs for them. Whenever she went home, McCaskill carefully switched back to her old way of communicating. © 1996-2012 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17271 - Posted: 09.18.2012

By Bruce Bower Indo-European languages range throughout Europe and South Asia and even into Iran, yet the roots of this widespread family of tongues have long been controversial. A new study adds support to the proposal that the language family expanded out of Anatolia — what’s now Turkey — between 8,000 and 9,500 years ago, as early farmers sought new land to cultivate. A team led by psychologist Quentin Atkinson of the University of Auckland in New Zealand came to that conclusion by using a mathematical method to calculate the most likely starting point and pattern of geographic spread for a large set of Indo-European languages. The new investigation, published in the Aug. 24 Science, rejects a decades-old idea that Kurgan warriors riding horses and driving chariots out of West Asia’s steppes 5,000 to 6,000 years ago triggered the rise of Indo-European speakers. “Our analysis finds decisive support for an Anatolian origin over a steppe origin of Indo-European languages,” Atkinson says. He and his colleagues generated likely family trees for Indo-European languages, much as geneticists use DNA from different individuals to reconstruct humankind’s genetic evolution. Many linguists, who compare various features of languages to establish their historical connections, consider Atkinson’s statistical approach unreliable (SN: 11/19/11, p. 22). © Society for Science & the Public 2000 - 2012

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17200 - Posted: 08.25.2012

by Douglas Heaven Watch where you look – it can be used to predict what you'll say. A new study shows that it is possible to guess what sentences people will use to describe a scene by tracking their eye movements. Moreno Coco and Frank Keller at the University of Edinburgh, UK, presented 24 volunteers with a series of photo-realistic images depicting indoor scenes such as a hotel reception. They then tracked the sequence of objects that each volunteer looked at after being asked to describe what they saw. Other than being prompted with a keyword, such as "man" or "suitcase", participants were free to describe the scene however they liked. Some typical sentences included "the man is standing in the reception of a hotel" or "the suitcase is on the floor". The order in which a participant's gaze settled on objects in each scene tended to mirror the order of nouns in the sentence used to describe it. "We were surprised there was such a close correlation," says Keller. Given that multiple cognitive processes are involved in sentence formation, Coco says "it is remarkable to find evidence of similarity between speech and visual attention". The team used the discovery to see if they could predict what sentences would be used to describe a scene based on eye movement alone. They developed an algorithm that was able to use the eye gazes recorded from the previous experiment to predict the correct sentence from a choice of 576 descriptions. © Copyright Reed Business Information Ltd.
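One way to picture the prediction step — choosing which candidate description best matches a recorded gaze sequence — is to score each sentence by how well its noun order agrees with the order in which objects were fixated. The sketch below is a toy reconstruction of that idea only; the actual algorithm, scenes and scoring used by Coco and Keller are not detailed here, and all object names and sentences in the snippet are invented.

```python
def noun_order(sentence, known_objects):
    """Order in which known scene objects are mentioned in a candidate sentence."""
    return [w for w in sentence.lower().split() if w in known_objects]

def agreement(gaze_sequence, mention_sequence):
    """Fraction of adjacent gaze pairs whose relative order the sentence preserves."""
    rank = {obj: i for i, obj in enumerate(mention_sequence)}
    pairs = list(zip(gaze_sequence, gaze_sequence[1:]))
    if not pairs:
        return 0.0
    ordered = sum(1 for a, b in pairs
                  if a in rank and b in rank and rank[a] < rank[b])
    return ordered / len(pairs)

# Invented example: objects fixated in a hotel-reception scene, in viewing order.
objects = {"man", "suitcase", "reception", "floor"}
gaze = ["man", "reception", "suitcase"]

candidates = [
    "the man is standing in the reception of a hotel",
    "the suitcase is on the floor next to the man",
]

# Pick the description whose noun order best mirrors the order of fixations.
best = max(candidates, key=lambda s: agreement(gaze, noun_order(s, objects)))
print(best)
```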

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17087 - Posted: 07.25.2012

By Brian Palmer, A friend recently asked me whether black bears in Appalachia have Southern accents and whether they have trouble understanding black bears raised in Canada or Alaska. Taken literally, those are notions more fit for a Disney movie than a scientist. In a more abstract sense, however, it’s a profound inquiry that fascinates zoologists and psychologists alike. Is communication learned or innate in nonhuman animals? Can geographically distant groups of the same species develop local culture: unique ways of eating, playing and talking to each other? I posed those questions to Darcy Kelley, a Columbia University professor who studies animal communications. “In most species, communication appears to have a genetic basis,” she said. “Regional accents can only develop in the small number of species that learn their vocalizations from others.” Research suggests that the overwhelming majority of animals are born knowing how to speak their species’s language. It doesn’t really matter where those animals are born or raised, because their speech seems to be mostly imprinted in their genetic code. University of Pennsylvania psychologist Bob Seyfarth and biologist Dorothy Cheney conducted a classic experiment on this question. They switched a pair of rhesus macaques and a pair of Japanese macaques shortly after birth, so that the Japanese macaque parents raised the rhesus macaque babies, and the rhesus macaque parents raised the Japanese macaque babies. © 1996-2012 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16973 - Posted: 06.27.2012

Ewen Callaway Ten years ago, psychiatrist David Skuse met a smart, cheery five-year-old boy whose mother was worried because her son had trouble following conversations with other kids at school. He struggled to remember names and often couldn’t summon the words for simple things such as toothpaste. Skuse is an expert on language development at the Institute of Child Health at University College London, but he had never encountered anything like the boy’s condition. His scientific curiosity was piqued when the mother, who is bilingual, mentioned her own difficulties remembering words in English, her native tongue. Her mother, too, had trouble recounting what had happened in television shows she had just seen. “The family history of this word-finding problem needs further investigation,” Skuse noted at the time. About half the members of this family, dubbed JR, share similar language deficits and brain abnormalities. These deficits seem to be inherited across at least four generations, Skuse and his colleagues report today in Proceedings of the Royal Society B. Identifying the genetic basis of the family’s unique trait — which they call the ‘family problem’ — could help to explain how our brains link words to objects, concepts and ideas. “It’s like that tip-of-the-tongue moment; you’re struggling to find a word,” says Josie Briscoe, a cognitive psychologist at the University of Bristol, UK, and a study co-author. The researchers tested eight JR family members on a number of language and memory tasks to better understand their deficits. © 2012 Nature Publishing Group,

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16943 - Posted: 06.20.2012

By Morgen E. Peck Like a musician tuning a guitar, adults subconsciously listen to their own voice to tune the pitch, volume and pronunciation of their speech. Young children just learning how to talk, however, do not, a new study suggests. The result offers clues about how kids learn language—and how parents can help. Past studies have shown that adults use aural feedback to tweak their pronunciation. Ewen MacDonald, a professor at the Center for Applied Hearing Research at the Technical University of Denmark, decided to see if toddlers could do this as well. He had adults and children play a video game in which they guided the actions of a robot by repeating the word “bed.” Through headphones, the players heard their own voice every time they spoke—but with the frequency spectrum shifted so they heard “bad” instead of “bed.” MacDonald found that adults and four-year-old kids tried to compensate for the error by pronouncing the word more like “bid,” but two-year-olds never budged from “bed,” suggesting that they were not using auditory feedback to monitor their speech. Although the toddlers may have been suppressing the feedback mechanism, MacDonald thinks they might not start listening to themselves until they are older. If that is the case, they may rely heavily on feedback from adults to gauge how they sound. Indeed, most parents and caregivers naturally repeat the words toddlers say, as praise and encouragement. “I think the real take-home message is that social interaction is important for the development of speech,” MacDonald says. “The general act of talking and interacting with the child in a normal way is the key.” © 2012 Scientific American,

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 16871 - Posted: 06.05.2012

Analysis by Jennifer Viegas Monkeys smack their lips during friendly face-to-face encounters, and now a new study says that this seemingly simple behavior may be tied to human speech. Previously experts thought the evolutionary origins of human speech came from primate vocalizations, such as chimpanzee hoots or monkey coos. But now scientists suspect that rapid, controlled movements of the tongue, lips and jaw -- all of which are needed for lip smacking -- were more important to the emergence of speech. For the study, published in the latest Current Biology, W. Tecumseh Fitch and colleagues used x-ray movies to investigate lip-smacking gestures in macaque monkeys. Mother monkeys do this a lot with their infants, so it seems to be kind of an endearing thing, perhaps like humans going goo-goo-goo in a baby's face while playing. (Monkeys will also vibrate their lips to make a raspberry sound.) Monkey lip-smacking, however, makes a quiet sound, similar to "p p p p". It's not accompanied by phonation, meaning sound produced by vocal cord vibration in the larynx. Fitch, who is head of the Department of Cognitive Biology at the University of Vienna, and his team determined that lip-smacking is a complex behavior that requires rapid, coordinated movements of the lips, jaw, tongue and the hyoid bone (which provides the supporting skeleton for the larynx and tongue). © 2012 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16869 - Posted: 06.02.2012

by Catherine de Lange WHEN I was just a newborn baby, my mother gazed down at me in her hospital bed and did something that was to permanently change the way my brain developed. Something that would make me better at learning, multitasking and solving problems. Eventually, it might even protect my brain against the ravages of old age. Her trick? She started speaking to me in French. At the time, my mother had no idea that her actions would give me a cognitive boost. She is French and my father English, so they simply felt it made sense to raise me and my brothers as bilingual. Yet as I've grown up, a mass of research has emerged to suggest that speaking two languages may have profoundly affected the way I think. Cognitive enhancement is just the start. According to some studies, my memories, values, even my personality, may change depending on which language I happen to be speaking. It is almost as if the bilingual brain houses two separate minds. All of which highlights the fundamental role of language in human thought. "Bilingualism is quite an extraordinary microscope into the human brain," says neuroscientist Laura Ann Petitto of Gallaudet University in Washington DC. The view of bilingualism has not always been this rosy. For many parents like mine, the decision to raise children speaking two languages was controversial. Since at least the 19th century, educators warned that it would confuse the child, making them unable to learn either language properly. At best, they thought the child would become a jack-of-all-trades and master of none. At worst, they suspected it might hinder other aspects of development, resulting in a lower IQ. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 16765 - Posted: 05.08.2012

By JAMES GORMAN First things first: The hyrax is not the Lorax. And it does not speak for the trees. It sings, on its own behalf. The hyrax is a bit Seussian, however. It looks something like a rabbit, something like a woodchuck. Its closest living relatives are elephants, manatees and dugongs. And male rock hyraxes have complex songs like those of birds, in the sense that males will go on for 5 or 10 minutes at a stretch, apparently advertising themselves. One might have expected that the hyrax would have some unusual qualities — the animals’ feet, if you know how to look at them, resemble elephants’ toes, the experts say. And their visible front teeth are actually very small tusks. But Arik Kershenbaum and colleagues at the University of Haifa and Tel Aviv University have found something more surprising. Hyraxes’ songs have something rarely found in mammals: syntax that varies according to where the hyraxes live, geographical dialects in how they put their songs together. The research was published online Wednesday in The Proceedings of the Royal Society B. Bird songs show syntax, this ordering of song components in different ways, but very few mammals make such orderly, arranged sounds. Whales, bats and some primates show syntax in their vocalizations, but nobody really expected such sophistication from the hyrax, and it was thought that the selection of sounds in the songs was relatively random. © 2012 The New York Times Company

Related chapters from BP7e: Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 8: Hormones and Sex; Chapter 15: Language and Our Divided Brain
Link ID: 16685 - Posted: 04.21.2012

by Erin Loury Monkeys banging on typewriters might never reproduce the works of Shakespeare, but they may be closer to reading Hamlet than we thought. Scientists have trained baboons to distinguish English words from similar-looking nonsense words by recognizing common arrangements of letters. The findings indicate that visual word recognition, the most basic step of reading, can be learned without any knowledge of spoken language. The study builds on the idea that when humans read, our brains first have to recognize individual letters, as well as their order. "We're actually reading words much like we identify any kind of visual object, like we identify chairs and tables," says study author Jonathan Grainger, a cognitive psychologist at France's National Center for Scientific Research, and Aix-Marseille University in Marseille, France. Our brains construct words from an assembly of letters like they recognize tables as a surface connected to four legs, Grainger says. Much of the current reading research has stressed that readers first need to have familiarity with spoken language, so they can connect sounds (or hand signs for the hearing-impaired) with the letters they see. Grainger and his colleagues wanted to test whether it's possible to learn the letter patterns of words without any idea of what they mean or how they sound—that is, whether a monkey could do it. The scientists used a unique testing facility, consisting of a trailer with computers set up next to a baboon enclosure, which the animals could enter at will and perform trials on the touch-screen computers for as long as they pleased. The computers cued up the appropriate test for each of the six study baboons using microchips in their arms. When letters appeared on the monitor, the baboons got wheat rewards for touching the correct shape on the screen: an oval on the right of the screen if the word was real, and a cross on the left if it was nonsense. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16647 - Posted: 04.14.2012