Links for Keyword: Language



Links 21 - 40 of 498

by Aviva Rutkin "He moistened his lips uneasily." It sounds like a cheap romance novel, but this line is actually lifted from quite a different type of prose: a neuroscience study. Along with other sentences, including "Have you got enough blankets?" and "And what eyes they were", it was used to build the first map of how the brain processes the building blocks of speech – distinct units of sound known as phonemes. The map reveals that the brain devotes distinct areas to processing different types of phonemes. It might one day help efforts to read off what someone is hearing from a brain scan. "If you could see the brain of someone who is listening to speech, there is a rapid activation of different areas, each responding specifically to a particular feature the speaker is producing," says Nima Mesgarani, an electrical engineer at Columbia University in New York City. To build the map, Mesgarani's team turned to a group of volunteers who already had electrodes implanted in their brains as part of an unrelated treatment for epilepsy. The invasive electrodes sit directly on the surface of the brain, providing a unique and detailed view of neural activity. The researchers got the volunteers to listen to hundreds of snippets of speech taken from a database designed to provide an efficient way to cycle through a wide variety of phonemes, while monitoring the signals from the electrodes. As well as those already mentioned, sentences ran the gamut from "It had gone like clockwork" to "Junior, what on Earth's the matter with you?" to "Nobody likes snakes". © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 19196 - Posted: 02.01.2014

by Laura Sanders Growing up, I loved it when my parents read aloud the stories of the Berenstain Bears living in their treehouse. So while I was pregnant with my daughter, I imagined lots of cuddly quiet time with her in a comfy chair, reading about the latest adventures of Brother and Sister. Of course, reality soon let me know just how ridiculous that idea was. My newborn couldn’t see more than a foot away, cried robustly and frequently for mysterious reasons, and didn’t really understand words yet. Baby V was simply not interested in the latest dispatch from Bear County. When I started reading child development expert Elaine Reese’s new book Tell Me a Story, I realized that I was not the only one with idyllic story time dreams. Babies and toddlers are squirmy, active people with short attention spans. “Why, then, do we cling to this soft-focus view of storytelling when we know it is unrealistic?” she writes. These days, as Baby V closes in on the 1-year mark, she has turned into a most definite book lover. But it’s not the stories that enchant her. It’s holding the book, turning its pages back to front to back again, flipping it over and generally showing it who’s in charge. Every so often I can entice Baby V to sit on my lap with a book, but we never read through a full story. Instead, we linger on the page with all the junk food that the Hungry Caterpillar chomps through, sticking our fingers in the little holes in the pages. And we make Froggy pop in and out of the bucket. And we study the little goats as they climb up and up and up on the hay bales. © Society for Science & the Public 2000 - 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19155 - Posted: 01.21.2014

by Laura Sanders Baby V sailed through her first Christmas with the heart of a little explorer. She travelled to frigid upstate New York where she mashed snow in her cold little hands, tasted her great grandma’s twice baked potato and was licked clean by at least four dogs. And she opened lots of presents. It’s totally true what people say about little kids and gifts: The wrapping paper proved to be the biggest hit. But in the Christmas aftermath, one of Baby V’s new toys really caught her attention. She cannot resist her singing, talking book. The book has only three pages, but Baby V is smitten. Any time the book pipes up, which it seems to do randomly, she snaps to attention, staring at it, grabbing it and trying to figure it out. With a cutesy high-pitched voice, the book tells Baby V to “Turn the pa-AYE-ge!” and “This is fun!” Sometimes, the book bursts into little songs, all the while maintaining the cheeriest, squeakiest, sugarplum-drenched tone, even when it’s saying something kind of sad: “Three little kittens have lost their mittens and they began to cry!” The book maker (uh, author?) clearly knows how to tap into infants’ deep love for happy, squeaky noises, as does the creator of Elmo. Scientists are also noticing this trend, and are starting to figure out exactly why these sounds are so alluring to little ones. © Society for Science & the Public 2000 - 2014.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19111 - Posted: 01.08.2014

By Janelle Weaver Children with a large vocabulary experience more success at school and in the workplace. How much parents talk to their children plays a major role, but new research shows that it is not just the quantity but also the quality of parental input that matters. Helpful gestures and meaningful glances may allow kids to grasp concepts more easily than they otherwise would. In a study published in June in the Proceedings of the National Academy of Sciences USA, Erica Cartmill of the University of Chicago and her collaborators videotaped parents in their homes as they read books and played games with their 14- or 18-month-old children. The researchers created hundreds of 40-second muted video clips of these interactions. Another set of study participants watched the videos and used clues from the scenes to guess which nouns the parents were saying at various points in the sequences. The researchers used the accuracy of these guesses to rate how well a parent used nonverbal cues, such as gesturing toward and looking at objects, to clarify a word's meaning. Cartmill and her team found that the quality of parents' nonverbal signaling predicted the size of their children's vocabulary three years later. Surprisingly, socioeconomic status did not play a role in the quality of the parents' nonverbal signaling. This result suggests that the well-known differences in children's vocabulary size across income levels are likely the result of how much parents talk to their children, which is known to differ by income, rather than how much nonverbal help they offer during those interactions. © 2013 Scientific American

Related chapters from BP7e: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 15: Language and Our Divided Brain
Link ID: 19028 - Posted: 12.12.2013

Elizabeth Pennisi Speak easy. The language gene FOXP2 may work through a protein partner that stimulates the formation of excitatory connections (green) in nerve cells (magenta). Few genes have made the headlines as much as FOXP2. The first gene associated with language disorders, it was later implicated in the evolution of human speech. Girls make more of the FOXP2 protein, which may help explain their precociousness in learning to talk. Now, neuroscientists have figured out how one of its molecular partners helps Foxp2 exert its effects. The findings may eventually lead to new therapies for inherited speech disorders, says Richard Huganir, the neurobiologist at Johns Hopkins University School of Medicine in Baltimore, Maryland, who led the work. Foxp2 controls the activity of a gene called Srpx2, he notes, which helps some of the brain's nerve cells beef up their connections to other nerve cells. By establishing what SRPX2 does, researchers can look for defective copies of it in people suffering from problems talking or learning to talk. Until 2001, scientists were not sure how genes influenced language. Then Simon Fisher, a neurogeneticist now at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, and his colleagues fingered FOXP2 as the culprit in a family with several members who had trouble with pronunciation, putting words together, and understanding speech. These people cannot move their tongue and lips precisely enough to talk clearly, so even family members often can’t figure out what they are saying. It “opened a molecular window on the neural basis of speech and language,” Fisher says. © 2013 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18865 - Posted: 11.02.2013

Daniel Cossins It may not always seem like it, but humans usually take turns speaking. Research published today in Current Biology shows that marmosets, too, wait for each other to stop calling before they respond during extended vocal exchanges. The discovery could help to explain how humans came to be such polite conversationalists. Taking turns is a cornerstone of human verbal communication, and is common across all languages. But with no evidence that non-human primates 'converse' similarly, it was not clear how such behaviour evolved. The widely accepted explanation, known as the gestural hypothesis, suggests that humans might somehow have taken the neural machinery underlying cooperative manual gestures such as pointing to something to attract another person's attention to it, and applied that to vocalization. Not convinced, a team led by Daniel Takahashi, a neurobiologist at Princeton University in New Jersey, wanted to see whether another primate species is capable of cooperative calling. The researchers turned to common marmosets (Callithrix jacchus) because, like humans, they are prosocial — that is, generally friendly towards each other — and they communicate using vocalizations. The team recorded exchanges between pairs of marmosets that could hear but not see each other, and found that the monkeys never called at the same time. Instead, they always waited for roughly 5 seconds after a caller had finished before responding. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18808 - Posted: 10.19.2013

by Bruce Bower Babies may start to learn their mother tongues even before seeing their mothers’ faces. Newborns react differently to native and foreign vowel sounds, suggesting that language learning begins in the womb, researchers say. Infants tested seven to 75 hours after birth treated spoken variants of a vowel sound in their home language as similar, evidence that newborns regard these sounds as members of a common category, say psychologist Christine Moon of Pacific Lutheran University in Tacoma, Wash., and her colleagues. Newborns deemed different versions of a foreign vowel sound to be dissimilar and unfamiliar, the scientists report in an upcoming Acta Paediatrica. “It seems that there is some prenatal learning of speech sounds, but we do not yet know how much,” Moon says. Fetuses can hear outside sounds by about 10 weeks before birth. Until now, evidence suggested that prenatal learning was restricted to the melody, rhythm and loudness of voices (SN: 12/5/09, p. 14). Earlier investigations established that 6-month-olds group native but not foreign vowel sounds into categories. Moon and colleagues propose that, in the last couple months of gestation, babies monitor at least some vowels — the loudest and most expressive speech sounds — uttered by their mothers. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18769 - Posted: 10.09.2013

By Helen Briggs BBC News The brain has a critical window for language development between the ages of two and four, brain scans suggest. Environmental influences have their biggest impact before the age of four, as the brain's wiring develops to process new words, say UK and US scientists. The research in The Journal of Neuroscience suggests disorders causing language delay should be tackled early. It also explains why young children are good at learning two languages. The scientists, based at King's College London, and Brown University, Rhode Island, studied 108 children with normal brain development between the ages of one and six. They used brain scans to look at myelin - the insulation that develops from birth within the circuitry of the brain. To their surprise, they found the distribution of myelin is fixed from the age of four, suggesting the brain is most plastic in very early life. Any environmental influences on brain development will be strongest in infanthood, they predict. This explains why immersing children in a bilingual environment before the age of four gives them the best chance of becoming fluent in both languages, the research suggests. BBC © 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18768 - Posted: 10.09.2013

By Melissa Hogenboom Science reporter, BBC News Moving in time to a steady beat is closely linked to better language skills, a study suggests. People who performed better on rhythmic tests also showed enhanced neural responses to speech sounds. The researchers suggest that practising music could improve other skills, particularly speech. In the Journal of Neuroscience, the authors argue that rhythm is an integral part of language. "We know that moving to a steady beat is a fundamental skill not only for music performance but one that has been linked to language skills," said Nina Kraus, of the Auditory Neuroscience Laboratory at Northwestern University in Illinois. More than 100 teenagers were asked to tap their fingers along to a beat. Their accuracy was measured by how closely their responses matched the timing of a metronome. Next, in order to understand the biological basis of rhythmic ability, the team also measured the brainwaves of their participants with electrodes, a technique called electroencephalography. This was to observe the electrical activity in the brain in response to sound. Using this biological approach, the researchers found that those who had better musical training also had enhanced neural responses to speech sounds. In poorer readers this response was diminished. BBC © 2013
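
As a back-of-the-envelope illustration of the kind of scoring described above, the sketch below compares a run of simulated tap times against an ideal metronome and reports the average asynchrony. The beat interval, the simulated taps and the nearest-beat scoring rule are assumptions for illustration only, not the study's actual protocol.

```python
# Illustrative only: one simple way to score how closely a run of taps
# matches a metronome. The interval, tap times and scoring rule are
# made-up assumptions, not the method used in the study.
import numpy as np

beat_interval = 0.5                                   # metronome at 120 beats per minute
beats = np.arange(0, 10, beat_interval)               # ideal beat times, in seconds
taps = beats + np.random.normal(0, 0.03, beats.size)  # simulated taps with ~30 ms jitter

# Score each tap against the nearest beat and summarise the error.
errors = np.abs(taps[:, None] - beats[None, :]).min(axis=1)
print(f"mean absolute asynchrony: {errors.mean() * 1000:.1f} ms")
```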

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 5: The Sensorimotor System
Link ID: 18665 - Posted: 09.18.2013

By Joshua K. Hartshorne There are two striking features of language that any scientific theory of this quintessentially human behavior must account for. The first is that we do not all speak the same language. This would be a shocking observation were it not so commonplace. Communication systems in other animals tend to be universal, with any animal of the species able to communicate with any other. Likewise, many other fundamental human attributes show much less variation. Barring genetic or environmental mishap, we all have two eyes, one mouth, and four limbs. Around the world, we cry when we are sad, smile when we are happy, and laugh when something is funny, but the languages we use to describe this are different. The second striking feature of language is that when you consider the space of possible languages, most languages are clustered in a few tiny bands. That is, most languages are much, much more similar to one another than random variation would have predicted. Starting with pioneering work by Joseph Greenberg, scholars have cataloged over two thousand linguistic universals (facts true of all languages) and biases (facts true of most languages). For instance, in languages with fixed word order, the subject almost always comes before the object. If the verb describes a caused event, the entity that caused the event is the subject ("John broke the vase"), not the object (for example, "The vase broke John" meaning "John broke the vase"). In languages like English where the verb agrees with one of its subjects or objects, it typically agrees with the subject (compare "the child eats the carrots" with "the children eat the carrots") and not with its object (this would look like "the child eats the carrot" vs. "the child eat the carrots"), though in some languages, like Hungarian, the ending of the verb changes to match both the subject and object. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18664 - Posted: 09.18.2013

By Jason G. Goldman One of the key differences between humans and non-human animals, it is thought, is the ability to flexibly communicate our thoughts to others. The consensus has long been that animal communication, such as the food call of a chimpanzee or the alarm call of a lemur, is the result of an automatic reflex guided primarily by the inner physiological state of the animal. Chimpanzees, for example, can’t “lie” by producing a food call when there’s no food around and, it is thought, they can’t not emit a food call in an effort to hoard it all for themselves. By contrast, human communication via language is far more flexible and intentional. But recent research from across the animal kingdom has cast some doubt on the idea that animal communication always operates below the level of conscious control. Male chickens, for example, call more when females are around, and male Thomas langurs (a monkey native to Indonesia) continue shrieking their alarm calls until all females in their group have responded. Similarly, vervet monkeys are more likely to sound their alarm calls when there are other vervet monkeys around, and less likely when they’re alone. The same goes for meerkats. And possibly chimps, as well. Still, these sorts of “audience effects” can be explained by lower-level physiological factors. In yellow-bellied marmots, small ground squirrels native to the western US and southwestern Canada, the production of an alarm call correlates with glucocorticoid production, a physiological measurement of stress. And when researchers experimentally altered the synthesis of glucocorticoids in rhesus macaques, they found a change in the probability of alarm call production. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18605 - Posted: 09.04.2013

by Jacob Aron DOES your brain work like a dictionary? A mathematical analysis of the connections between definitions of English words has uncovered hidden structures that may resemble the way words and their meanings are represented in our heads. "We want to know how the mental lexicon is represented in the brain," says Stevan Harnad of the University of Quebec in Montreal, Canada. As every word in a dictionary is defined in terms of others, the knowledge needed to understand the entire lexicon is there, as long as you first know the meanings of an initial set of starter, or "grounding", words. Harnad's team reasoned that finding this minimal set of words and pinning down its structure might shed light on how human brains put language together. The team converted each of four different English dictionaries into a mathematical structure of linked nodes known as a graph. Each node in this graph represents a word, which is linked to the other words used to define it – so "banana" might be connected to "long", "bendy", "yellow" and "fruit". These words then link to others that define them. This enabled the team to remove all the words that don't define any others, leaving what they call a kernel. The kernel formed roughly 10 per cent of the full dictionary – though the exact percentages depended on the particular dictionary. In other words, 90 per cent of the dictionary can be defined using just the other 10 per cent. © Copyright Reed Business Information Ltd.
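
To make the graph construction concrete, here is a minimal sketch of the idea using a made-up ten-word dictionary: each word points to the set of words used in its definition, and words that are never used to define anything else are pruned away until a self-defining kernel remains. The toy dictionary and the iterate-to-a-fixed-point pruning rule are illustrative assumptions, not the team's data or their published algorithm.

```python
# A minimal sketch of the dictionary-graph idea described above.
# The tiny "dictionary" and the pruning rule are illustrative assumptions,
# not the researchers' actual data or method.

# Each word maps to the set of words used in its definition.
toy_dictionary = {
    "banana": {"long", "bendy", "yellow", "fruit"},
    "fruit":  {"food", "plant"},
    "yellow": {"colour"},
    "long":   {"size"},
    "bendy":  {"shape"},
    "food":   {"plant"},
    "plant":  {"food"},
    "colour": {"plant"},   # deliberately circular so something survives pruning
    "size":   {"shape"},
    "shape":  {"size"},
}

def defining_kernel(dictionary):
    """Repeatedly drop words that are not used in any remaining definition.

    Iterating to a fixed point is one simple reading of "remove all the
    words that don't define any others"; the published method may differ.
    """
    words = set(dictionary)
    while True:
        # Words still needed to define at least one remaining word.
        used = set().union(*(dictionary[w] & words for w in words))
        if used == words:          # fixed point: every remaining word defines another
            return words
        words = used

kernel = defining_kernel(toy_dictionary)
print(f"{len(kernel)} of {len(toy_dictionary)} words form the kernel: {sorted(kernel)}")
```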

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18587 - Posted: 08.31.2013

Beth Skwarecki Be careful what you say around a pregnant woman. As a fetus grows inside a mother's belly, it can hear sounds from the outside world—and can understand them well enough to retain memories of them after birth, according to new research. It may seem implausible that fetuses can listen to speech within the womb, but the sound-processing parts of their brain become active in the last trimester of pregnancy, and sound carries fairly well through the mother's abdomen. "If you put your hand over your mouth and speak, that's very similar to the situation the fetus is in," says cognitive neuroscientist Eino Partanen of the University of Helsinki. "You can hear the rhythm of speech, rhythm of music, and so on." A 1988 study suggested that newborns recognize the theme song from their mother's favorite soap opera. More recent studies have expanded on the idea of fetal learning, indicating that newborns have already familiarized themselves with the sounds of their parents' native language; one showed that American newborns seem to perceive Swedish vowel sounds as unfamiliar, sucking on a high-tech pacifier to hear more of the new sounds. Swedish infants showed the same response to English vowels. But those studies were based on babies' behaviors, which can be tricky to test. Partanen and his team decided instead to outfit babies with EEG sensors to look for neural traces of memories from the womb. "Once we learn a sound, if it's repeated to us often enough, we form a memory of it, which is activated when we hear the sound again," he explains. This memory speeds up recognition of sounds in the learner's native language and can be detected as a pattern of brain waves, even in a sleeping baby. © 2012 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18570 - Posted: 08.27.2013

By Glen Tellis, Rickson C. Mesquita, and Arjun G. Yodh Terrence Murgallis, a 20-year-old undergraduate student in the Department of Speech-Language Pathology at Misericordia University, has stuttered all his life and approached us recently about conducting brain research on stuttering. His timing was perfect because our research group, in collaboration with a team led by Dr. Arjun Yodh in the Department of Physics and Astronomy at the University of Pennsylvania, had recently deployed two novel optical methods to compare blood flow and hemoglobin concentration differences in the brains of those who stutter with those who are fluent. These noninvasive methods employ diffusing near-infrared light and have been dubbed near-infrared spectroscopy (NIRS) for concentration dynamics, and diffuse correlation spectroscopy (DCS) for flow dynamics. The near-infrared light readily penetrates through intact skull to probe cortical regions of the brain. The low power light has no known side-effects and has been successfully utilized for a variety of clinical studies in infants, children, and adults. DCS measures fluctuations of scattered light due to moving targets in the tissue (mostly red blood cells). The technique measures relative changes in cerebral blood flow. NIRS uses the relative transmission of different colors of light to detect hemoglobin concentration changes in the interrogated tissues. Though there are numerous diagnostic tools available to study brain activity, including positron emission tomography (PET), magnetic resonance imaging (MRI), and magnetoencephalography (MEG), these methods are often invasive and/or expensive to administer. In the particular case of electroencephalography (EEG), its low spatial resolution is a significant limitation for investigations of verbal fluency. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 5: The Sensorimotor System
Link ID: 18426 - Posted: 07.30.2013

By Michelle Warwicker BBC Nature Individual wild wolves can be recognised by just their howls with 100% accuracy, a study has shown. The team from Nottingham Trent University, UK, developed a computer program to analyse the vocal signatures of eastern grey wolves. Wolves roam huge home ranges, making it difficult for conservationists to track them visually. But the technology could provide a way for experts to monitor individual wolves by sound alone. "Wolves howl a lot in the wild," said PhD student Holly Root-Gutteridge, who led the research. "Now we can be sure... exactly which wolf it is that's howling." The team's findings are published in the journal Bioacoustics. Wolves use their distinctive calls to protect territory from rivals and to call to other pack members. "They enjoy it as a group activity," said Ms Root-Gutteridge, "When you get a chorus howl going they all join in." The team's computer program is unique because it analyses both volume (or amplitude) and pitch (or frequency) of wolf howls, whereas previously scientists had only examined the animals' pitch. "Think of [pitch] as the note the wolf is singing," explained Ms Root-Gutteridge. "What we've added now is the amplitude - or volume - which is basically how loud it's singing at different times." "It's a bit like language: If you put the stress in different places you form a different sound." BBC © 2013
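
As a rough sketch of the two feature tracks the article mentions, the code below extracts a pitch contour and an amplitude envelope from an audio recording and boils them down to a simple per-howl signature. The file name, frequency range, summary statistics and the choice of the librosa library are all assumptions made for illustration; they are not details of the Nottingham Trent program.

```python
# Illustrative sketch only: extract the two feature tracks the article
# describes (pitch and amplitude) from a howl recording. The file name,
# frequency range and use of librosa are assumptions, not the actual
# wolf-identification software.
import librosa
import numpy as np

y, sr = librosa.load("howl.wav", sr=None)        # hypothetical recording

# Pitch track ("the note the wolf is singing"), limited to a plausible howl range.
f0 = librosa.yin(y, fmin=150, fmax=2000, sr=sr)

# Amplitude envelope ("how loud it's singing at different times").
rms = librosa.feature.rms(y=y)[0]

def summarise(track):
    """Crude summary of a feature track: mean, std, min, max."""
    return np.array([track.mean(), track.std(), track.min(), track.max()])

# Combine both tracks into one fixed-length "vocal signature" per howl.
features = np.concatenate([summarise(f0), summarise(rms)])
print(features.shape)   # one 8-dimensional feature vector per recording
```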

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18401 - Posted: 07.23.2013

By Rebecca Morelle Science reporter, BBC World Service Scientists have found further evidence that dolphins call each other by "name". Research has revealed that the marine mammals use a unique whistle to identify each other. A team from the University of St Andrews in Scotland found that when the animals hear their own call played back to them, they respond. The study is published in the Proceedings of the National Academy of Sciences. Dr Vincent Janik, from the university's Sea Mammal Research Unit, said: "(Dolphins) live in this three-dimensional environment, offshore without any kind of landmarks and they need to stay together as a group. "These animals live in an environment where they need a very efficient system to stay in touch." It had long been suspected that dolphins use distinctive whistles in much the same way that humans use names. Previous research found that these calls were used frequently, and dolphins in the same groups were able to learn and copy the unusual sounds. But this is the first time that the animals' response to being addressed by their "name" has been studied. To investigate, researchers recorded a group of wild bottlenose dolphins, capturing each animal's signature sound. BBC © 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18400 - Posted: 07.23.2013

By NICHOLAS BAKALAR There are many dying languages in the world. But at least one has recently been born, created by children living in a remote village in northern Australia. Carmel O’Shannessy, a linguist at the University of Michigan, has been studying the young people’s speech for more than a decade and has concluded that they speak neither a dialect nor the mixture of languages called a creole, but a new language with unique grammatical rules. The language, called Warlpiri rampaku, or Light Warlpiri, is spoken only by people under 35 in Lajamanu, an isolated village of about 700 people in Australia’s Northern Territory. In all, about 350 people speak the language as their native tongue. Dr. O’Shannessy has published several studies of Light Warlpiri, the most recent in the June issue of Language. “Many of the first speakers of this language are still alive,” said Mary Laughren, a research fellow in linguistics at the University of Queensland in Australia, who was not involved in the studies. One reason Dr. O’Shannessy’s research is so significant, she said, “is that she has been able to record and document a ‘new’ language in the very early period of its existence.” Everyone in Lajamanu also speaks “strong” Warlpiri, an aboriginal language unrelated to English and shared with about 4,000 people in several Australian villages. Many also speak Kriol, an English-based creole developed in the late 19th century and widely spoken in northern Australia among aboriginal people of many different native languages. Lajamanu parents are happy to have their children learn English in school for use in the wider world, but eager to preserve Warlpiri as the language of their culture. There is an elementary school in Lajamanu, but most children go to boarding school in Darwin for secondary education. The language there is English. But they continue to speak Light Warlpiri among themselves. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18376 - Posted: 07.15.2013

By TIM REQUARTH and MEEHAN CRIST Babies learn to speak months after they begin to understand language. As they are learning to talk, they babble, repeating the same syllable (“da-da-da”) or combining syllables into a string (“da-do-da-do”). But when babies babble, what are they actually doing? And why does it take them so long to begin speaking? Insights into these mysteries of human language acquisition are now coming from a surprising source: songbirds. Researchers who focus on infant language and those who specialize in birdsong have teamed up in a new study suggesting that learning the transitions between syllables — from “da” to “do” and “do” to “da” — is the crucial bottleneck between babbling and speaking. “We’ve discovered a previously unidentified component of vocal development,” said the lead author, Dina Lipkind, a psychology researcher at Hunter College in Manhattan. “What we’re showing is that babbling is not only to learn sounds, but also to learn transitions between sounds.” The results provide insight into language acquisition and may eventually help shed light on human speech disorders. “Every time you find out something fundamental about the way development works, you gain purchase on what happens when children are at risk for disorder,” said D. Kimbrough Oller, a language researcher at the University of Memphis, who was not involved in the study. At first, however, the scientists behind these findings weren’t studying human infants at all. They were studying birds. “When I got into this, I never believed we were going to learn about human speech,” said Ofer Tchernichovski, a birdsong researcher at Hunter and the senior author of the study, published online on May 29 in the journal Nature. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18332 - Posted: 07.01.2013

Did that prairie dog just call you fat? Quite possibly. On The Current Friday, biologist Con Slobodchikoff described how he learned to understand what prairie dogs are saying to one another and discovered how eloquent they can be. Slobodchikoff, a professor emeritus at Northern Arizona University, told Erica Johnson, guest host of The Current, that he started studying prairie dog language 30 years ago after scientists reported that other ground squirrels had different alarm calls to warn each other of flying predators such as hawks and eagles, versus predators on the ground, such as coyotes or badgers. Prairie dogs, he said, were ideal animals to study because they are social animals that live in small co-operative groups within a larger colony, or "town," and they never leave their colony or territory, where they have built an elaborate underground complex of tunnels and burrows. In order to figure out what the prairie dogs were saying, Slobodchikoff and his colleagues trapped them and painted them with fur dye to identify each one. Then they recorded the animals' calls in the presence of different predators. They found that the animals make distinctive calls that can distinguish between a wide variety of animals, including coyotes, domestic dogs and humans. The patterns are so distinct, Slobodchikoff said, that human visitors that he brings to a prairie dog colony can typically learn them within two hours. But then Slobodchikoff noticed that the animals made slightly different calls when different individuals of the same species went by. © CBC 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18300 - Posted: 06.22.2013

by Emily Underwood Something odd happened when Shu Zhang was giving a presentation to her classmates at the Columbia Business School in New York City. Zhang, a Chinese native, spoke fluent English, yet in the middle of her talk, she glanced over at her Chinese professor and suddenly blurted out a word in Mandarin. "I meant to say a transition word like 'however,' but used the Chinese version instead," she says. "It really shocked me." Shortly afterward, Zhang teamed up with Columbia social psychologist Michael Morris and colleagues to figure out what had happened. In a new study, they show that reminders of one's homeland can hinder the ability to speak a new language. The findings could help explain why cultural immersion is the most effective way to learn a foreign tongue and why immigrants who settle within an ethnic enclave acculturate more slowly than those who surround themselves with friends from their new country. Previous studies have shown that cultural icons such as landmarks and celebrities act like "magnets of meaning," instantly activating a web of cultural associations in the mind and influencing our judgments and behavior, Morris says. In an earlier study, for example, he asked Chinese Americans to explain what was happening in a photograph of several fish, in which one fish swam slightly ahead of the others. Subjects first shown Chinese symbols, such as the Great Wall or a dragon, interpreted the fish as being chased. But individuals primed with American images of Marilyn Monroe or Superman, in contrast, tended to interpret the outlying fish as leading the others. This internally driven motivation is more typical of individualistic American values, some social psychologists say, whereas the more externally driven explanation of being pursued is more typical of Chinese culture. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18283 - Posted: 06.18.2013