Links for Keyword: Language


Links 21 - 40 of 481

By TIM REQUARTH and MEEHAN CRIST Babies learn to speak months after they begin to understand language. As they are learning to talk, they babble, repeating the same syllable (“da-da-da”) or combining syllables into a string (“da-do-da-do”). But when babies babble, what are they actually doing? And why does it take them so long to begin speaking? Insights into these mysteries of human language acquisition are now coming from a surprising source: songbirds. Researchers who focus on infant language and those who specialize in birdsong have teamed up in a new study suggesting that learning the transitions between syllables — from “da” to “do” and “do” to “da” — is the crucial bottleneck between babbling and speaking. “We’ve discovered a previously unidentified component of vocal development,” said the lead author, Dina Lipkind, a psychology researcher at Hunter College in Manhattan. “What we’re showing is that babbling is not only to learn sounds, but also to learn transitions between sounds.” The results provide insight into language acquisition and may eventually help shed light on human speech disorders. “Every time you find out something fundamental about the way development works, you gain purchase on what happens when children are at risk for disorder,” said D. Kimbrough Oller, a language researcher at the University of Memphis, who was not involved in the study. At first, however, the scientists behind these findings weren’t studying human infants at all. They were studying birds. “When I got into this, I never believed we were going to learn about human speech,” said Ofer Tchernichovski, a birdsong researcher at Hunter and the senior author of the study, published online on May 29 in the journal Nature. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18332 - Posted: 07.01.2013

Did that prairie dog just call you fat? Quite possibly. On The Current Friday, biologist Con Slobodchikoff described how he learned to understand what prairie dogs are saying to one another and discovered how eloquent they can be. Slobodchikoff, a professor emeritus at Northern Arizona University, told Erica Johnson, guest host of The Current, that he started studying prairie dog language 30 years ago after scientists reported that other ground squirrels had different alarm calls to warn each other of flying predators such as hawks and eagles, versus predators on the ground, such as coyotes or badgers. Prairie dogs, he said, were ideal animals to study because they are social animals that live in small co-operative groups within a larger colony, or "town," and they never leave their colony or territory, where they have built an elaborate underground complex of tunnels and burrows. In order to figure out what the prairie dogs were saying, Slobodchikoff and his colleagues trapped them and painted them with fur dye to identify each one. Then they recorded the animals' calls in the presence of different predators. They found that the animals make distinctive calls that can distinguish between a wide variety of animals, including coyotes, domestic dogs and humans. The patterns are so distinct, Slobodchikoff said, that human visitors he brings to a prairie dog colony can typically learn them within two hours. But then Slobodchikoff noticed that the animals made slightly different calls when different individuals of the same species went by. © CBC 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18300 - Posted: 06.22.2013

by Emily Underwood Something odd happened when Shu Zhang was giving a presentation to her classmates at the Columbia Business School in New York City. Zhang, a Chinese native, spoke fluent English, yet in the middle of her talk, she glanced over at her Chinese professor and suddenly blurted out a word in Mandarin. "I meant to say a transition word like 'however,' but used the Chinese version instead," she says. "It really shocked me." Shortly afterward, Zhang teamed up with Columbia social psychologist Michael Morris and colleagues to figure out what had happened. In a new study, they show that reminders of one's homeland can hinder the ability to speak a new language. The findings could help explain why cultural immersion is the most effective way to learn a foreign tongue and why immigrants who settle within an ethnic enclave acculturate more slowly than those who surround themselves with friends from their new country. Previous studies have shown that cultural icons such as landmarks and celebrities act like "magnets of meaning," instantly activating a web of cultural associations in the mind and influencing our judgments and behavior, Morris says. In an earlier study, for example, he asked Chinese Americans to explain what was happening in a photograph of several fish, in which one fish swam slightly ahead of the others. Subjects first shown Chinese symbols, such as the Great Wall or a dragon, interpreted the fish as being chased. But individuals primed with American images of Marilyn Monroe or Superman, in contrast, tended to interpret the outlying fish as leading the others. This internally driven motivation is more typical of individualistic American values, some social psychologists say, whereas the more externally driven explanation of being pursued is more typical of Chinese culture. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18283 - Posted: 06.18.2013

by Jennifer Viegas It goes a little something like this: A young male zebra finch, whose father taught him a song, shared that song with a brother, with the two youngsters then creating new tunes based on dad’s signature sound. The musical bird family, described in the latest Biology Letters, strengthens evidence that imitation between siblings and similar-aged youngsters facilitates vocal learning. The theory could help to explain why families with multiple same-sex siblings, such as the Bee Gees and the Jackson 5, often form such successful musical groups. Co-author Sébastien Derégnaucourt told Discovery News that, among humans, “infants have a visual preference for peers of the same age, which may facilitate imitation.” He added that it’s also “known that children can have an impact on each other’s language acquisition, such as in the case of the emergence of creole languages, whether spoken or signed, among children exposed to pidgin (a grammatically simplified form of a language).” Pidgin in this case is more like pigeon, since the study focused on birds. Derégnaucourt, an associate professor at University Paris West, collaborated with Manfred Gahr of the Max Planck Institute for Ornithology. The two researchers studied how the young male zebra finch from a bird colony in Germany learned from his avian dad. © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 18260 - Posted: 06.12.2013

by Tia Ghose, LiveScience Ape and human infants at comparable stages of development use similar gestures, such as pointing or lifting their arms to be picked up, new research suggests. Chimpanzee, bonobo and human babies rely mainly on gestures at about a year old, and gradually develop symbolic language (words, for human babies; and signs, for apes) as they get older. The findings suggest that “gesture plays an important role in the evolution of language, because it preceded language use across the species,” said study co-author Kristen Gillespie-Lynch, a developmental psychologist at the College of Staten Island in New York. The idea that language arose from gesture and a primitive sign language has a long history. French philosopher Étienne Bonnot de Condillac proposed the idea in 1746, and other scientists have noted that walking on two legs, which frees up the hands for gesturing, occurred earlier in human evolution than the changes to the vocal tract that enabled speaking. But although apes in captivity can learn some language from humans, wild apes don't gesture nearly as much as human infants do, making it difficult to tease out which commonalities in language development are biological and which are environmental. © 2013 Discovery Communications, LLC

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18247 - Posted: 06.08.2013

Karen Ravn Babies learn to babble before they learn to talk, at first simply repeating individual syllables (as in ba-ba-ba), and later stringing various syllables together (as in ba-da-goo). Songbirds exhibit similar patterns during song-learning, and the capacity for this sort of syllable sequencing is widely believed to be innate and to emerge full-blown — a theory that is challenged by a paper published on Nature's website today. A study of three species — zebra finches, Bengalese finches and humans — reports that none of the trio has it that easy. Their young all have to learn how to string syllables together slowly, pair by pair. “We discovered a previously unsuspected stage in human vocal development,” says first author Dina Lipkind, a psychologist now at Hunter College in New York. The researchers began by training young zebra finches (Taeniopygia guttata) to sing a song in which three syllables represented by the letters A, B and C came in the order ABC–ABC. They then trained the birds to sing a second song in which the same syllables were strung together in a different order, ACB–ACB. Eight out of seventeen birds managed to learn the second song, but they did not do so in one fell swoop. They learned it as a series of syllable pairs, first, say, learning to go from A to C, then from C to B and finally from B to A. And they didn’t do it overnight, as the innate-sequencing theory predicts. Instead, on average, they learned the first pair in about ten days, the second in four days and the third in two days. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18208 - Posted: 05.30.2013
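
The pair-by-pair learning described above lends itself to a toy illustration: if a looped song motif is written as a string of syllables, the transitions a bird must master are simply the set of adjacent syllable pairs. The Python sketch below is a minimal schematic of that idea, not the authors' model; the syllable labels come from the description above, and everything else is an assumption made for the example.

```python
# Toy model: a looped song motif is characterized by its syllable-to-syllable
# transitions; learning a new motif means acquiring the transitions it needs
# that the old motif did not contain. (Illustrative only; not the study's code.)

def transitions(motif):
    """Return the set of adjacent syllable pairs in a looped motif."""
    return {(motif[i], motif[(i + 1) % len(motif)]) for i in range(len(motif))}

song_a = ["A", "B", "C"]      # first tutored motif, sung as ABC-ABC-...
song_b = ["A", "C", "B"]      # second motif, ACB-ACB-...

known = transitions(song_a)   # pairs mastered while learning song A
needed = transitions(song_b)  # pairs required for song B
to_learn = needed - known     # transitions the bird still has to acquire

print("already known:", sorted(known))
print("still to learn:", sorted(to_learn))
# All three transitions of song B (A->C, C->B, B->A) are absent from song A,
# which is why the birds had to acquire them one pair at a time.
```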

By Bruce Bower Chaser isn’t just a 9-year-old border collie with her breed’s boundless energy, intense focus and love of herding virtually anything. She’s a grammar hound. In experiments directed by her owner, psychologist John Pilley of Wofford College in Spartanburg, S.C., Chaser demonstrated her grasp of the basic elements of grammar by responding correctly to commands such as “to ball take Frisbee” and its reverse, “to Frisbee take ball.” The dog had previous, extensive training to recognize classes of words including nouns, verbs and prepositions. “Chaser intuitively discovered how to comprehend sentences based on lots of background learning about different types of words,” Pilley says. He reports the results May 13 in Learning and Motivation. Throughout the first three years of Chaser’s life, Pilley and a colleague trained the dog to recognize and fetch more than 1,000 objects by name. Using praise and play as reinforcements, the researchers also taught Chaser the meaning of different types of words, such as verbs and prepositions. As a result, Chaser learned that phrases such as “to Frisbee” meant that she should take whatever was in her mouth to the named object. Exactly how the dog gained her command of grammar is unclear, however. Pilley suspects that Chaser first mentally linked each of two nouns she heard in a sentence to objects in her memory. Then the canine held that information in mind while deciding which of two objects to bring to which of two other objects. Pilley’s work follows controversial studies of grammar understanding in dolphins and a pygmy chimp. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18184 - Posted: 05.22.2013

by Elizabeth Norton If you've ever cringed when your parents said "groovy," you'll know that spoken language can have a brief shelf life. But frequently used words can persist for generations, even millennia, and similar sounds and meanings often turn up in very different languages. The existence of these shared words, or cognates, has led some linguists to suggest that seemingly unrelated language families can be traced back to a common ancestor. Now, a new statistical approach suggests that peoples from Alaska to Europe may share a linguistic forebear dating as far back as the end of the Ice Age, about 15,000 years ago. "Historical linguists study language evolution using cognates the way biologists use genes," explains Mark Pagel, an evolutionary theorist at the University of Reading in the United Kingdom. For example, although about 50% of French and English words derive from a common ancestor (such as "mère" and "mother"), with English and German the rate is closer to 70%—indicating that while all three languages are related, English and German have a more recent common ancestor. In the same vein, while humans, chimpanzees, and gorillas have common genes, the fact that humans share almost 99% of their DNA with chimps suggests that these two primate lineages split apart more recently. Because words don't have DNA, researchers use cognates found in different languages today to reconstruct the ancestral "protowords." Historical linguists have observed that over time, the sounds of words tend to change in regular patterns. For example, the p sound frequently changes to f, and the t sound to th—suggesting that the Latin word pater is, well, the father of the English word father. Linguists use these known rules to work backward in time, making a best guess at how the protoword sounded. They also track the rate at which words change. Using these phylogenetic principles, some researchers have dated many common words as far back as 9000 years ago. The ancestral language known as Proto-Indo-European, for example, gave rise to languages including Hindi, Russian, French, English, and Gaelic. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18130 - Posted: 05.07.2013
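
The regular sound correspondences mentioned above (p shifting to f, t to th) can be sketched in a few lines. The snippet below is a deliberately naive, character-level illustration of how such rules link Latin pater to English father; it is not Pagel's statistical method, and the rule table and word list are simplified assumptions chosen only for the example.

```python
# Naive illustration of regular sound shifts between Latin and English.
# (A sketch only; real correspondences operate on sounds, not spellings.)

SOUND_SHIFTS = {"p": "f", "t": "th"}   # simplified Grimm's-law-style rules

def apply_shifts(word):
    """Apply the consonant shifts to a naively spelled ancestral form."""
    return "".join(SOUND_SHIFTS.get(ch, ch) for ch in word)

latin_to_english = {
    "pater": "father",
    "tres": "three",
    "piscis": "fish",
}

for latin, english in latin_to_english.items():
    print(f"{latin:>7} -> {apply_shifts(latin):<8} (English: {english})")
# The shifted forms share their initial consonants with the English words --
# the kind of regular correspondence used to spot cognates and work backward
# toward protowords.
```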

By Christie Wilcox What does your voice say about you? Our voices communicate information far beyond what we say with our words. Like most animals, the sounds we produce have the potential to convey how healthy we are, what mood we’re in, even our general size. Some of these traits are important cues for potential mates, so much so that the sound of your voice can actually affect how good looking you appear to others. Which, really, brings up one darn good question: what makes a voice sound sexy? To find out, a team spearheaded by University College London researcher Yi Xu created synthetic male and female voices and altered their pitch, vocal quality and formant spacing (the spacing between the resonant frequencies of the vocal tract), the last of which is related to body size. They also adjusted the voices to be normal (relaxed), breathy, or pressed (tense). Through several listening experiments, they asked participants of the opposite gender to say which voice was the most attractive and which sounded the friendliest or happiest. The happiest-sounding voices were those with higher pitch, whether male or female, while the angriest were those with dense formants, indicating large body size. As for attractiveness, the men preferred a female voice that was high-pitched, breathy and had wide formant spacing, which indicates a small body size. The women, on the other hand, preferred a male voice with low pitch and dense formant spacing, indicative of larger size. But what really surprised the scientists is that women also preferred their male voices breathy. “The breathiness in the male voice attractiveness rating is intriguing,” explain the authors, “as it could be a way of neutralizing the aggressiveness associated with a large body size.”

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 18091 - Posted: 04.30.2013
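
The three parameters manipulated in the study above (pitch, formant spacing and breathiness) map naturally onto a crude source-filter synthesizer. The sketch below, which assumes NumPy and SciPy are available, builds a rough vowel-like sound from an impulse train plus aspiration noise passed through two resonators; the formant frequencies (700 and 1200 Hz), bandwidth and scaling factors are illustrative guesses, not the parameters the researchers used to build their stimuli.

```python
import numpy as np
from scipy.signal import lfilter

def synth_vowel(f0=120.0, formant_scale=1.0, breathiness=0.0,
                fs=16000, dur=0.5):
    """Crude source-filter vowel: impulse train at pitch f0, mixed with noise
    for breathiness, filtered by two resonators whose centre frequencies are
    scaled to mimic wider or narrower formant spacing."""
    n = int(fs * dur)
    source = np.zeros(n)
    source[::int(fs / f0)] = 1.0                       # glottal pulses at f0
    source = (1 - breathiness) * source + breathiness * 0.1 * np.random.randn(n)
    out = source
    for formant in (700 * formant_scale, 1200 * formant_scale):
        r = np.exp(-np.pi * 80.0 / fs)                 # ~80 Hz bandwidth
        theta = 2 * np.pi * formant / fs
        out = lfilter([1.0], [1.0, -2 * r * np.cos(theta), r * r], out)
    return out / np.max(np.abs(out))

# Illustrative contrasts from the study: high pitch, breathiness and wide
# formant spacing (small-body cues) vs. low pitch and dense formants.
small_body_voice = synth_vowel(f0=220, formant_scale=1.15, breathiness=0.3)
large_body_voice = synth_vowel(f0=100, formant_scale=0.85, breathiness=0.0)
```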

Posted by Christy Ullrich Elephants may use a variety of subtle movements and gestures to communicate with one another, according to researchers who have studied the big mammals in the wild for decades. To the casual human observer, a curl of the trunk, a step backward, or a fold of the ear may not have meaning. But to an elephant—and scientists like Joyce Poole—these are signals that convey vital information to individual elephants and the overall herd. Biologist and conservationist Joyce Poole and her husband, Petter Granli, both of whom direct ElephantVoices, a charity they founded to research and advocate for conservation of elephants in various sanctuaries in Africa, have developed an online database decoding hundreds of distinct elephant signals and gestures. The postures and movements underscore the sophistication of elephant communication, they say. Poole and Granli have also deciphered the meaning of acoustic communication in elephants, interpreting the different rumbling, roaring, screaming, trumpeting, and other idiosyncratic sounds that elephants make in concert with postures such as the positioning and flapping of their ears. Poole has studied elephants in Africa for more than 37 years, but only began developing the online gestures database in the past decade. Some of her research and conservation work has been funded by the National Geographic Society. “I noticed that when I would take out guests visiting Amboseli [National Park in Kenya] and was narrating the elephants’ behavior, I got to the point where 90 percent of the time, I could predict what the elephant was about to do,” Poole said in an interview. “If they stood a certain way, they were afraid and were about to retreat, or [in another way] they were angry and were about to move toward and threaten another.” © 1996-2012 National Geographic Society.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18078 - Posted: 04.27.2013

by Tanya Lewis The lip-smacking vocalizations gelada monkeys make are surprisingly similar to human speech, a new study finds. Many nonhuman primates demonstrate lip-smacking behavior, but geladas are the only ones known to make undulating sounds, known as "wobbles," at the same time. (The wobbling sounds a little like a human hum would sound if the volume were being turned on and off rapidly.) The findings show that lip-smacking could have been an important step in the evolution of human speech, researchers say. "Our finding provides support for the lip-smacking origins of speech because it shows that this evolutionary pathway is at least plausible," Thore Bergman of the University of Michigan in Ann Arbor and author of the study published today (April 8) in the journal Current Biology, said in a statement. "It demonstrates that nonhuman primates can vocalize while lip-smacking to produce speechlike sounds." Lip-smacking -- rapidly opening and closing the mouth and lips -- shares some of the features of human speech, such as rapid fluctuations in pitch and volume. Bergman first noticed the similarity while studying geladas in the remote mountains of Ethiopia. He would often hear vocalizations that sounded like human voices, but the vocalizations were actually coming from the geladas, he said. He had never come across other primates who made these sounds. But then he read a study on macaques from 2012 revealing how facial movements during lip-smacking were very speech-like, hinting that lip-smacking might be an initial step toward human speech. © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18014 - Posted: 04.10.2013

By Bruce Bower Babies take a critical step toward learning to speak before they can say a word or even babble. By 3 months of age, infants flexibly use three types of sounds — squeals, growls and vowel-like utterances — to express a range of emotions, from positive to neutral to negative, researchers say. Attaching sounds freely to different emotions represents a basic building block of spoken language, say psycholinguist D. Kimbrough Oller of the University of Memphis in Tennessee and his colleagues. Any word or phrase can signal any mental state, depending on context and pronunciation. Infants’ flexible manipulation of sounds to signal how they feel lays the groundwork for word learning, the scientists conclude April 1 in the Proceedings of the National Academy of Sciences. Language evolution took off once this ability emerged in human babies, Oller proposes. Ape and monkey researchers have mainly studied vocalizations that have one meaning, such as distress calls. “At this point, the conservative conclusion is that the human infant at 3 months is already vocally freer than has been demonstrated for any other primate at any age,” Oller says. Oller’s group videotaped infants playing and interacting with their parents in a lab room equipped with toys and furniture. Acoustic analyses identified nearly 7,000 utterances made by infants up to 1 year of age that qualified as laughs, cries, squeals, growls or vowel-like sounds. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17975 - Posted: 04.02.2013

by Lizzie Wade With its complex interweaving of symbols, structure, and meaning, human language stands apart from other forms of animal communication. But where did it come from? A new paper suggests that researchers look to bird songs and monkey calls to understand how human language might have evolved from simpler, preexisting abilities. One reason that human language is so unique is that it has two layers, says Shigeru Miyagawa, a linguist at the Massachusetts Institute of Technology (MIT) in Cambridge. First, there are the words we use, which Miyagawa calls the lexical structure. "Mango," "Amanda," and "eat" are all components of the lexical structure. The rules governing how we put those words together make up the second layer, which Miyagawa calls the expression structure. Take these three sentences: "Amanda eats the mango," "Eat the mango, Amanda," and "Did Amanda eat the mango?" Their lexical structure—the words they use—is essentially identical. What gives the sentences different meanings is the variation in their expression structure, or the different ways those words fit together. The more Miyagawa studied the distinction between lexical structure and expression structure, "the more I started to think, 'Gee, these two systems are really fundamentally different,' " he says. "They almost seem like two different systems that just happen to be put together," perhaps through evolution. One preliminary test of his hypothesis, Miyagawa knew, would be to show that the two systems exist separately in nature. So he started studying the many ways that animals communicate, looking for examples of lexical or expressive structures. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17861 - Posted: 03.02.2013

Regina Nuzzo Despite having brains that are still largely under construction, babies born up to three months before full term can already distinguish between spoken syllables in much the same way that adults do, an imaging study has shown. Full-term babies — those born after 37 weeks' gestation — display remarkable linguistic sophistication soon after they are born: they recognize their mother’s voice, can tell apart two languages they’d heard before birth and remember short stories read to them while in the womb. But exactly how these speech-processing abilities develop has been a point of contention. “The question is: what is innate, and what is due to learning immediately after birth?” asks neuroscientist Fabrice Wallois of the University of Picardy Jules Verne in Amiens, France. To answer that, Wallois and his team needed to peek at neural processes already taking place before birth. It is tough to study fetuses, however, so they turned to their same-age peers: babies born 2–3 months premature. At that point, neurons are still migrating to their final destinations; the first connections between upper brain areas are snapping into place; and links have just been forged between the inner ear and cortex. To test these neural pathways, the researchers played soft voices to premature babies while they were asleep in their incubators a few days after birth, then monitored their brain activity using a non-invasive optical imaging technique called functional near-infrared spectroscopy. They were looking for the tell-tale signals of surprise that brains display — for example, when they suddenly hear male and female voices intermingled after hearing a long run of only female voices. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17852 - Posted: 02.26.2013

By Athena Andreadis Genes are subject to multiple layers of regulation. An early regulatory point is transcription. During this process, regulatory proteins bind to DNA regions (promoters and enhancers) that direct gene expression. These DNA/protein complexes attract the transcription apparatus, which docks next to the complex and proceeds linearly downstream, producing the heterogeneous nuclear (hn) RNA that is encoded by the gene linked to the promoter. The hnRNA is then spliced and either becomes structural/regulatory RNA or is translated into protein. Transcription factors are members of large clans that arose from ancestral genes that went through successive duplications and then diverged to fit specific niches. One such family of about fifty members is called FOX. Their DNA-binding portion is shaped like a butterfly, which has given this particular motif the monikers of forkhead box or winged helix. The activities of the FOX proteins extend widely in time and region. One of the FOX family members is FOXP2, as notorious as Fox News – except for different reasons: FOXP2 has become entrenched in popular consciousness as “the language gene”. As is the case with all such folklore, there is some truth in this; but as is the case with everything in biology, reality is far more complex. FOXP2, the first gene found to “affect language” (more on this anon), was discovered in 2001 by several converging observations and techniques. The clincher was a large family (code name KE), some of whose members had severe articulation and grammatical deficits with no accompanying sensory or cognitive impairment. The inheritance is autosomal dominant: one copy of the mutated gene is sufficient to confer the trait. When the researchers definitively identified the FOXP2 gene, they found that the version of FOXP2 carried by the affected KE members has a single point mutation that alters an invariant residue in its forkhead domain, thereby influencing the protein’s binding to its DNA targets. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 17845 - Posted: 02.25.2013
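
The KE-family finding described above ultimately rests on lining up protein sequences and spotting a single changed residue. The sketch below shows that comparison in its simplest form; the sequences are short made-up placeholders, not real FOXP2 sequences, and the function name is arbitrary.

```python
# Locate amino-acid differences between two aligned protein sequences.
# The sequences below are invented placeholders, NOT real FOXP2 data.

def point_differences(seq_a, seq_b):
    """Return (1-based position, residue_a, residue_b) for every mismatch."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned and equal length"
    return [(i + 1, a, b)
            for i, (a, b) in enumerate(zip(seq_a, seq_b))
            if a != b]

reference = "MTSLQEARHGWDKFVNP"   # placeholder "typical" sequence
variant   = "MTSLQEAHHGWDKFVNP"   # placeholder sequence with one substitution

for pos, ref, var in point_differences(reference, variant):
    print(f"position {pos}: {ref} -> {var}")
# A single mismatch of this kind, at an otherwise invariant position in the
# DNA-binding forkhead domain, is what was reported in the affected KE members.
```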

Regina Nuzzo Say the word 'rutabaga', and you have just performed a complex dance with many body parts — lips, tongue, jaw and larynx — in a flash of time. Yet little is known about how the brain coordinates these vocal-tract movements to keep even the clumsiest of us from constantly tripping over our own tongues. A study of unprecedented detail now provides a glimpse into the neural codes that control the production of smooth speech. The results help to clarify how the brain uses muscles to organize sounds and hint at why tongue twisters are so tricky. The work is published today in Nature. Most neural information about the vocal tract has come from watching people with brain damage or from non-invasive imaging methods, neither of which provides detailed data in time or space. A team of US researchers has now collected brain-activity data on a scale of millimetres and milliseconds. The researchers recorded brain activity in three people with epilepsy using electrodes that had been implanted in the patients' cortices as part of routine presurgical electrophysiological sessions. They then watched to see what happened when the patients articulated a series of syllables. Sophisticated multi-dimensional statistical procedures enabled the researchers to sift through the huge amounts of data and uncover how basic neural building blocks — patterns of neurons firing in different places over time — combine to form the speech sounds of American English. The patterns for consonants were quite different from those for vowels, even though both classes of sounds “use the exact same parts of the vocal tract”, says author Edward Chang, a neuroscientist at the University of California, San Francisco. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17838 - Posted: 02.23.2013
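
The "sophisticated multi-dimensional statistical procedures" mentioned above boil down, at their core, to reducing large electrode-by-time recordings to a few recurring activity patterns. The sketch below uses simulated data and plain principal component analysis from scikit-learn as a loose stand-in for the authors' actual methods; the electrode count, trial count and noise level are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Loose stand-in for dimensionality reduction of neural recordings:
# summarize simulated electrode-by-time trials with a few components.
rng = np.random.default_rng(0)
n_electrodes, n_timepoints, n_trials = 64, 200, 120

# Simulate trials as mixtures of three latent spatiotemporal patterns + noise.
latent_patterns = rng.normal(size=(3, n_electrodes * n_timepoints))
weights = rng.normal(size=(n_trials, 3))
trials = weights @ latent_patterns + 0.5 * rng.normal(
    size=(n_trials, n_electrodes * n_timepoints))

pca = PCA(n_components=5)
scores = pca.fit_transform(trials)      # each trial summarized by 5 numbers
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
# Trials for different syllables could then be compared in this low-dimensional
# space, e.g. to ask whether consonants and vowels occupy separate clusters.
```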

by Sara Reardon Like the musicians in an orchestra, our lips, tongue and vocal cords coordinate with one another to pronounce sounds in speech. A map of the brain regions that conduct the process shows how each is carefully controlled – and how mistakes can slip into our speech. It's long been thought that the brain coordinates our speech by simultaneously controlling the movement of these "articulators". In the 1860s, Alexander Melville Bell proposed that speech could be broken down in this way and designed a writing system for deaf people based on the principle. But brain imaging had not had the resolution to see how neurons control these movements – until now. Using electrodes implanted in the brains of three people to treat their epilepsy, Edward Chang and his colleagues at the University of California, San Francisco, mapped brain activity in each volunteer's motor cortex as they pronounced words in American English. The team had expected that each speech sound would be controlled by a unique collection of neurons, and so each would map to a different part of the brain. Instead, they found that the same groups of neurons were activated for all sounds. Each group controls muscles in the tongue, lips, jaw and larynx. The neurons – in the sensorimotor cortex – coordinated with one another to fire in different combinations. Each combination resulted in a very precise placing of the articulators to generate a given sound. Surprisingly, although each articulator can theoretically take on an almost limitless range of shapes, the neurons imposed strict limits on the range of possibilities. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17837 - Posted: 02.23.2013

by Michael Balter Despite recent progress toward sexual equality, it's still a man's world in many ways. But numerous studies show that when it comes to language, girls start off with better skills than boys. Now, scientists studying a gene linked to the evolution of vocalizations and language have for the first time found clear sex differences in its activity in both rodents and humans, with the gene making more of its protein in girls. But some researchers caution against drawing too many conclusions about the gene's role in human and animal communication from this study. Back in 2001, the world of language research was rocked by the discovery that a gene called FOXP2 appeared to be essential for the production of speech. Researchers cautioned that FOXP2 is probably only one of many genes involved in human communication, but later discoveries seemed to underscore its importance. For example, the human version of the protein produced by the gene differs by two amino acids from that of chimpanzees, and seems to have undergone natural selection since the human and chimp lineages split between 5 million and 7 million years ago. (Neandertals were found to have the same version as Homo sapiens, fueling speculation that our evolutionary cousins also had language). In the years since, FOXP2 has been implicated in the vocalizations of other animals, including mice, singing birds, and even bats. During this same time period, a number of studies have confirmed past research suggesting that young girls learn language faster and earlier than boys, producing their first words and sentences sooner and accumulating larger vocabularies faster. But the reasons behind such findings are highly controversial because it is difficult to separate the effects of nature versus nurture, and the differences gradually disappear as children get older. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 8: Hormones and Sex; Chapter 15: Language and Our Divided Brain
Link ID: 17830 - Posted: 02.20.2013

by Virginia Morell Every bottlenose dolphin has its own whistle, a high-pitched, warbly "eeee" that tells the other dolphins that a particular individual is present. Dolphins are excellent vocal mimics, too, able to copy even quirky computer-generated sounds. So, scientists have wondered if dolphins can copy each other's signature whistles—which would be very similar to people saying each other's names. Now, an analysis of whistles recorded from hundreds of wild bottlenose dolphins confirms that they can indeed "name" each other, and suggests why they do so—a discovery that may help researchers translate more of what these brainy marine mammals are squeaking, trilling, and clicking about. "It's a wonderful study, really solid," says Peter Tyack, a marine mammal biologist at the University of St. Andrews in the United Kingdom who was not involved in this project. "Having the ability to learn another individual's name is … not what most animals do. Monkeys have food calls and calls that identify predators, but these are inherited, not learned sounds." The new work "opens the door to understanding the importance of naming." Scientists discovered the dolphins' namelike whistles almost 50 years ago. Since then, researchers have shown that infant dolphins learn their individual whistles from their mothers. A 1986 paper by Tyack did show that a pair of captive male dolphins imitated each other's whistles, and in 2000, Vincent Janik, who is also at St. Andrews, succeeded in recording matching calls among 10 wild dolphins. "But without more animals, you couldn't draw a conclusion about what was going on," says Richard Connor, a cetacean biologist at the University of Massachusetts, Dartmouth. Why, after all, would the dolphins need to copy another dolphin's whistle? © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17829 - Posted: 02.20.2013

By Erin Wayman BOSTON — “Birdbrain” may not be much of an insult: Humans and songbirds share genetic changes affecting parts of the brain related to singing and speaking, new research shows. The finding may help scientists better understand how human language evolved, as well as unravel the causes of speech impairments. Neurobiologist Erich Jarvis of Duke University Medical Center in Durham, N.C., and colleagues discovered roughly 80 genes that turn on and off in similar ways in the brains of humans and songbirds such as zebra finches and parakeets. This gene activity, which occurs in brain regions involved in the ability to imitate sounds and to speak and sing, is not present in birds that can’t learn songs or mimic sounds. Jarvis described the work February 15 at the annual meeting of the American Association for the Advancement of Science. Songbirds are good models for language because the birds are born not knowing the songs they will sing as adults. Like human infants learning a specific language, the birds have to observe and imitate others to pick up the tunes they croon. The ancestors of humans and songbirds split some 300 million years ago, suggesting the two groups independently acquired a similar capacity for song. With the new results and other recent research, Jarvis said, “I feel more comfortable that we can link structures in songbird brains to analogous structures in human brains due to convergent evolution.” © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17818 - Posted: 02.18.2013
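
The comparison behind "roughly 80 genes that turn on and off in similar ways" can be pictured as a set overlap: genes whose expression shifts in the same direction both in human speech-related regions and in songbird song nuclei. The sketch below uses made-up gene names and directions purely to illustrate that computation; it is not data from Jarvis's study.

```python
# Toy overlap of gene-expression changes in two species (placeholder data).
human_speech_regions = {"GENE_A": "up", "GENE_B": "down",
                        "GENE_C": "up", "GENE_D": "up"}
finch_song_nuclei    = {"GENE_A": "up", "GENE_B": "down",
                        "GENE_C": "down", "GENE_E": "up"}

# Keep genes regulated in both species *and* in the same direction.
convergent = {gene
              for gene in human_speech_regions.keys() & finch_song_nuclei.keys()
              if human_speech_regions[gene] == finch_song_nuclei[gene]}

print("convergently regulated:", sorted(convergent))   # -> GENE_A, GENE_B
# In the real analysis this comparison spans thousands of genes, and the
# overlapping set is absent from birds that cannot learn songs.
```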