Links for Keyword: Language



Links 41 - 60 of 496

Karen Ravn Babies learn to babble before they learn to talk, at first simply repeating individual syllables (as in ba-ba-ba), and later stringing various syllables together (as in ba-da-goo). Songbirds exhibit similar patterns during song-learning, and the capacity for this sort of syllable sequencing is widely believed to be innate and to emerge full-blown — a theory that is challenged by a paper published on Nature's website today. A study of three species — zebra finches, Bengalese finches and humans — reports that none of the trio has it that easy. Their young all have to learn how to string syllables together slowly, pair by pair. “We discovered a previously unsuspected stage in human vocal development,” says first author Dina Lipkind, a psychologist now at Hunter College in New York. The researchers began by training young zebra finches (Taeniopygia guttata) to sing a song in which three syllables represented by the letters A, B and C came in the order ABC–ABC. They then trained the birds to sing a second song in which the same syllables were strung together in a different order, ACB–ACB. Eight out of seventeen birds managed to learn the second song, but they did not do so in one fell swoop. They learned it as a series of syllable pairs, first, say, learning to go from A to C, then from C to B and finally from B to A. And they didn’t do it overnight, as the innate-sequencing theory predicts. Instead, on average, they learned the first pair in about ten days, the second in four days and the third in two days. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18208 - Posted: 05.30.2013

By Bruce Bower Chaser isn’t just a 9-year-old border collie with her breed’s boundless energy, intense focus and love of herding virtually anything. She’s a grammar hound. In experiments directed by her owner, psychologist John Pilley of Wofford College in Spartanburg, S.C., Chaser demonstrated her grasp of the basic elements of grammar by responding correctly to commands such as “to ball take Frisbee” and its reverse, “to Frisbee take ball.” The dog had previous, extensive training to recognize classes of words including nouns, verbs and prepositions. “Chaser intuitively discovered how to comprehend sentences based on lots of background learning about different types of words,” Pilley says. He reports the results May 13 in Learning and Motivation. Throughout the first three years of Chaser’s life, Pilley and a colleague trained the dog to recognize and fetch more than 1,000 objects by name. Using praise and play as reinforcements, the researchers also taught Chaser the meaning of different types of words, such as verbs and prepositions. As a result, Chaser learned that phrases such as “to Frisbee” meant that she should take whatever was in her mouth to the named object. Exactly how the dog gained her command of grammar is unclear, however. Pilley suspects that Chaser first mentally linked each of two nouns she heard in a sentence to objects in her memory. Then the canine held that information in mind while deciding which of two objects to bring to which of two other objects. Pilley’s work follows controversial studies of grammar understanding in dolphins and a pygmy chimp. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18184 - Posted: 05.22.2013

by Elizabeth Norton If you've ever cringed when your parents said "groovy," you'll know that spoken language can have a brief shelf life. But frequently used words can persist for generations, even millennia, and similar sounds and meanings often turn up in very different languages. The existence of these shared words, or cognates, has led some linguists to suggest that seemingly unrelated language families can be traced back to a common ancestor. Now, a new statistical approach suggests that peoples from Alaska to Europe may share a linguistic forebear dating as far back as the end of the Ice Age, about 15,000 years ago. "Historical linguists study language evolution using cognates the way biologists use genes," explains Mark Pagel, an evolutionary theorist at the University of Reading in the United Kingdom. For example, although about 50% of French and English words derive from a common ancestor (such as "mère" and "mother"), with English and German the rate is closer to 70%—indicating that while all three languages are related, English and German have a more recent common ancestor. In the same vein, while humans, chimpanzees, and gorillas have common genes, the fact that humans share almost 99% of their DNA with chimps suggests that these two primate lineages split apart more recently. Because words don't have DNA, researchers use cognates found in different languages today to reconstruct the ancestral "protowords." Historical linguists have observed that over time, the sounds of words tend to change in regular patterns. For example, the p sound frequently changes to f, and the t sound to th—suggesting that the Latin word pater is, well, the father of the English word father. Linguists use these known rules to work backward in time, making a best guess at how the protoword sounded. They also track the rate at which words change. Using these phylogenetic principles, some researchers have dated many common words as far back as 9000 years ago. The ancestral language known as Proto-Indo-European, for example, gave rise to languages including Hindi, Russian, French, English, and Gaelic. © 2010 American Association for the Advancement of Science.
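
The sound-change reasoning above lends itself to a small worked example. The Python sketch below is only an illustrative toy: the correspondence table (a fragment of Grimm's-law-style changes) and the word list are assumptions made for illustration, not data or code from the study. It simply shows the basic move of undoing regular sound changes to guess at an ancestral form.

```python
# Illustrative toy: undo a few regular sound changes to guess a proto-form.
# The correspondence table and words below are assumptions for illustration,
# not the statistical model or data used in the study described above.

# Simplified fragment of Grimm's-law-style changes (ancestral -> Germanic):
FORWARD_CHANGES = {"p": "f", "t": "th", "k": "h"}

# Invert the table so we can work backward in time from modern forms.
BACKWARD_CHANGES = {new: old for old, new in FORWARD_CHANGES.items()}

def reconstruct(modern_word: str) -> str:
    """Apply the inverse correspondences to a modern word to produce a
    rough guess at its ancestral form."""
    word = modern_word
    # Undo longer correspondences first so "th" is handled before "h".
    for new_sound in sorted(BACKWARD_CHANGES, key=len, reverse=True):
        word = word.replace(new_sound, BACKWARD_CHANGES[new_sound])
    return word

if __name__ == "__main__":
    for english in ["father", "three", "heart"]:
        print(f"{english:>7} -> proto-form guess: {reconstruct(english)}")
```

Real reconstructions weigh many such correspondences across whole vocabularies and model how quickly different words change, which is what the phylogenetic methods mentioned above contribute.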

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18130 - Posted: 05.07.2013

By Christie Wilcox What does your voice say about you? Our voices communicate information far beyond what we say with our words. As in most animals, the sounds we produce have the potential to convey how healthy we are, what mood we’re in, even our general size. Some of these traits are important cues for potential mates, so much so that the sound of your voice can actually affect how good-looking you appear to others. Which, really, brings up one darn good question: what makes a voice sound sexy? To find out, a team spearheaded by University College London researcher Yi Xu created synthetic male and female voices and altered their pitch, vocal quality and formant spacing (the spacing between the vocal tract's resonant frequencies), the last of which is related to body size. They also adjusted the voices to be normal (relaxed), breathy, or pressed (tense). Through several listening experiments, they asked participants of the opposite gender to say which voice was the most attractive and which sounded the friendliest or happiest. The happiest-sounding voices were those with higher pitch, whether male or female, while the angriest were those with dense formants, indicating large body size. As for attractiveness, the men preferred a female voice that was high-pitched, breathy and had wide formant spacing, which indicates a small body size. The women, on the other hand, preferred a male voice with low pitch and dense formant spacing, indicative of larger size. But what really surprised the scientists is that women also preferred their male voices breathy. “The breathiness in the male voice attractiveness rating is intriguing,” explain the authors, “as it could be a way of neutralizing the aggressiveness associated with a large body size.”

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 18091 - Posted: 04.30.2013

Posted by Christy Ullrich Elephants may use a variety of subtle movements and gestures to communicate with one another, according to researchers who have studied the big mammals in the wild for decades. To the casual human observer, a curl of the trunk, a step backward, or a fold of the ear may not have meaning. But to an elephant—and scientists like Joyce Poole—these are signals that convey vital information to individual elephants and the overall herd. Biologist and conservationist Joyce Poole and her husband, Petter Granli, both of whom direct ElephantVoices, a charity they founded to research and advocate for conservation of elephants in various sanctuaries in Africa, have developed an online database decoding hundreds of distinct elephant signals and gestures. The postures and movements underscore the sophistication of elephant communication, they say. Poole and Granli have also deciphered the meaning of acoustic communication in elephants, interpreting the different rumbling, roaring, screaming, trumpeting, and other idiosyncratic sounds that elephants make in concert with postures such as the positioning and flapping of their ears. Poole has studied elephants in Africa for more than 37 years, but only began developing the online gestures database in the past decade. Some of her research and conservation work has been funded by the National Geographic Society. “I noticed that when I would take out guests visiting Amboseli [National Park in Kenya] and was narrating the elephants’ behavior, I got to the point where 90 percent of the time, I could predict what the elephant was about to do,” Poole said in an interview. “If they stood a certain way, they were afraid and were about to retreat, or [in another way] they were angry and were about to move toward and threaten another.” © 1996-2012 National Geographic Society.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18078 - Posted: 04.27.2013

by Tanya Lewis The lip-smacking vocalizations gelada monkeys make are surprisingly similar to human speech, a new study finds. Many nonhuman primates demonstrate lip-smacking behavior, but geladas are the only ones known to make undulating sounds, known as "wobbles," at the same time. (The wobbling sounds a little like a human hum would sound if the volume were being turned on and off rapidly.) The findings show that lip-smacking could have been an important step in the evolution of human speech, researchers say. "Our finding provides support for the lip-smacking origins of speech because it shows that this evolutionary pathway is at least plausible," Thore Bergman of the University of Michigan in Ann Arbor, author of the study published today (April 8) in the journal Current Biology, said in a statement. "It demonstrates that nonhuman primates can vocalize while lip-smacking to produce speechlike sounds." Lip-smacking -- rapidly opening and closing the mouth and lips -- shares some of the features of human speech, such as rapid fluctuations in pitch and volume. Bergman first noticed the similarity while studying geladas in the remote mountains of Ethiopia. He would often hear vocalizations that sounded like human voices, but the vocalizations were actually coming from the geladas, he said. He had never come across other primates who made these sounds. But then he read a study on macaques from 2012 revealing how facial movements during lip-smacking were very speech-like, hinting that lip-smacking might be an initial step toward human speech. © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18014 - Posted: 04.10.2013

By Bruce Bower Babies take a critical step toward learning to speak before they can say a word or even babble. By 3 months of age, infants flexibly use three types of sounds — squeals, growls and vowel-like utterances — to express a range of emotions, from positive to neutral to negative, researchers say. Attaching sounds freely to different emotions represents a basic building block of spoken language, say psycholinguist D. Kimbrough Oller of the University of Memphis in Tennessee and his colleagues. Any word or phrase can signal any mental state, depending on context and pronunciation. Infants’ flexible manipulation of sounds to signal how they feel lays the groundwork for word learning, the scientists conclude April 1 in the Proceedings of the National Academy of Sciences. Language evolution took off once this ability emerged in human babies, Oller proposes. Ape and monkey researchers have mainly studied vocalizations that have one meaning, such as distress calls. “At this point, the conservative conclusion is that the human infant at 3 months is already vocally freer than has been demonstrated for any other primate at any age,” Oller says. Oller’s group videotaped infants playing and interacting with their parents in a lab room equipped with toys and furniture. Acoustic analyses identified nearly 7,000 utterances made by infants up to 1 year of age that qualified as laughs, cries, squeals, growls or vowel-like sounds. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17975 - Posted: 04.02.2013

by Lizzie Wade With its complex interweaving of symbols, structure, and meaning, human language stands apart from other forms of animal communication. But where did it come from? A new paper suggests that researchers look to bird songs and monkey calls to understand how human language might have evolved from simpler, preexisting abilities. One reason that human language is so unique is that it has two layers, says Shigeru Miyagawa, a linguist at the Massachusetts Institute of Technology (MIT) in Cambridge. First, there are the words we use, which Miyagawa calls the lexical structure. "Mango," "Amanda," and "eat" are all components of the lexical structure. The rules governing how we put those words together make up the second layer, which Miyagawa calls the expression structure. Take these three sentences: "Amanda eats the mango," "Eat the mango, Amanda," and "Did Amanda eat the mango?" Their lexical structure—the words they use—is essentially identical. What gives the sentences different meanings is the variation in their expression structure, or the different ways those words fit together. The more Miyagawa studied the distinction between lexical structure and expression structure, "the more I started to think, 'Gee, these two systems are really fundamentally different,' " he says. "They almost seem like two different systems that just happen to be put together," perhaps through evolution. One preliminary test of his hypothesis, Miyagawa knew, would be to show that the two systems exist separately in nature. So he started studying the many ways that animals communicate, looking for examples of lexical or expressive structures. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17861 - Posted: 03.02.2013

Regina Nuzzo Despite having brains that are still largely under construction, babies born up to three months before full term can already distinguish between spoken syllables in much the same way that adults do, an imaging study has shown. Full-term babies — those born after 37 weeks' gestation — display remarkable linguistic sophistication soon after they are born: they recognize their mother’s voice, can tell apart two languages they’d heard before birth and remember short stories read to them while in the womb. But exactly how these speech-processing abilities develop has been a point of contention. “The question is: what is innate, and what is due to learning immediately after birth?” asks neuroscientist Fabrice Wallois of the University of Picardy Jules Verne in Amiens, France. To answer that, Wallois and his team needed to peek at neural processes already taking place before birth. It is tough to study fetuses, however, so they turned to their same-age peers: babies born 2–3 months premature. At that point, neurons are still migrating to their final destinations; the first connections between upper brain areas are snapping into place; and links have just been forged between the inner ear and cortex. To test these neural pathways, the researchers played soft voices to premature babies while they were asleep in their incubators a few days after birth, then monitored their brain activity using a non-invasive optical imaging technique called functional near-infrared spectroscopy. They were looking for the tell-tale signals of surprise that brains display — for example, when they suddenly hear male and female voices intermingled after hearing a long run of only female voices. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17852 - Posted: 02.26.2013

By Athena Andreadis Genes are subject to multiple layers of regulation. An early regulatory point is transcription. During this process, regulatory proteins bind to DNA regions (promoters and enhancers) that direct gene expression. These DNA/protein complexes attract the transcription apparatus, which docks next to the complex and proceeds linearly downstream, producing the heteronuclear (hn) RNA that is encoded by the gene linked to the promoter. The hnRNA is then spliced and either becomes structural/regulatory RNA or is translated into protein. Transcription factors are members of large clans that arose from ancestral genes that went through successive duplications and then diverged to fit specific niches. One such family of about fifty members is called FOX. Their DNA binding portion is shaped like a butterfly, which has given this particular motif the monikers of forkhead box or winged helix. The activities of the FOX proteins extend widely in time and region. One of the FOX family members is FOXP2, as notorious as Fox News – except for different reasons: FOXP2 has become entrenched in popular consciousness as “the language gene”. As is the case with all such folklore, there is some truth in this; but as is the case with everything in biology, reality is far more complex. FOXP2, the first gene found to “affect language” (more on this anon), was discovered in 2001 by several converging observations and techniques. The clincher was a large family (code name KE), some of whose members had severe articulation and grammatical deficits with no accompanying sensory or cognitive impairment. The inheritance is autosomal dominant: one copy of the mutated gene is sufficient to confer the trait. When the researchers definitively identified the FOXP2 gene, they found that the version of FOXP2 carried by the KE affected members has a single point mutation that alters an invariant residue in its forkhead domain, thereby influencing the protein’s binding to its DNA targets. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 17845 - Posted: 02.25.2013

Regina Nuzzo Say the word 'rutabaga', and you have just performed a complex dance with many body parts — lips, tongue, jaw and larynx — in a flash of time. Yet little is known about how the brain coordinates these vocal-tract movements to keep even the clumsiest of us from constantly tripping over our own tongues. A study of unprecedented detail now provides a glimpse into the neural codes that control the production of smooth speech. The results help to clarify how the brain uses muscles to organize sounds and hint at why tongue twisters are so tricky. The work is published today in Nature. Most neural information about the vocal tract has come from watching people with brain damage or from non-invasive imaging methods, neither of which provides detailed data in time or space. A team of US researchers has now collected brain-activity data on a scale of millimetres and milliseconds. The researchers recorded brain activity in three people with epilepsy using electrodes that had been implanted in the patients' cortices as part of routine presurgical electrophysiological sessions. They then watched to see what happened when the patients articulated a series of syllables. Sophisticated multi-dimensional statistical procedures enabled the researchers to sift through the huge amounts of data and uncover how basic neural building blocks — patterns of neurons firing in different places over time — combine to form the speech sounds of American English. The patterns for consonants were quite different from those for vowels, even though both classes of sounds “use the exact same parts of the vocal tract”, says author Edward Chang, a neuroscientist at the University of California, San Francisco. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17838 - Posted: 02.23.2013

by Sara Reardon Like the musicians in an orchestra, our lips, tongue and vocal cords coordinate with one another to pronounce sounds in speech. A map of the brain regions that conduct the process shows how each is carefully controlled – and how mistakes can slip into our speech. It's long been thought that the brain coordinates our speech by simultaneously controlling the movement of these "articulators". In the 1860s, Alexander Melville Bell proposed that speech could be broken down in this way and designed a writing system for deaf people based on the principle. But brain imaging had not had the resolution to see how neurons control these movements – until now. Using electrodes implanted in the brains of three people to treat their epilepsy, Edward Chang and his colleagues at the University of California mapped brain activity in each volunteer's motor cortex as they pronounced words in American English. The team had expected that each speech sound would be controlled by a unique collection of neurons, and so each would map to a different part of the brain. Instead, they found that the same groups of neurons were activated for all sounds. Each group controls muscles in the tongue, lips, jaw and larynx. The neurons – in the sensorimotor cortex – coordinated with one another to fire in different combinations. Each combination resulted in a very precise placing of the articulators to generate a given sound. Surprisingly, although each articulator can theoretically take on an almost limitless range of shapes, the neurons imposed strict limits on the range of possibilities. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17837 - Posted: 02.23.2013

by Michael Balter Despite recent progress toward sexual equality, it's still a man's world in many ways. But numerous studies show that when it comes to language, girls start off with better skills than boys. Now, scientists studying a gene linked to the evolution of vocalizations and language have for the first time found clear sex differences in its activity in both rodents and humans, with the gene making more of its protein in girls. But some researchers caution against drawing too many conclusions about the gene's role in human and animal communication from this study. Back in 2001, the world of language research was rocked by the discovery that a gene called FOXP2 appeared to be essential for the production of speech. Researchers cautioned that FOXP2 is probably only one of many genes involved in human communication, but later discoveries seemed to underscore its importance. For example, the human version of the protein produced by the gene differs by two amino acids from that of chimpanzees, and seems to have undergone natural selection since the human and chimp lineages split between 5 million and 7 million years ago. (Neandertals were found to have the same version as Homo sapiens, fueling speculation that our evolutionary cousins also had language). In the years since, FOXP2 has been implicated in the vocalizations of other animals, including mice, singing birds, and even bats. During this same time period, a number of studies have confirmed past research suggesting that young girls learn language faster and earlier than boys, producing their first words and sentences sooner and accumulating larger vocabularies faster. But the reasons behind such findings are highly controversial because it is difficult to separate the effects of nature versus nurture, and the differences gradually disappear as children get older. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 8: Hormones and Sex; Chapter 15: Language and Our Divided Brain
Link ID: 17830 - Posted: 02.20.2013

by Virginia Morell Every bottlenose dolphin has its own whistle, a high-pitched, warbly "eeee" that tells the other dolphins that a particular individual is present. Dolphins are excellent vocal mimics, too, able to copy even quirky computer-generated sounds. So, scientists have wondered if dolphins can copy each other's signature whistles—which would be very similar to people saying each other's names. Now, an analysis of whistles recorded from hundreds of wild bottlenose dolphins confirms that they can indeed "name" each other, and suggests why they do so—a discovery that may help researchers translate more of what these brainy marine mammals are squeaking, trilling, and clicking about. "It's a wonderful study, really solid," says Peter Tyack, a marine mammal biologist at the University of St. Andrews in the United Kingdom who was not involved in this project. "Having the ability to learn another individual's name is … not what most animals do. Monkeys have food calls and calls that identify predators, but these are inherited, not learned sounds." The new work "opens the door to understanding the importance of naming." Scientists discovered the dolphins' namelike whistles almost 50 years ago. Since then, researchers have shown that infant dolphins learn their individual whistles from their mothers. A 1986 paper by Tyack did show that a pair of captive male dolphins imitated each other's whistles, and in 2000, Vincent Janik, who is also at St. Andrews, succeeded in recording matching calls among 10 wild dolphins. "But without more animals, you couldn't draw a conclusion about what was going on," says Richard Connor, a cetacean biologist at the University of Massachusetts, Dartmouth. Why, after all, would the dolphins need to copy another dolphin's whistle? © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17829 - Posted: 02.20.2013

By Erin Wayman BOSTON — “Birdbrain” may not be much of an insult: Humans and songbirds share genetic changes affecting parts of the brain related to singing and speaking, new research shows. The finding may help scientists better understand how human language evolved, as well as unravel the causes of speech impairments. Neurobiologist Erich Jarvis of Duke University Medical Center in Durham, N.C., and colleagues discovered roughly 80 genes that turn on and off in similar ways in the brains of humans and songbirds such as zebra finches and parakeets. This gene activity, which occurs in brain regions involved in the ability to imitate sounds and to speak and sing, is not present in birds that can’t learn songs or mimic sounds. Jarvis described the work February 15 at the annual meeting of the American Association for the Advancement of Science. Songbirds are good models for language because the birds are born not knowing the songs they will sing as adults. Like human infants learning a specific language, the birds have to observe and imitate others to pick up the tunes they croon. The ancestors of humans and songbirds split some 300 million years ago, suggesting the two groups independently acquired a similar capacity for song. With the new results and other recent research, Jarvis said, “I feel more comfortable that we can link structures in songbird brains to analogous structures in human brains due to convergent evolution.” © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17818 - Posted: 02.18.2013

Philip Ball In Fiji, a star is a kalokalo. For the Pazeh people of Taiwan, it is mintol, and for the Melanau people of Borneo, bitén. All these words are thought to come from the same root. But what was it? An algorithm devised by researchers in Canada and California now offers an answer — in this case, bituqen. The program can reconstruct extinct ‘root’ languages from modern ones, a process that has previously been done painstakingly ‘by hand’ using rules of how linguistic sounds tend to change over time. Statistician Alexandre Bouchard-Côté of the University of British Columbia in Vancouver, Canada, and his co-workers say that by making the reconstruction of ancestral languages much simpler, their method should facilitate the testing of hypotheses about how languages evolve. They report their technique in the Proceedings of the National Academy of Sciences. Automated language reconstruction has been attempted before, but the authors say that earlier algorithms tended to be rather intractable and prescriptive. Bouchard-Côté and colleagues' method can factor in a large number of languages to improve the quality of reconstruction, and it uses rules that handle possible sound changes in flexible, probabilistic ways. The program requires researchers to input a list of words in each language, together with their meanings, and a phylogenetic ‘language tree’ showing how each language is related to the others. Linguists routinely construct such trees using techniques borrowed from evolutionary biology. © 2013 Nature Publishing Group
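
As described above, the program's inputs are cognate lists with meanings plus a phylogenetic language tree, and it evaluates candidate ancestral forms using probabilistic rules of sound change. The sketch below is a deliberately crude stand-in, not the authors' model: the toy tree, the raw edit-distance scoring and the made-up candidate forms are assumptions for illustration. It only shows the shape of the inputs and the idea of ranking candidate proto-words against the attested modern forms.

```python
# Toy sketch of ranking candidate proto-words against attested cognates.
# The tree, scoring function and candidate list are illustrative assumptions;
# the real algorithm applies learned, probabilistic sound-change rules along
# the language tree rather than raw string edit distance.

# Attested modern forms for "star", taken from the article text.
COGNATES = {"Fijian": "kalokalo", "Pazeh": "mintol", "Melanau": "bitén"}

# A hypothetical language tree of the kind the method takes as input
# (unused by this toy scorer, included only to show the input shape).
LANGUAGE_TREE = ("Proto", [("Fijian", []), ("Pazeh", []), ("Melanau", [])])

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance, used here as a crude proxy for the cost of
    deriving one form from another."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def score(candidate: str) -> int:
    """Total cost of deriving every attested modern form from the candidate
    proto-word (lower is better in this toy)."""
    return sum(edit_distance(candidate, word) for word in COGNATES.values())

if __name__ == "__main__":
    # "bituqen" is the reconstruction mentioned in the article; the other
    # candidates are made up for comparison.
    for candidate in ["bituqen", "bintol", "kalotol"]:
        print(f"{candidate:>8}: total derivation cost = {score(candidate)}")
```

A faithful implementation would replace the raw edit distance with probabilities of specific sound changes learned from the data and would propagate reconstructions through the internal nodes of the tree, which is why the quality of the reconstruction improves as more languages are factored in.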

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17786 - Posted: 02.12.2013

By Tia Ghose The identity of a mysterious patient who helped scientists pinpoint the brain region responsible for language has been discovered, researchers report. The finding, detailed in the January issue of the Journal of the History of the Neurosciences, identifies the patient as Louis Leborgne, a French craftsman who battled epilepsy his entire life. In 1840, a wordless patient was admitted to the Bicetre Hospital outside Paris for aphasia, or an inability to speak. He was essentially just kept there, slowly deteriorating. It wasn’t until 1861 that the man, who was known only as “Monsieur Leborgne” and who was nicknamed “Tan” for the only word he could say, came to physician Paul Broca’s ward at the hospital. Leborgne died shortly after the meeting, and Broca performed his autopsy, during which Broca found a lesion in a region of the brain tucked back and up behind the eyes. After doing a detailed examination, Broca concluded that Tan’s aphasia was caused by damage to this region and that the particular brain region controlled speech. That part of the brain was later renamed Broca’s area. At the time, scientists were debating whether different areas of the brain performed separate functions or whether it was an undifferentiated lump that did one task, like the liver, said Marjorie Lorch, a neurolinguist in London who was not involved in the study. © 1996-2013 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17756 - Posted: 02.05.2013

By Stephen Ornes New babies eat, sleep, cry, poop — and listen. But their eavesdropping begins before birth and may include language lessons, says a new study. Scientists believe such early learning may help babies quickly understand their parents. Christine Moon is a psychologist at Pacific Lutheran University in Tacoma, Wash. She led the new study, to be published in February. “It seems that there is some prenatal learning of speech sounds, but we do not yet know how much,” she told Science News. A prenatal event happens before birth. Scientists have known that about 10 weeks before birth, a fetus can hear sounds outside the womb. Those sounds include the volume and rhythm of a person’s voice. But Moon found evidence that fetuses may also be starting to learn language itself. Moon and her coworkers tested whether newborns could detect differences in vowel sounds. These sounds are the loudest in human speech. Her team reports that newborns responded one way when they heard sounds like those from their parents’ language. And the newborns responded another way when they heard sounds like those from a foreign language. This was true among U.S. and Swedish babies who listened to sounds similar to English vowels and Swedish vowels. These responses show that shortly after birth, babies can group together familiar speech sounds, Moon told Science News. © 2013 Copyright Science News for Kids

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17710 - Posted: 01.26.2013

Some animals are more eloquent than previously thought and have a communication structure similar to the vowel and consonant system of humans, according to new research. Studying the abbreviated call of the mongoose, researchers at the University of Zurich have found that the mongooses are the first animals known to communicate with sound units that are even smaller than syllables and yet still contain information about who is calling and why. Usually, animals can only produce a limited number of distinguishable sounds and calls due to their anatomy. While whale and bird songs are a little more complex than most animal sounds — in that they are repeatedly combined with new arrangements — they don’t pattern themselves after human syllables with their combination of vowels and consonants. Studying wild banded mongooses in Uganda, behavioural biologists discovered that the calls of the animals are structured and contain different information — a sound structure that has some similarities to the vowel and consonant system of human speech. Banded mongooses live in the savannah regions south of the Sahara. They are small predators that live in groups of around 20 and are related to the meerkat. The scientists recorded calls of the mongoose and made acoustic analyses of them. The calls, which last between 50 and 150 milliseconds, could be compared to one "syllable," the researchers found. © CBC 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17672 - Posted: 01.12.2013

By Bruce Bower Babies may start to learn their mother tongues even before seeing their mothers’ faces. Newborns react differently to native and foreign vowel sounds, suggesting that language learning begins in the womb, researchers say. Infants tested seven to 75 hours after birth treated spoken variants of a vowel sound in their home language as similar, evidence that newborns regard these sounds as members of a common category, say psychologist Christine Moon of Pacific Lutheran University in Tacoma, Wash., and her colleagues. Newborns deemed different versions of a foreign vowel sound to be dissimilar and unfamiliar, the scientists report in an upcoming Acta Paediatrica. “It seems that there is some prenatal learning of speech sounds, but we do not yet know how much,” Moon says. Fetuses can hear outside sounds by about 10 weeks before birth. Until now, evidence suggested that prenatal learning was restricted to the melody, rhythm and loudness of voices (SN: 12/5/09, p. 14). Earlier investigations established that 6-month-olds group native but not foreign vowel sounds into categories. Moon and colleagues propose that, in the last couple months of gestation, babies monitor at least some vowels — the loudest and most expressive speech sounds — uttered by their mothers. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17662 - Posted: 01.08.2013