Links for Keyword: Language

Links 41 - 60 of 509

By Joshua K. Hartshorne There are two striking features of language that any scientific theory of this quintessentially human behavior must account for. The first is that we do not all speak the same language. This would be a shocking observation were it not so commonplace. Communication systems in other animals tend to be universal, with any animal of the species able to communicate with any other. Likewise, many other fundamental human attributes show much less variation. Barring genetic or environmental mishap, we all have two eyes, one mouth, and four limbs. Around the world, we cry when we are sad, smile when we are happy, and laugh when something is funny, but the languages we use to describe this are different. The second striking feature of language is that when you consider the space of possible languages, most languages are clustered in a few tiny bands. That is, most languages are much, much more similar to one another than random variation would have predicted. Starting with pioneering work by Joseph Greenberg, scholars have cataloged over two thousand linguistic universals (facts true of all languages) and biases (facts true of most languages). For instance, in languages with fixed word order, the subject almost always comes before the object. If the verb describes a caused event, the entity that caused the event is the subject ("John broke the vase"), not the object (for example, "The vase broke John" meaning "John broke the vase"). In languages like English where the verb agrees with its subject or object, it typically agrees with the subject (compare "the child eats the carrots" with "the children eat the carrots") and not with its object (this would look like "the child eats the carrot" vs. "the child eat the carrots"), though in some languages, like Hungarian, the ending of the verb changes to match both the subject and object. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18664 - Posted: 09.18.2013

By Jason G. Goldman One of the key differences between humans and non-human animals, it is thought, is the ability to flexibly communicate our thoughts to others. The consensus has long been that animal communication, such as the food call of a chimpanzee or the alarm call of a lemur, is the result of an automatic reflex guided primarily by the inner physiological state of the animal. Chimpanzees, for example, can’t “lie” by producing a food call when there’s no food around and, it is thought, they can’t not emit a food call in an effort to hoard it all for themselves. By contrast, human communication via language is far more flexible and intentional. But recent research from across the animal kingdom has cast some doubt on the idea that animal communication always operates below the level of conscious control. Male chickens, for example, call more when females are around, and male Thomas langurs (a monkey native to Indonesia) continue shrieking their alarm calls until all females in their group have responded. Similarly, vervet monkeys are more likely to sound their alarm calls when there are other vervet monkeys around, and less likely when they’re alone. The same goes for meerkats. And possibly chimps, as well. Still, these sorts of “audience effects” can be explained by lower-level physiological factors. In yellow-bellied marmots, large ground squirrels native to the western US and southwestern Canada, the production of an alarm call correlates with glucocorticoid production, a physiological measurement of stress. And when researchers experimentally altered the synthesis of glucocorticoids in rhesus macaques, they found a change in the probability of alarm call production. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18605 - Posted: 09.04.2013

by Jacob Aron DOES your brain work like a dictionary? A mathematical analysis of the connections between definitions of English words has uncovered hidden structures that may resemble the way words and their meanings are represented in our heads. "We want to know how the mental lexicon is represented in the brain," says Stevan Harnad of the University of Quebec in Montreal, Canada. As every word in a dictionary is defined in terms of others, the knowledge needed to understand the entire lexicon is there, as long as you first know the meanings of an initial set of starter, or "grounding", words. Harnad's team reasoned that finding this minimal set of words and pinning down its structure might shed light on how human brains put language together. The team converted each of four different English dictionaries into a mathematical structure of linked nodes known as a graph. Each node in this graph represents a word, which is linked to the other words used to define it – so "banana" might be connected to "long", "bendy", "yellow" and "fruit". These words then link to others that define them. This enabled the team to remove all the words that don't define any others, leaving what they call a kernel. The kernel formed roughly 10 per cent of the full dictionary – though the exact percentages depended on the particular dictionary. In other words, 90 per cent of the dictionary can be defined using just the other 10 per cent. © Copyright Reed Business Information Ltd.
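
The pruning step the article describes lends itself to a simple fixed-point algorithm. Below is a minimal sketch, assuming the dictionary is given as a mapping from each headword to the set of words used in its definition; it illustrates the idea, not the team's actual code.

```python
def definition_kernel(dictionary):
    """Iteratively remove entries whose headword is not used in any
    remaining definition; the mutually defining words that survive
    form the 'kernel'."""
    kernel = {word: set(defn) for word, defn in dictionary.items()}
    while True:
        used = set().union(*kernel.values()) if kernel else set()
        removable = [w for w in kernel if w not in used]
        if not removable:
            return kernel
        for w in removable:
            del kernel[w]  # dropping an entry may strand more words

# Toy dictionary: "banana" defines nothing else, so it falls away;
# the circular core ("food" and "plant" define each other) remains.
toy = {
    "banana": {"long", "yellow", "fruit"},
    "fruit":  {"food", "plant"},
    "food":   {"plant"},
    "plant":  {"food"},
}
print(sorted(definition_kernel(toy)))  # ['food', 'plant']
```

On a real dictionary the same loop leaves the roughly 10 per cent of entries the article calls the kernel.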

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18587 - Posted: 08.31.2013

Beth Skwarecki Be careful what you say around a pregnant woman. As a fetus grows inside a mother's belly, it can hear sounds from the outside world—and can understand them well enough to retain memories of them after birth, according to new research. It may seem implausible that fetuses can listen to speech within the womb, but the sound-processing parts of their brain become active in the last trimester of pregnancy, and sound carries fairly well through the mother's abdomen. "If you put your hand over your mouth and speak, that's very similar to the situation the fetus is in," says cognitive neuroscientist Eino Partanen of the University of Helsinki. "You can hear the rhythm of speech, rhythm of music, and so on." A 1988 study suggested that newborns recognize the theme song from their mother's favorite soap opera. More recent studies have expanded on the idea of fetal learning, indicating that newborns have already familiarized themselves with the sounds of their parents’ native language; one showed that American newborns seem to perceive Swedish vowel sounds as unfamiliar, sucking on a high-tech pacifier to hear more of the new sounds. Swedish infants showed the same response to English vowels. But those studies were based on babies' behaviors, which can be tricky to test. Partanen and his team decided instead to outfit babies with EEG sensors to look for neural traces of memories from the womb. "Once we learn a sound, if it's repeated to us often enough, we form a memory of it, which is activated when we hear the sound again," he explains. This memory speeds up recognition of sounds in the learner's native language and can be detected as a pattern of brain waves, even in a sleeping baby. © 2012 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18570 - Posted: 08.27.2013

By Glen Tellis, Rickson C. Mesquita, and Arjun G. Yodh Terrence Murgallis, a 20-year-old undergraduate student in the Department of Speech-Language Pathology at Misericordia University, has stuttered all his life and approached us recently about conducting brain research on stuttering. His timing was perfect because our research group, in collaboration with a team led by Dr. Arjun Yodh in the Department of Physics and Astronomy at the University of Pennsylvania, had recently deployed two novel optical methods to compare blood flow and hemoglobin concentration differences in the brains of those who stutter with those who are fluent. These noninvasive methods employ diffusing near-infrared light and have been dubbed near-infrared spectroscopy (NIRS) for concentration dynamics, and diffuse correlation spectroscopy (DCS) for flow dynamics. The near-infrared light readily penetrates through intact skull to probe cortical regions of the brain. The low-power light has no known side-effects and has been successfully utilized for a variety of clinical studies in infants, children, and adults. DCS measures fluctuations of scattered light due to moving targets in the tissue (mostly red blood cells). The technique measures relative changes in cerebral blood flow. NIRS uses the relative transmission of different colors of light to detect hemoglobin concentration changes in the interrogated tissues. Though there are numerous diagnostic tools available to study brain activity, including positron emission tomography (PET), magnetic resonance imaging (MRI), and magnetoencephalography (MEG), these methods are often invasive and/or expensive to administer. In the particular case of electroencephalography (EEG), its low spatial resolution is a significant limitation for investigations of verbal fluency. © 2013 Scientific American
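
The NIRS step described here, turning relative light transmission at different colors into hemoglobin concentration changes, is conventionally computed with the modified Beer-Lambert law. The sketch below shows that arithmetic for two wavelengths; the extinction coefficients, probe separation, and differential pathlength factor are illustrative placeholders, not values from this study.

```python
import numpy as np

def mbll_hb_changes(delta_od, ext_coeffs, distance_cm, dpf):
    """Modified Beer-Lambert law: recover (dHbO2, dHbR) concentration
    changes from optical-density changes at two wavelengths.

    delta_od   : length-2 vector of attenuation changes
    ext_coeffs : 2x2 matrix; rows = wavelengths, cols = (HbO2, HbR)
    distance_cm: source-detector separation on the scalp
    dpf        : differential pathlength factor (scattering correction)
    """
    path = distance_cm * dpf  # effective photon path length
    return np.linalg.solve(ext_coeffs * path, delta_od)

# Placeholder extinction coefficients at two near-infrared wavelengths
E = np.array([[1.5, 3.8],    # ~760 nm: (HbO2, HbR) -- illustrative only
              [2.5, 1.8]])   # ~850 nm
d_od = np.array([0.012, 0.021])  # measured attenuation changes
print(mbll_hb_changes(d_od, E, distance_cm=3.0, dpf=6.0))
```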

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 5: The Sensorimotor System
Link ID: 18426 - Posted: 07.30.2013

By Michelle Warwicker BBC Nature Individual wild wolves can be recognised by just their howls with 100% accuracy, a study has shown. The team from Nottingham Trent University, UK, developed a computer program to analyse the vocal signatures of eastern grey wolves. Wolves roam huge home ranges, making it difficult for conservationists to track them visually. But the technology could provide a way for experts to monitor individual wolves by sound alone. "Wolves howl a lot in the wild," said PhD student Holly Root-Gutteridge, who led the research. "Now we can be sure... exactly which wolf it is that's howling." The team's findings are published in the journal Bioacoustics. Wolves use their distinctive calls to protect territory from rivals and to call to other pack members. "They enjoy it as a group activity," said Ms Root-Gutteridge. "When you get a chorus howl going they all join in." The team's computer program is unique because it analyses both volume (or amplitude) and pitch (or frequency) of wolf howls, whereas previously scientists had only examined the animals' pitch. "Think of [pitch] as the note the wolf is singing," explained Ms Root-Gutteridge. "What we've added now is the amplitude - or volume - which is basically how loud it's singing at different times." "It's a bit like language: If you put the stress in different places you form a different sound." BBC © 2013
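
The article does not reproduce the team's program, but the core feature extraction, pulling out both the pitch (fundamental frequency) contour and the amplitude envelope of a howl, might be sketched as follows with the open-source librosa library. The frequency search range and frame alignment here are illustrative assumptions, not the researchers' settings.

```python
import numpy as np
import librosa  # open-source audio analysis library

def howl_features(path, fmin=150.0, fmax=2000.0):
    """Per-frame pitch and amplitude contours of a howl recording.

    Returns an (n_frames, 2) matrix: column 0 is the fundamental
    frequency in Hz (NaN where unvoiced), column 1 is RMS amplitude.
    The 150-2000 Hz search band is an assumed range for wolf howls.
    """
    y, sr = librosa.load(path, sr=None)  # keep native sample rate
    f0, _, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    rms = librosa.feature.rms(y=y)[0]    # frame-wise loudness
    n = min(len(f0), len(rms))           # align frame counts
    return np.column_stack([f0[:n], rms[:n]])
```

Feature matrices like this one could then be fed to any off-the-shelf classifier to match howls to individual wolves.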

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18401 - Posted: 07.23.2013

By Rebecca Morelle Science reporter, BBC World Service Scientists have found further evidence that dolphins call each other by "name". Research has revealed that the marine mammals use a unique whistle to identify each other. A team from the University of St Andrews in Scotland found that when the animals hear their own call played back to them, they respond. The study is published in the Proceedings of the National Academy of Sciences. Dr Vincent Janik, from the university's Sea Mammal Research Unit, said: "(Dolphins) live in this three-dimensional environment, offshore without any kind of landmarks and they need to stay together as a group. "These animals live in an environment where they need a very efficient system to stay in touch." It had long been suspected that dolphins use distinctive whistles in much the same way that humans use names. Previous research found that these calls were used frequently, and dolphins in the same groups were able to learn and copy the unusual sounds. But this is the first time that the animals' response to being addressed by their "name" has been studied. To investigate, researchers recorded a group of wild bottlenose dolphins, capturing each animal's signature sound. BBC © 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18400 - Posted: 07.23.2013

By NICHOLAS BAKALAR There are many dying languages in the world. But at least one has recently been born, created by children living in a remote village in northern Australia. Carmel O’Shannessy, a linguist at the University of Michigan, has been studying the young people’s speech for more than a decade and has concluded that they speak neither a dialect nor the mixture of languages called a creole, but a new language with unique grammatical rules. The language, called Warlpiri rampaku, or Light Warlpiri, is spoken only by people under 35 in Lajamanu, an isolated village of about 700 people in Australia’s Northern Territory. In all, about 350 people speak the language as their native tongue. Dr. O’Shannessy has published several studies of Light Warlpiri, the most recent in the June issue of Language. “Many of the first speakers of this language are still alive,” said Mary Laughren, a research fellow in linguistics at the University of Queensland in Australia, who was not involved in the studies. One reason Dr. O’Shannessy’s research is so significant, she said, “is that she has been able to record and document a ‘new’ language in the very early period of its existence.” Everyone in Lajamanu also speaks “strong” Warlpiri, an aboriginal language unrelated to English and shared with about 4,000 people in several Australian villages. Many also speak Kriol, an English-based creole developed in the late 19th century and widely spoken in northern Australia among aboriginal people of many different native languages. Lajamanu parents are happy to have their children learn English in school for use in the wider world, but eager to preserve Warlpiri as the language of their culture. There is an elementary school in Lajamanu, but most children go to boarding school in Darwin for secondary education. The language there is English. But they continue to speak Light Warlpiri among themselves. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18376 - Posted: 07.15.2013

By TIM REQUARTH and MEEHAN CRIST Babies learn to speak months after they begin to understand language. As they are learning to talk, they babble, repeating the same syllable (“da-da-da”) or combining syllables into a string (“da-do-da-do”). But when babies babble, what are they actually doing? And why does it take them so long to begin speaking? Insights into these mysteries of human language acquisition are now coming from a surprising source: songbirds. Researchers who focus on infant language and those who specialize in birdsong have teamed up in a new study suggesting that learning the transitions between syllables — from “da” to “do” and “do” to “da” — is the crucial bottleneck between babbling and speaking. “We’ve discovered a previously unidentified component of vocal development,” said the lead author, Dina Lipkind, a psychology researcher at Hunter College in Manhattan. “What we’re showing is that babbling is not only to learn sounds, but also to learn transitions between sounds.” The results provide insight into language acquisition and may eventually help shed light on human speech disorders. “Every time you find out something fundamental about the way development works, you gain purchase on what happens when children are at risk for disorder,” said D. Kimbrough Oller, a language researcher at the University of Memphis, who was not involved in the study. At first, however, the scientists behind these findings weren’t studying human infants at all. They were studying birds. “When I got into this, I never believed we were going to learn about human speech,” said Ofer Tchernichovski, a birdsong researcher at Hunter and the senior author of the study, published online on May 29 in the journal Nature. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18332 - Posted: 07.01.2013

Did that prairie dog just call you fat? Quite possibly. On The Current Friday, biologist Con Slobodchikoff described how he learned to understand what prairie dogs are saying to one another and discovered how eloquent they can be. Slobodchikoff, a professor emeritus at Northern Arizona University, told Erica Johnson, guest host of The Current, that he started studying prairie dog language 30 years ago after scientists reported that other ground squirrels had different alarm calls to warn each other of flying predators such as hawks and eagles, versus predators on the ground, such as coyotes or badgers. Prairie dogs, he said, were ideal animals to study because they are social animals that live in small co-operative groups within a larger colony, or "town," and they never leave their colony or territory, where they have built an elaborate underground complex of tunnels and burrows. In order to figure out what the prairie dogs were saying, Slobodchikoff and his colleagues trapped them and painted them with fur dye to identify each one. Then they recorded the animals' calls in the presence of different predators. They found that the animals make distinctive calls that can distinguish between a wide variety of animals, including coyotes, domestic dogs and humans. The patterns are so distinct, Slobodchikoff said, that human visitors that he brings to a prairie dog colony can typically learn them within two hours. But then Slobodchikoff noticed that the animals made slightly different calls when different individuals of the same species went by. © CBC 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18300 - Posted: 06.22.2013

by Emily Underwood Something odd happened when Shu Zhang was giving a presentation to her classmates at the Columbia Business School in New York City. Zhang, a Chinese native, spoke fluent English, yet in the middle of her talk, she glanced over at her Chinese professor and suddenly blurted out a word in Mandarin. "I meant to say a transition word like 'however,' but used the Chinese version instead," she says. "It really shocked me." Shortly afterward, Zhang teamed up with Columbia social psychologist Michael Morris and colleagues to figure out what had happened. In a new study, they show that reminders of one's homeland can hinder the ability to speak a new language. The findings could help explain why cultural immersion is the most effective way to learn a foreign tongue and why immigrants who settle within an ethnic enclave acculturate more slowly than those who surround themselves with friends from their new country. Previous studies have shown that cultural icons such as landmarks and celebrities act like "magnets of meaning," instantly activating a web of cultural associations in the mind and influencing our judgments and behavior, Morris says. In an earlier study, for example, he asked Chinese Americans to explain what was happening in a photograph of several fish, in which one fish swam slightly ahead of the others. Subjects first shown Chinese symbols, such as the Great Wall or a dragon, interpreted the fish as being chased. But individuals primed with American images of Marilyn Monroe or Superman, in contrast, tended to interpret the outlying fish as leading the others. This internally driven motivation is more typical of individualistic American values, some social psychologists say, whereas the more externally driven explanation of being pursued is more typical of Chinese culture. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18283 - Posted: 06.18.2013

by Jennifer Viegas It goes a little something like this: A young male zebra finch, whose father taught him a song, shared that song with a brother, with the two youngsters then creating new tunes based on dad’s signature sound. The musical bird family, described in the latest Biology Letters, strengthens evidence that imitation between siblings and similar-aged youngsters facilitates vocal learning. The theory could help to explain why families with multiple same-sex siblings, such as the Bee Gees and the Jackson 5, often form such successful musical groups. Co-author Sébastien Derégnaucourt told Discovery News that, among humans, “infants have a visual preference for peers of the same age, which may facilitate imitation.” He added that it’s also “known that children can have an impact on each other’s language acquisition, such as in the case of the emergence of creole languages, whether spoken or signed, among children exposed to pidgin (a grammatically simplified form of a language).” Pidgin in this case is more like pigeon, since the study focused on birds. Derégnaucourt, an associate professor at University Paris West, collaborated with Manfred Gahr of the Max Planck Institute for Ornithology. The two researchers studied how the young male zebra finch from a bird colony in Germany learned from his avian dad. © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 18260 - Posted: 06.12.2013

by Tia Ghose, LiveScience Ape and human infants at comparable stages of development use similar gestures, such as pointing or lifting their arms to be picked up, new research suggests. Chimpanzee, bonobo and human babies rely mainly on gestures at about a year old, and gradually develop symbolic language (words, for human babies; and signs, for apes) as they get older. The findings suggest that “gesture plays an important role in the evolution of language, because it preceded language use across the species," said study co-author Kristen Gillespie-Lynch, a developmental psychologist at the College of Staten Island in New York. The idea that language arose from gesture and a primitive sign language has a long history. French philosopher Étienne Bonnot de Condillac proposed the idea in 1746, and other scientists have noted that walking on two legs, which frees up the hands for gesturing, occurred earlier in human evolution than changes to the vocal tract that enabled speaking. But although apes in captivity can learn some language by learning from humans, in the wild, they don't gesture nearly as much as human infants, making it difficult to tease out commonalities in language development that have biological versus environmental roots. © 2013 Discovery Communications, LLC

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18247 - Posted: 06.08.2013

Karen Ravn Babies learn to babble before they learn to talk, at first simply repeating individual syllables (as in ba-ba-ba), and later stringing various syllables together (as in ba-da-goo). Songbirds exhibit similar patterns during song-learning, and the capacity for this sort of syllable sequencing is widely believed to be innate and to emerge full-blown — a theory that is challenged by a paper published on Nature's website today. A study of three species — zebra finches, Bengalese finches and humans — reports that none of the trio has it that easy. Their young all have to learn how to string syllables together slowly, pair by pair. “We discovered a previously unsuspected stage in human vocal development,” says first author Dina Lipkind, a psychologist now at Hunter College in New York. The researchers began by training young zebra finches (Taeniopygia guttata) to sing a song in which three syllables represented by the letters A, B and C came in the order ABC–ABC. They then trained the birds to sing a second song in which the same syllables were strung together in a different order, ACB–ACB. Eight out of seventeen birds managed to learn the second song, but they did not do so in one fell swoop. They learned it as a series of syllable pairs, first, say, learning to go from A to C, then from C to B and finally from B to A. And they didn’t do it overnight, as the innate-sequencing theory predicts. Instead, on average, they learned the first pair in about ten days, the second in four days and the third in two days. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18208 - Posted: 05.30.2013

By Bruce Bower Chaser isn’t just a 9-year-old border collie with her breed’s boundless energy, intense focus and love of herding virtually anything. She’s a grammar hound. In experiments directed by her owner, psychologist John Pilley of Wofford College in Spartanburg, S.C., Chaser demonstrated her grasp of the basic elements of grammar by responding correctly to commands such as “to ball take Frisbee” and its reverse, “to Frisbee take ball.” The dog had previous, extensive training to recognize classes of words including nouns, verbs and prepositions. “Chaser intuitively discovered how to comprehend sentences based on lots of background learning about different types of words,” Pilley says. He reports the results May 13 in Learning and Motivation. Throughout the first three years of Chaser’s life, Pilley and a colleague trained the dog to recognize and fetch more than 1,000 objects by name. Using praise and play as reinforcements, the researchers also taught Chaser the meaning of different types of words, such as verbs and prepositions. As a result, Chaser learned that phrases such as “to Frisbee” meant that she should take whatever was in her mouth to the named object. Exactly how the dog gained her command of grammar is unclear, however. Pilley suspects that Chaser first mentally linked each of two nouns she heard in a sentence to objects in her memory. Then the canine held that information in mind while deciding which of two objects to bring to which of two other objects. Pilley’s work follows controversial studies of grammar understanding in dolphins and a pygmy chimp. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18184 - Posted: 05.22.2013

by Elizabeth Norton If you've ever cringed when your parents said "groovy," you'll know that spoken language can have a brief shelf life. But frequently used words can persist for generations, even millennia, and similar sounds and meanings often turn up in very different languages. The existence of these shared words, or cognates, has led some linguists to suggest that seemingly unrelated language families can be traced back to a common ancestor. Now, a new statistical approach suggests that peoples from Alaska to Europe may share a linguistic forebear dating as far back as the end of the Ice Age, about 15,000 years ago. "Historical linguists study language evolution using cognates the way biologists use genes," explains Mark Pagel, an evolutionary theorist at the University of Reading in the United Kingdom. For example, although about 50% of French and English words derive from a common ancestor (like "mère" and "mother," for example), with English and German the rate is closer to 70%—indicating that while all three languages are related, English and German have a more recent common ancestor. In the same vein, while humans, chimpanzees, and gorillas have common genes, the fact that humans share almost 99% of their DNA with chimps suggests that these two primate lineages split apart more recently. Because words don't have DNA, researchers use cognates found in different languages today to reconstruct the ancestral "protowords." Historical linguists have observed that over time, the sounds of words tend to change in regular patterns. For example, the p sound frequently changes to f, and the t sound to th—suggesting that the Latin word pater is, well, the father of the English word father. Linguists use these known rules to work backward in time, making a best guess at how the protoword sounded. They also track the rate at which words change. Using these phylogenetic principles, some researchers have dated many common words as far back as 9000 years ago. The ancestral language known as Proto-Indo-European, for example, gave rise to languages including Hindi, Russian, French, English, and Gaelic. © 2010 American Association for the Advancement of Science.
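
To see how such regular correspondences can be run in reverse, here is a toy sketch applying the two shifts the article mentions (f back to p, th back to t) to recover a protoword. Real comparative reconstruction conditions each rule on its phonetic environment and weighs many cognates at once; this is only a cartoon of the idea.

```python
# Regular sound correspondences from the article, written as
# modern-sound -> older-sound substitutions (a gross simplification:
# real rules depend on the surrounding phonetic context).
SOUND_SHIFTS = [("th", "t"), ("f", "p")]

def reconstruct(word, shifts=SOUND_SHIFTS):
    """Run each correspondence backward to guess an ancestral form."""
    for modern, older in shifts:
        word = word.replace(modern, older)
    return word

print(reconstruct("father"))  # -> 'pater'
```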

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18130 - Posted: 05.07.2013

By Christie Wilcox What does your voice say about you? Our voices communicate information far beyond what we say with our words. Like most animals, the sounds we produce have the potential to convey how healthy we are, what mood we’re in, even our general size. Some of these traits are important cues for potential mates, so much so that the sound of your voice can actually affect how good looking you appear to others. Which, really, brings up one darn good question: what makes a voice sound sexy? To find out, a team spearheaded by University College London researcher Yi Xu created synthetic male and female voices and altered their pitch, vocal quality and formant spacing (an acoustics term related to the frequencies of sound), the last of which is related to body size. They also adjusted the voices to be normal (relaxed), breathy, or pressed (tense). Through several listening experiments, they asked participants of the opposite gender to say which voice was the most attractive and which sounded the friendliest or happiest. The happiest-sounding voices were those with higher pitch, whether male or female, while the angriest were those with dense formants, indicating large body size. As for attractiveness, the men preferred a female voice that is high-pitched, breathy and had wide formant spacing, which indicates a small body size. The women, on the other hand, preferred a male voice with low pitch and dense formant spacing, indicative of larger size. But what really surprised the scientists is that women also preferred their male voices breathy. “The breathiness in the male voice attractiveness rating is intriguing,” explain the authors, “as it could be a way of neutralizing the aggressiveness associated with a large body size.”
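
A quick aside on why formant spacing tracks body size: modeling the vocal tract as a uniform tube closed at one end puts formants at odd multiples of c/4L, so neighboring formants sit roughly c/2L apart, and a longer tract (hence a larger body) means denser spacing. The tract lengths below are assumed, ballpark figures for illustration only.

```python
SPEED_OF_SOUND_CM_S = 35000.0  # ~350 m/s in warm, humid air

def formant_spacing_hz(tract_length_cm):
    """Spacing between neighboring formants of a uniform tube closed
    at one end: delta-F = c / (2 * L)."""
    return SPEED_OF_SOUND_CM_S / (2.0 * tract_length_cm)

print(formant_spacing_hz(14.5))  # shorter (smaller-bodied) tract -> ~1207 Hz
print(formant_spacing_hz(17.5))  # longer (larger-bodied) tract  -> ~1000 Hz
```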

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 18091 - Posted: 04.30.2013

Posted by Christy Ullrich Elephants may use a variety of subtle movements and gestures to communicate with one another, according to researchers who have studied the big mammals in the wild for decades. To the casual human observer, a curl of the trunk, a step backward, or a fold of the ear may not have meaning. But to an elephant—and scientists like Joyce Poole—these are signals that convey vital information to individual elephants and the overall herd. Biologist and conservationist Joyce Poole and her husband, Petter Granli, both of whom direct ElephantVoices, a charity they founded to research and advocate for conservation of elephants in various sanctuaries in Africa, have developed an online database decoding hundreds of distinct elephant signals and gestures. The postures and movements underscore the sophistication of elephant communication, they say. Poole and Granli have also deciphered the meaning of acoustic communication in elephants, interpreting the different rumbling, roaring, screaming, trumpeting, and other idiosyncratic sounds that elephants make in concert with postures such as the positioning and flapping of their ears. Poole has studied elephants in Africa for more than 37 years, but only began developing the online gestures database in the past decade. Some of her research and conservation work has been funded by the National Geographic Society. “I noticed that when I would take out guests visiting Amboseli [National Park in Kenya] and was narrating the elephants’ behavior, I got to the point where 90 percent of the time, I could predict what the elephant was about to do,” Poole said in an interview. “If they stood a certain way, they were afraid and were about to retreat, or [in another way] they were angry and were about to move toward and threaten another.” © 1996-2012 National Geographic Society.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18078 - Posted: 04.27.2013

by Tanya Lewis The lip-smacking vocalizations gelada monkeys make are surprisingly similar to human speech, a new study finds. Many nonhuman primates demonstrate lip-smacking behavior, but geladas are the only ones known to make undulating sounds, known as "wobbles," at the same time. (The wobbling sounds a little like a human hum would sound if the volume were being turned on and off rapidly.) The findings show that lip-smacking could have been an important step in the evolution of human speech, researchers say. "Our finding provides support for the lip-smacking origins of speech because it shows that this evolutionary pathway is at least plausible," Thore Bergman of the University of Michigan in Ann Arbor and author of the study published today (April 8) in the journal Current Biology, said in a statement. "It demonstrates that nonhuman primates can vocalize while lip-smacking to produce speechlike sounds." Lip-smacking -- rapidly opening and closing the mouth and lips -- shares some of the features of human speech, such as rapid fluctuations in pitch and volume. Bergman first noticed the similarity while studying geladas in the remote mountains of Ethiopia. He would often hear vocalizations that sounded like human voices, but the vocalizations were actually coming from the geladas, he said. He had never come across other primates who made these sounds. But then he read a study on macaques from 2012 revealing how facial movements during lip-smacking were very speech-like, hinting that lip-smacking might be an initial step toward human speech. © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18014 - Posted: 04.10.2013

By Bruce Bower Babies take a critical step toward learning to speak before they can say a word or even babble. By 3 months of age, infants flexibly use three types of sounds — squeals, growls and vowel-like utterances — to express a range of emotions, from positive to neutral to negative, researchers say. Attaching sounds freely to different emotions represents a basic building block of spoken language, say psycholinguist D. Kimbrough Oller of the University of Memphis in Tennessee and his colleagues. Any word or phrase can signal any mental state, depending on context and pronunciation. Infants’ flexible manipulation of sounds to signal how they feel lays the groundwork for word learning, the scientists conclude April 1 in the Proceedings of the National Academy of Sciences. Language evolution took off once this ability emerged in human babies, Oller proposes. Ape and monkey researchers have mainly studied vocalizations that have one meaning, such as distress calls. “At this point, the conservative conclusion is that the human infant at 3 months is already vocally freer than has been demonstrated for any other primate at any age,” Oller says. Oller’s group videotaped infants playing and interacting with their parents in a lab room equipped with toys and furniture. Acoustic analyses identified nearly 7,000 utterances made by infants up to 1 year of age that qualified as laughs, cries, squeals, growls or vowel-like sounds. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17975 - Posted: 04.02.2013