Links for Keyword: Language

Bill McQuay

The natural world is abuzz with the sound of animals communicating — crickets, birds, even grunting fish. But scientists learning to decode these sounds say the secret signals of African elephants — their deepest rumblings — are among the most intriguing calls any animal makes. Katy Payne, the same biologist who recognized song in the calls of humpback whales in the 1960s, went on to help create the Elephant Listening Project in the Central African Republic in the 1980s. At the time, Payne's team was living in shacks in a dense jungle inhabited by hundreds of rare forest elephants. That's where one of us — Bill McQuay — first encountered the roar of an elephant in 2002, while reporting a story for an NPR-National Geographic collaboration called Radio Expeditions.

Here's how Bill remembers that day in Africa: I was walking through this rainforest to an observation platform built up in a tree — out of the reach of the elephants. I climbed up onto the platform, a somewhat treacherous exercise with all my recording gear. Then I set up my recording equipment, put on the headphones, and started listening. That first elephant roar sounded close. But I was so focused on the settings on my recorder that I didn't bother to look around. The second roar sounded a lot closer. I thought, this is so cool! What I didn't realize was, there was this huge bull elephant standing right underneath me — pointing his trunk up at me, just a few feet away. Apparently he was making a "dominance display." © 2015 NPR

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 21319 - Posted: 08.20.2015

Alexander Christie-Miller

You could say they sent the first tweets. An ancient whistling language that sounds a little like birdsong has been found to use both sides of the brain – challenging the idea that the left side is all-important for communicating. The whistling language is still used by around 10,000 people in the mountains of north-east Turkey, and can carry messages as far as 5 kilometres. Researchers have now shown that this language involves the brain’s right hemisphere, which was already known to be important for understanding music. Until recently, it was thought that the task of interpreting language fell largely to the brain’s left hemisphere. Onur Güntürkün of Ruhr University Bochum in Germany wondered whether the musical melodies and frequencies of whistled Turkish might require people to use both sides of their brain to communicate. His team tested 31 fluent whistlers by playing slightly different spoken or whistled syllables into their left and right ears at the same time, and asking them to say what they heard. The left hemisphere depends slightly more on sounds received by the right ear, and vice versa for the right hemisphere. By comparing the number of times the whistlers reported the syllables that had been played into either their right or left ear, the researchers could tell how often each side of the brain was dominant. As expected, when the syllables were spoken, the right ear and left hemisphere were dominant 75 per cent of the time. But when syllables were whistled, the split between right and left dominance was about even. © Copyright Reed Business Information Ltd.
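
The dominance figures in that last step are simple arithmetic: the share of trials on which the syllable a listener reported was the one played to a given ear. A minimal sketch of that calculation, with invented trial counts rather than the study's data:

```python
# Illustrative sketch of the dichotic-listening arithmetic described above.
# Trial counts are invented for demonstration; they are not the study's data.

def right_ear_dominance(right_ear_reports: int, left_ear_reports: int) -> float:
    """Share of trials on which the right ear (left hemisphere) prevailed."""
    return right_ear_reports / (right_ear_reports + left_ear_reports)

# Spoken syllables: about 75 per cent right-ear (left-hemisphere) dominance.
print(f"spoken:   {right_ear_dominance(75, 25):.0%}")
# Whistled syllables: a roughly even split between the hemispheres.
print(f"whistled: {right_ear_dominance(51, 49):.0%}")
```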

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 21309 - Posted: 08.18.2015

By Perri Klass

A little more than a year ago, the American Academy of Pediatrics issued a policy statement saying that all pediatric primary care should include literacy promotion, starting at birth. That means pediatricians taking care of infants and toddlers should routinely be advising parents about how important it is to read to even very young children. The policy statement, which I wrote with Dr. Pamela C. High, included a review of the extensive research on the links between growing up with books and reading aloud, and later language development and school success. But while we know that reading to a young child is associated with good outcomes, there is only limited understanding of what the mechanism might be. Two new studies examine the unexpectedly complex interactions that happen when you put a small child on your lap and open a picture book. This month, the journal Pediatrics published a study that used functional magnetic resonance imaging to study brain activity in 3- to 5-year-old children as they listened to age-appropriate stories. The researchers found differences in brain activation according to how much the children had been read to at home. Children whose parents reported more reading at home and more books in the home showed significantly greater activation of brain areas in a region of the left hemisphere called the parietal-temporal-occipital association cortex. This brain area is “a watershed region, all about multisensory integration, integrating sound and then visual stimulation,” said the lead author, Dr. John S. Hutton, a clinical research fellow at Cincinnati Children’s Hospital Medical Center. This region of the brain is known to be very active when older children read to themselves, but Dr. Hutton notes that it also lights up when younger children are hearing stories. What was especially novel was that children who were exposed to more books and home reading showed significantly more activity in the areas of the brain that process visual association, even though the child was in the scanner just listening to a story and could not see any pictures. © 2015 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 21308 - Posted: 08.18.2015

Geoff Brumfiel

Learning to make sounds by listening to others is a skill that helps make us human. But research now suggests a species of monkey may have evolved similar abilities. Marmosets have the capacity to learn calls from their parents, according to research published Thursday in the journal Science. The results mean that studying marmosets might provide insights into developmental disorders found in humans. They also suggest that vocal learning may be more widespread than many researchers thought. Many animals can link sounds with meaning. Dogs respond to simple calls; chimpanzees can even communicate with people using sign language. But the ability to hear a sound and mimic it is possessed by only a small number of species: primarily songbirds and humans. "We didn't think that mammals and primates in particular — besides us — had any type of vocal learning," says Asif Ghazanfar, a neuroscientist at Princeton University who led the new study. Enter the small, adorable common marmoset. These fuzzy South American primates look more like squirrels than monkeys. "They're cute, and they smell. They wash themselves in their own urine," Ghazanfar says. "I'm not sure why they do that." But once you get over the stink, these little guys are interesting. Marmoset mommies always give birth to twins and they need help rearing them. So, unlike many mammal species, fathers lend a hand, along with siblings and other community members. Ghazanfar thinks all that child care is what gives marmosets another special trait: They're super talkative. "They're chattering nonstop," he says. "That is also very different from our close relatives the chimpanzees." © 2015 NPR

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 21300 - Posted: 08.15.2015

by Richard Farrell

Bonobos have a capacity to do something human infants have been shown to do: use a single sound whose meaning varies based on context, a form of "flexible" communication previously thought specific to humans. The finding was made by researchers from the University of Birmingham and the University of Neuchatel, in a paper just published in the journal PeerJ. The newly identified bonobo call is a short, high-pitched "peep," made with a closed mouth. The scientists studied the call's acoustic structure and observed that it did not change between what they termed "neutral" and "positive" circumstances (for example, between activities such as feeding or resting), suggesting that other bonobos receiving the call would need to weigh contextual information to discern its meaning. Human babies do something similarly flexible, using sounds called protophones — different from highly specific sounds such as crying or laughter — that are made independent of how they are feeling emotionally. The appearance of this capability in the first year of life is "a critical step in the development of vocal language and may have been a critical step in the evolution of human language," an earlier study on infant vocalization noted. The finding challenges the idea that calls from primates such as bonobos — which, along with chimpanzees, are our closest relatives — are strictly matched with specific contexts and emotions, whether those sounds are territorial barks or shrieks of alarm. © 2015 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 21265 - Posted: 08.05.2015

By Michael Balter

Have you ever wondered why you say “The boy is playing Frisbee with his dog” instead of “The boy dog his is Frisbee playing with”? You may be trying to give your brain a break, according to a new study. An analysis of 37 widely varying tongues finds that, despite great apparent differences among them, they share what might be a universal feature of human language: All of them have evolved to make communication as efficient as possible. Earth is a veritable Tower of Babel: Up to 7000 languages are still spoken across the globe, belonging to roughly 150 language families. And they vary widely in the way they put sentences together. For example, the three major building blocks of a sentence, subject (S), verb (V), and object (O), can come in three different orders. English and French are SVO languages, whereas German and Japanese are SOV languages; a much smaller number, such as Arabic and Hebrew, use the VSO order. (No well-documented languages start sentences or clauses with the object, although some linguists have jokingly suggested that Klingon might do so.) Yet despite these different ways of structuring sentences, previous studies of a limited number of languages have shown that they tend to limit the distance between words that depend on each other for their meaning. Such “dependency” is key if sentences are to make sense. For example, in the sentence “Jane threw out the trash,” the word “Jane” is dependent on “threw”—it modifies the verb by telling us who was doing the throwing, just as we need “trash” to know what was thrown, and “out” to know where the trash went. Although “threw” and “trash” are three words away from each other, we can still understand the sentence easily. © 2015 American Association for the Advancement of Science.
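
The quantity at issue, total dependency distance, is easy to compute once each word is paired with its head. A minimal sketch using the article's example sentence; the parse is hand-assigned for illustration and is not the authors' data or code:

```python
# Illustrative sketch: total dependency length, the quantity the study above
# reports languages tend to minimize. heads[i] is the index of word i's head
# (the word it depends on); the root verb points to itself.
words = ["Jane", "threw", "out", "the", "trash"]
heads = [1, 1, 1, 4, 1]  # Jane->threw, out->threw, the->trash, trash->threw

def total_dependency_length(heads: list[int]) -> int:
    """Sum of linear distances between each word and its head (root excluded)."""
    return sum(abs(i - h) for i, h in enumerate(heads) if i != h)

print(total_dependency_length(heads))  # 1 + 1 + 1 + 3 = 6
```

Scrambled orders such as "The boy dog his is Frisbee playing with" drive this sum up, which is the intuition behind the finding that languages keep mutually dependent words close together.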

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 21263 - Posted: 08.04.2015

By Ariana Eunjung Cha

Think you have your hands full making sure your baby is fed and clean and gets enough sleep? Here's another thing for the list: developing your child's social skills by the way you talk. People used to think that social skills were something kids were born with, not taught. But a growing body of research shows that the environment a child grows up in as an infant and toddler can have a major impact on how they interact with others as they get older. And it turns out that a key factor may be the type of language they hear around them, even at an age when all they can do is babble. Psychologists at the University of York observed 40 mothers and their babies at 10, 12, 16 and 20 months and logged the kind of language mothers used during play. They were especially interested in "mind-related comments," which include inferences about what someone is thinking when a behavior or action happens. Elizabeth Kirk, a lecturer at the university who is the lead author of the study, published in the British Journal of Developmental Psychology on Monday, gave this as an example: If an infant has difficulty opening a door on a toy, the parent might comment that the child appears "frustrated." Then researchers revisited the children when they were 5 or 6 years of age and assessed their socio-cognitive ability. The test involved reading a story and having the children answer comprehension questions that show whether they understood the social concept — persuasion, joke, misunderstanding, white lies, lies, and so forth — that was represented.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 21239 - Posted: 07.30.2015

by Sarah Zielinski

It may not be polite to eavesdrop, but sometimes, listening in on others’ conversations can provide valuable information. And in this way, humans are like most other species in the animal world, where eavesdropping is a common way of gathering information about potential dangers. Because alarm calls can vary from species to species, scientists have assumed that eavesdropping on these calls of “danger!” requires some kind of learning. Evidence of that learning has been scant, though. The only study to look at this topic tested five golden-mantled ground squirrels and found that the animals may have learned to recognize previously unknown alarm calls. But the experiment couldn’t rule out other explanations for the squirrels’ behavior, such as that the animals had simply become more wary in general. So Robert Magrath and colleagues at Australian National University in Canberra turned to small Australian birds called superb fairy-wrens. In the wild, these birds will flee to safety when they hear unfamiliar sounds that resemble their own alarm calls, but not when they hear alarm calls that sound different from their own. There’s an exception, though: They’ll take to cover in response to the alarm calls of other species that are common where they live. That suggests the birds learn to recognize those calls. In the lab, the team played the alarm call of a thornbill or a synthetic alarm call to 10 fairy-wrens. The birds didn’t respond to the noise. Then the birds went through two days of training in which the alarm call was played as a mock predator glided overhead. Another group of birds heard the calls, but there was no pretend predator. © Society for Science & the Public 2000 - 2015

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 21180 - Posted: 07.18.2015

By Lauran Neergaard

New research suggests it may be possible to predict which preschoolers will struggle to read — and it has to do with how the brain deciphers speech when it's noisy. Scientists are looking for ways to tell, as young as possible, when children are at risk for later learning difficulties so they can get early interventions. There are some simple pre-reading assessments for preschoolers. But Northwestern University researchers went further and analyzed brain waves of children as young as three. How well youngsters' brains recognize specific sounds — consonants — amid background noise can help identify who is more likely to have trouble with reading development, the team reported Tuesday in the journal PLOS Biology. If the approach pans out, it may provide "a biological looking glass," said study senior author Nina Kraus, director of Northwestern's Auditory Neuroscience Laboratory. "If you know you have a three-year-old at risk, you can as soon as possible begin to enrich their life in sound so that you don't lose those crucial early developmental years." Connecting sound to meaning is a key foundation for reading. For example, preschoolers who can match sounds to letters earlier go on to read more easily. Auditory processing is part of that pre-reading development: If your brain is slower to distinguish a "D" from a "B" sound, for example, then recognizing words and piecing together sentences could be affected, too. What does noise have to do with it? It stresses the system, as the brain has to tune out competing sounds to selectively focus, in just fractions of milliseconds. And consonants are more vulnerable to noise than vowels, which tend to be louder and longer, Kraus explained. ©2015 CBC/Radio-Canada

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 21173 - Posted: 07.15.2015

Henry Nicholls

Andy Russell had entered the lecture hall late and stood at the back, listening to the close of a talk by Marta Manser, an evolutionary biologist at the University of Zurich who works on animal communication. Manser was explaining some basic concepts in linguistics to her audience: how humans use meaningless sounds, or “phonemes”, to generate a vast dictionary of meaningful words. In English, for instance, just 40 different phonemes can be resampled into a rich vocabulary of some 200,000 words. But, explained Manser, this linguistic trick of reorganising the meaningless to create new meaning had not been demonstrated in any non-human animal. This was back in 2012. Russell’s “Holy shit, man” excitement was because he was pretty sure he had evidence for phoneme structuring in the chestnut-crowned babbler, a bird he’s been studying in the semi-arid deserts of south-east Australia for almost a decade. After the talk, Russell (a behavioural ecologist at the University of Exeter) travelled to Zurich to present his evidence to Manser’s colleague Simon Townsend, whose research explores the links between animal communication systems and human language. The fruits of their collaboration are published today in PLoS Biology. One of Russell’s students, Jodie Crane, had been recording the calls of the chestnut-crowned babbler for her PhD. The PLoS Biology paper focuses on two of these calls, which appear to be made up of two identical elements, just arranged in a different way. © 2015 Guardian News and Media Limited

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 21110 - Posted: 06.30.2015

Emma Bowman

In a small, sparse makeshift lab, Melissa Malzkuhn practices her range of motion in a black, full-body unitard dotted with light-reflecting nodes. She's strapped on a motion capture, or mocap, suit. Infrared cameras that line the room will capture her movement and translate it into a 3-D character, or avatar, on a computer. But she's not making a Disney animated film. Three-dimensional motion capture has developed quickly in the last few years, most notably as a Hollywood production tool for computer animation in films like Planet of the Apes and Avatar. Behind the scenes though, leaders in the deaf community are taking on the technology to create and improve bilingual learning tools in American Sign Language. Malzkuhn has suited up to record a simple nursery rhyme. Being deaf herself, she spoke with NPR through an interpreter. "I know in English there's just a wealth of nursery rhymes available, but we really don't see as much in ASL," she says. "So we're gonna be doing some original work here in developing nursery rhymes." That's because sound-based rhymes don't cross over well into the visual language of ASL. Malzkuhn heads the Motion Light Lab, or ML2. It's the newest hub of the National Science Foundation Science of Learning Center, Visual Language and Visual Learning (VL2) at Gallaudet University, the premier school for deaf and hard of hearing students. © 2015 NPR

Related chapters from BP7e: Chapter 1: Biological Psychology: Scope and Outlook; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 1: An Introduction to Brain and Behavior; Chapter 15: Language and Our Divided Brain
Link ID: 21107 - Posted: 06.29.2015

By Sarah C. P. Williams

Parrots are masters of mimicry, able to repeat hundreds of unique sounds, including human phrases, with uncanny accuracy. Now, scientists say they have pinpointed the neurons that turn these birds into copycats. The discovery could not only illuminate the origins of bird-speak, but might shed light on how new areas of the brain arise during evolution. Parrots, songbirds, and hummingbirds—which can all chirp different dialects, pick up new songs, and mimic sound—all have a “song nucleus” in their brains: a group of interconnected neurons that synchronizes singing and learning. But the exact boundaries of that region are fuzzy; some researchers define it as larger or smaller than others do, depending on what criteria they use to outline the area. And differences between the song nuclei of parrots—which can better imitate complex sounds—and those of other birds are hard to pinpoint. Neurobiologist Erich Jarvis of Duke University in Durham, North Carolina, was studying the activation of PVALB—a gene that had been previously found in songbirds—within the brains of parrots when he noticed something strange. Stained sections of deceased parrot brains revealed that the gene was turned on at different levels within two distinct areas of what he thought was the song nucleus. Sometimes, the gene was activated in a spherical central core of the nucleus. But other times, it was only active in an outer shell of cells surrounding that core. When he and collaborators looked more closely, they found that the inner core and the outer shell—like the chocolate and surrounding candy shell of an M&M—varied in many more ways as well.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 21091 - Posted: 06.25.2015

by Meghan Rosen

When we brought Baby S home from the hospital six months ago, his big sister, B, was instantly smitten. She leaned her curly head over his car seat, tickled his toes and cooed like a pro — in a voice squeakier than Mickey Mouse’s. B’s voice — already a happy toddler squeal — sounded as if she'd sucked in some helium. My husband and I wondered about her higher pitch. Are humans hardwired to chitchat squeakily to babies, or did B pick up vocal cues from us? (I don’t sound like that, do I?) If I’m like other mothers, I probably do. American English-speaking moms dial up their pitch drastically when talking to their children. But dads’ voices tend to stay steady, researchers reported May 19 in Pittsburgh at the 169th Meeting of the Acoustical Society of America. “Dads talk to kids like they talk to adults,” says study coauthor Mark VanDam, a speech scientist at Washington State University. But that doesn’t mean fathers are doing anything wrong, he says. Rather, they may be doing something right: offering their kids a kind of conversational bridge to the outside world. Scientists have studied infant- or child-directed speech (often called “motherese” or “parentese”) for decades. In American English, this type of babytalk typically uses high pitch, short utterances, repetition, loud volume and slowed-down speech. Mothers who speak German, Japanese, French, and other languages also tweak their pitch and pace when talking to children. But no one had really studied dads, VanDam says. © Society for Science & the Public 2000 - 2015.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 21050 - Posted: 06.15.2015

By Jason G. Goldman

In 1970 child welfare authorities in Los Angeles discovered that a 14-year-old girl referred to as “Genie” had been living in nearly total social isolation from birth. An unfortunate participant in an unintended experiment, Genie proved interesting to psychologists and linguists, who wondered whether she could still acquire language despite her lack of exposure to it. Genie did help researchers better define the critical period for learning speech—she quickly acquired a vocabulary but did not gain proficiency with grammar—but thankfully, that kind of case study comes along rarely. So scientists have turned to surrogates for isolation experiments. The approach is used extensively with parrots, songbirds and hummingbirds, which, like us, learn how to verbally communicate over time; those abilities are not innate. Studying most vocal-learning mammals—for example, elephants, whales, sea lions—is not practical, so Tel Aviv University zoologists Yosef Prat, Mor Taub and Yossi Yovel turned to the Egyptian fruit bat, a vocal-learning species that babbles before mastering communication, as a child does. The results of their study, the first to raise bats in a vocal vacuum, were published this spring in the journal Science Advances. Five bat pups were reared by their respective mothers in isolation, so the pups heard no adult conversations. After weaning, the juveniles were grouped together and exposed to adult bat chatter through a speaker. A second group of five bats was raised in a colony, hearing their species' vocal interactions from birth. Whereas the group-raised bats eventually swapped early babbling for adult communication, the isolated bats stuck with their immature vocalizations well into adolescence. © 2015 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20969 - Posted: 05.23.2015

by Bas den Hond

Watch your language. Words mean different things to different people – so the brainwaves they provoke could be a way to identify you. Blair Armstrong of the Basque Center on Cognition, Brain, and Language in Spain and his team recorded the brain signals of 45 volunteers as they read a list of 75 acronyms – such as FBI or DVD – then used computer programs to spot differences between individuals. The participants' responses varied enough that the programs could identify the volunteers with about 94 per cent accuracy when the experiment was repeated. The results hint that such brainwaves could be a way for security systems to verify individuals' identity. While the 94 per cent accuracy seen in this experiment would not be secure enough to guard, for example, a room or computer full of secrets, Armstrong says it's a promising start. Techniques for identifying people based on the electrical signals in their brain have been developed before. A desirable advantage of such techniques is that they could be used to verify someone's identity continuously, whereas passwords or fingerprints only provide a tool for one-off identification. Continuous verification – by face or ear recognition, or perhaps by monitoring brain activity – could in theory allow someone to interact with many computer systems simultaneously, or even with a variety of intelligent objects, without having to repeatedly enter passwords for each device. © Copyright Reed Business Information Ltd
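
The article does not name the "computer programs" involved; the sketch below shows the general shape of such an identification pipeline, training an off-the-shelf classifier to label which volunteer produced each brain response. The data are random stand-ins and the printed accuracy is not the study's figure; NumPy and scikit-learn are assumed.

```python
# Minimal sketch of person identification from brain-response features, in the
# spirit of the study above. The feature vectors below are random stand-ins,
# not EEG data; the study's actual methods and numbers are not reproduced here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 45, 75, 32  # 45 volunteers, 75 acronyms

# Fake per-trial feature vectors, one noisy cluster per subject.
X = np.vstack([rng.normal(loc=s, scale=2.0, size=(trials_per_subject, n_features))
               for s in range(n_subjects)])
y = np.repeat(np.arange(n_subjects), trials_per_subject)

# Train on some trials, then verify identity on held-out ("repeated") trials.
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC().fit(X_train, y_train)
print(f"identification accuracy: {clf.score(X_test, y_test):.1%}")
```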

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20959 - Posted: 05.20.2015

By Virginia Morell

Like humans, dolphins, and a few other animals, North Atlantic right whales (Eubalaena glacialis) have distinctive voices. The usually docile cetaceans utter about half a dozen different calls, but the way in which each one does so is unique. To find out just how unique, researchers from Syracuse University in New York analyzed the “upcalls” of 13 whales whose vocalizations had been collected from suction cup sensors attached to their backs. An upcall is a contact vocalization that lasts about 1 to 2 seconds and rises in frequency, sounding somewhat like a deep-throated cow’s moo. Researchers think the whales use the calls to announce themselves and to “touch base” with others of their kind, they explained in a poster presented today at the Meeting of the Acoustical Society of America in Pittsburgh, Pennsylvania. After analyzing the duration and harmonic frequency of these upcalls, as well as the rate at which the frequencies changed, the scientists found that they could distinguish the voices of each of the 13 whales. They think their discovery will provide a new tool for tracking and monitoring the critically endangered whales, which number about 450 and range primarily from Florida to Newfoundland. © 2015 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 20941 - Posted: 05.18.2015

by Jessica Hamzelou

GOO, bah, waahhhh! Crying is an obvious sign something is up with your little darling, but beyond that, their feelings are tricky to interpret – except at playtime. Trying to decipher the meaning behind the various cries, squeaks and babbles a baby utters will have consumed many a parent. Some researchers reckon babies are simply practising to learn to speak, while others think these noises have some underlying meaning. "Babies probably aren't aware of wanting to tell us something," says Jitka Lindová, an evolutionary psychologist at Charles University in Prague, Czech Republic. Instead, she says, infants are conveying their emotions. But can adults pick up on what those emotions are? Lindová and her colleagues put 333 adults to the test. First they made 20-second recordings of five- to 10-month-old babies while they were experiencing a range of emotions. For example, noises that meant a baby was experiencing pain were recorded while they received their standard vaccinations. The team also collected recordings when infants were hungry, separated from a parent, reunited, just fed, and while they were playing. The volunteers had to listen to a selection of the recordings then guess which situation each related to. The adults could almost always tell whether a baby was distressed in some way. This makes sense – a baby's survival may depend on an adult being able to tell whether a baby is unwell, in pain or in danger. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 20878 - Posted: 05.04.2015

by Jennifer Viegas

Males of a West African monkey species communicate using at least six main sounds: boom-boom, krak, krak-oo, hok, hok-oo and wak-oo. Key to the communication by the male Campbell's monkey is the suffix "oo," according to a new study, which is published in the latest issue of the Proceedings of the Royal Society B. By adding that sound to the end of their calls, the male monkeys have created a surprisingly rich "vocabulary" that males and females of their own kind, as well as a related species of monkey, understand. The study confirms previously suspected translations of the calls. For example, "krak" means leopard, while "krak-oo" refers to other non-leopard threats, such as falling branches. "Boom-boom-krak-oo" can roughly translate to, "Watch out for that falling tree branch." "Several aspects of communication in Campbell's monkeys allow us to draw parallels with human language," lead author Camille Coye, a researcher at the University of St. Andrews, told Discovery News. For the study, she and her team broadcast actual and artificially modified male Campbell's monkey calls to 42 male and female members of a related species: Diana monkeys. The latter's vocal responses showed that they understood the calls and replied in predicted ways. They freaked out after hearing "krak," for example, and remained on alert as they do after seeing a leopard. © 2015 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20858 - Posted: 04.29.2015

By Sid Perkins

Imagine having a different accent from someone else simply because your house was farther up the same hill. For at least one species of songbird, that appears to be the case. Researchers have found that the mating songs of male mountain chickadees (Poecile gambeli) differ in their duration, loudness, and the frequency ranges of individual chirps, depending in part on the elevation of their habitat in the Sierra Nevada mountains of the western United States. The songs also differed from those at similar elevations on a nearby peak. Young males of this species learn their breeding songs by listening to adult males during their first year of life, the researchers note. And because these birds don’t migrate as the seasons change, and young birds don’t settle far from where they grew up, it’s likely that the differences persist in each local group—the ornithological equivalent of having Southern drawls and Boston accents. Females may use the differences in dialect to distinguish local males from outsiders that may not be as well adapted to the neighborhood they’re trying to invade, the team reports today in Royal Society Open Science. © 2015 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20857 - Posted: 04.29.2015

By Virginia Morell

Baby common marmosets, small primates found in the forests of northeastern Brazil, must learn to take turns when calling, just as human infants learn not to interrupt. Even though the marmosets (Callithrix jacchus) don’t have language, they do exchange calls. And the discovery that a young marmoset learns to wait for another marmoset to finish its call before uttering its own sound may help us better understand the origins of human language, say scientists online today in the Proceedings of the Royal Society B. No primate, other than humans, is a vocal learner, with the ability to hear a sound and imitate it—a talent considered essential to speech. But the marmoset researchers say that primates still exchange calls in a manner reminiscent of having a conversation because they wait for another to finish calling before vocalizing—and that this ability is often overlooked in discussions about the evolution of language. If this skill is learned, it would be even more similar to that of humans, because human babies learn to do this while babbling with their mothers. In a lab, the researchers recorded the calls of a marmoset youngster from age 4 months to 12 months and those of its mother or father while they were separated by a dark curtain. In adult exchanges, a marmoset makes a high-pitched contact call, and its fellow responds within 10 seconds. The study showed that the youngster’s responses varied depending on who was calling to them. They were less likely to interrupt their mothers than their dads—and both mothers and fathers would give the kids the “silent treatment” if they were interrupted. Thus, the youngster learns the first rule of polite conversation: Don’t interrupt! © 2015 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20828 - Posted: 04.22.2015