Links for Keyword: Language



Links 1 - 20 of 532

Henry Nicholls Andy Russell had entered the lecture hall late and stood at the back, listening to the close of a talk by Marta Manser, an evolutionary biologist at the University of Zurich who works on animal communication. Manser was explaining some basic concepts in linguistics to her audience: how humans use meaningless sounds or “phonemes” to generate a vast dictionary of meaningful words. In English, for instance, just 40 different phonemes can be resampled into a rich vocabulary of some 200,000 words. But, explained Manser, this linguistic trick of reorganising the meaningless to create new meaning had not been demonstrated in any non-human animal. This was back in 2012. Russell’s “Holy shit, man” excitement was because he was pretty sure he had evidence for phoneme structuring in the chestnut-crowned babbler, a bird he’s been studying in the semi-arid deserts of south-east Australia for almost a decade. After the talk, Russell (a behavioural ecologist at the University of Exeter) travelled to Zurich to present his evidence to Manser’s colleague Simon Townsend, whose research explores the links between animal communication systems and human language. The fruits of their collaboration are published today in PLoS Biology. One of Russell’s students, Jodie Crane, had been recording the calls of the chestnut-crowned babbler for her PhD. The PLoS Biology paper focuses on two of these calls, which appear to be made up of two identical elements, just arranged in a different way. © 2015 Guardian News and Media Limited

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 21110 - Posted: 06.30.2015

Emma Bowman In a small, sparse makeshift lab, Melissa Malzkuhn practices her range of motion in a black, full-body unitard dotted with light-reflecting nodes. She's strapped on a motion capture, or mocap, suit. Infrared cameras that line the room will capture her movement and translate it into a 3-D character, or avatar, on a computer. But she's not making a Disney animated film. Three-dimensional motion capture has developed quickly in the last few years, most notably as a Hollywood production tool for computer animation in films like Planet of the Apes and Avatar. Behind the scenes though, leaders in the deaf community are taking on the technology to create and improve bilingual learning tools in American Sign Language. Malzkuhn has suited up to record a simple nursery rhyme. Being deaf herself, she spoke with NPR through an interpreter. "I know in English there's just a wealth of nursery rhymes available, but we really don't see as much in ASL," she says. "So we're gonna be doing some original work here in developing nursery rhymes." That's because sound-based rhymes don't cross over well into the visual language of ASL. Malzkuhn heads the Motion Light Lab, or ML2. It's the newest hub of the National Science Foundation Science of Learning Center, Visual Language and Visual Learning (VL2) at Gallaudet University, the premier school for deaf and hard of hearing students. © 2015 NPR

Related chapters from BP7e: Chapter 1: Biological Psychology: Scope and Outlook; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 1: An Introduction to Brain and Behavior; Chapter 15: Language and Our Divided Brain
Link ID: 21107 - Posted: 06.29.2015

By Sarah C. P. Williams Parrots are masters of mimicry, able to repeat hundreds of unique sounds, including human phrases, with uncanny accuracy. Now, scientists say they have pinpointed the neurons that turn these birds into copycats. The discovery could not only illuminate the origins of bird-speak, but might shed light on how new areas of the brain arise during evolution. Parrots, songbirds, and hummingbirds—which can all chirp different dialects, pick up new songs, and mimic sound—all have a “song nucleus” in their brain: a group of interconnected neurons that synchronizes singing and learning. But the exact boundaries of that region are fuzzy; some researchers define it as larger or smaller than others do, depending on what criteria they use to outline the area. And differences between the song nuclei of parrots—which can better imitate complex sounds—and those of other birds are hard to pinpoint. Neurobiologist Erich Jarvis of Duke University in Durham, North Carolina, was studying the activation of PVALB—a gene that had been previously found in songbirds—within the brains of parrots when he noticed something strange. Stained sections of deceased parrot brains revealed that the gene was turned on at distinct levels within two distinct areas of what he thought was the song nucleus of the birds’ brains. Sometimes, the gene was activated in a spherical central core of the nucleus. But other times, it was only active in an outer shell of cells surrounding that core. When he and collaborators looked more closely, they found that the inner core and the outer shell—like the chocolate and surrounding candy shell of an M&M—varied in many more ways as well.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 21091 - Posted: 06.25.2015

by Meghan Rosen When we brought Baby S home from the hospital six months ago, his big sister, B, was instantly smitten. She leaned her curly head over his car seat, tickled his toes and cooed like a pro — in a voice squeakier than Mickey Mouse’s. B’s voice — already a happy toddler squeal — sounded as if she'd sucked in some helium. My husband and I wondered about her higher pitch. Are humans hardwired to chitchat squeakily to babies, or did B pick up vocal cues from us? (I don’t sound like that, do I?) If I’m like other mothers, I probably do. American English-speaking moms dial up their pitch drastically when talking to their children. But dads’ voices tend to stay steady, researchers reported May 19 in Pittsburgh at the 169th Meeting of the Acoustical Society of America. “Dads talk to kids like they talk to adults,” says study coauthor Mark VanDam, a speech scientist at Washington State University. But that doesn’t mean fathers are doing anything wrong, he says. Rather, they may be doing something right: offering their kids a kind of conversational bridge to the outside world. Scientists have studied infant- or child-directed speech (often called “motherese” or “parentese”) for decades. In American English, this type of babytalk typically uses high pitch, short utterances, repetition, loud volume and slowed-down speech. Mothers who speak German, Japanese, French, and other languages also tweak their pitch and pace when talking to children. But no one had really studied dads, VanDam says. © Society for Science & the Public 2000 - 2015.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 21050 - Posted: 06.15.2015

By Jason G. Goldman In 1970 child welfare authorities in Los Angeles discovered that a 14-year-old girl referred to as “Genie” had been living in nearly total social isolation from birth. An unfortunate participant in an unintended experiment, Genie proved interesting to psychologists and linguists, who wondered whether she could still acquire language despite her lack of exposure to it. Genie did help researchers better define the critical period for learning speech—she quickly acquired a vocabulary but did not gain proficiency with grammar—but thankfully, that kind of case study comes along rarely. So scientists have turned to surrogates for isolation experiments. The approach is used extensively with parrots, songbirds and hummingbirds, which, like us, learn how to verbally communicate over time; those abilities are not innate. Studying most vocal-learning mammals—for example, elephants, whales, sea lions—is not practical, so Tel Aviv University zoologists Yosef Prat, Mor Taub and Yossi Yovel turned to the Egyptian fruit bat, a vocal-learning species that babbles before mastering communication, as a child does. The results of their study, the first to raise bats in a vocal vacuum, were published this spring in the journal Science Advances. Five bat pups were reared by their respective mothers in isolation, so the pups heard no adult conversations. After weaning, the juveniles were grouped together and exposed to adult bat chatter through a speaker. A second group of five bats was raised in a colony, hearing their species' vocal interactions from birth. Whereas the group-raised bats eventually swapped early babbling for adult communication, the isolated bats stuck with their immature vocalizations well into adolescence. © 2015 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20969 - Posted: 05.23.2015

by Bas den Hond Watch your language. Words mean different things to different people – so the brainwaves they provoke could be a way to identify you. Blair Armstrong of the Basque Center on Cognition, Brain, and Language in Spain and his team recorded the brain signals of 45 volunteers as they read a list of 75 acronyms – such as FBI or DVD – then used computer programs to spot differences between individuals. The participants' responses varied enough that the programs could identify the volunteers with about 94 per cent accuracy when the experiment was repeated. The results hint that such brainwaves could be a way for security systems to verify individuals' identity. While the 94 per cent accuracy seen in this experiment would not be secure enough to guard, for example, a room or computer full of secrets, Armstrong says it's a promising start. Techniques for identifying people based on the electrical signals in their brain have been developed before. A desirable advantage of such techniques is that they could be used to verify someone's identity continuously, whereas passwords or fingerprints only provide a tool for one-off identification. Continuous verification – by face or ear recognition, or perhaps by monitoring brain activity – could in theory allow someone to interact with many computer systems simultaneously, or even with a variety of intelligent objects, without having to repeatedly enter passwords for each device. © Copyright Reed Business Information Ltd

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20959 - Posted: 05.20.2015

By Virginia Morell Like humans, dolphins, and a few other animals, North Atlantic right whales (Eubalaena glacialis) have distinctive voices. The usually docile cetaceans utter about half a dozen different calls, but the way in which each one does so is unique. To find out just how unique, researchers from Syracuse University in New York analyzed the “upcalls” of 13 whales whose vocalizations had been collected from suction cup sensors attached to their backs. An upcall is a contact vocalization that lasts about 1 to 2 seconds and rises in frequency, sounding somewhat like a deep-throated cow’s moo. Researchers think the whales use the calls to announce themselves and to “touch base” with others of their kind, they explained in a poster presented today at the Meeting of the Acoustical Society of America in Pittsburgh, Pennsylvania. After analyzing the duration and harmonic frequency of these upcalls, as well as the rate at which the frequencies changed, the scientists found that they could distinguish the voices of each of the 13 whales. They think their discovery will provide a new tool for tracking and monitoring the critically endangered whales, which number about 450 and range primarily from Florida to Newfoundland. © 2015 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 20941 - Posted: 05.18.2015

by Jessica Hamzelou GOO, bah, waahhhh! Crying is an obvious sign something is up with your little darling, but beyond that, their feelings are tricky to interpret – except at playtime. Trying to decipher the meaning behind the various cries, squeaks and babbles a baby utters will have consumed many a parent. Some researchers reckon babies are simply practising to learn to speak, while others think these noises have some underlying meaning. "Babies probably aren't aware of wanting to tell us something," says Jitka Lindová, an evolutionary psychologist at Charles University in Prague, Czech Republic. Instead, she says, infants are conveying their emotions. But can adults pick up on what those emotions are? Lindová and her colleagues put 333 adults to the test. First they made 20-second recordings of five- to 10-month-old babies while they were experiencing a range of emotions. For example, noises that meant a baby was experiencing pain were recorded while they received their standard vaccinations. The team also collected recordings when infants were hungry, separated from a parent, reunited, just fed, and while they were playing. The volunteers had to listen to a selection of the recordings, then guess which situation each related to. The adults could almost always tell whether a baby was distressed in some way. This makes sense – a baby's survival may depend on an adult being able to tell whether a baby is unwell, in pain or in danger. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 20878 - Posted: 05.04.2015

by Jennifer Viegas Males of a West African monkey species communicate using at least these six main sounds: boom-boom, krak, krak-oo, hok, hok-oo and wak-oo. Key to the communication by the male Campbell's monkey is the suffix "oo," according to a new study, which is published in the latest issue of the Proceedings of the Royal Society B. By adding that sound to the end of their calls, the male monkeys have created a surprisingly rich "vocabulary" that males and females of their own kind, as well as a related species of monkey, understand. The study confirms prior suspected translations of the calls. For example, "krak" means leopard, while "krak-oo" refers to other non-leopard threats, such as falling branches. "Boom-boom-krak-oo" can roughly translate to, "Watch out for that falling tree branch." "Several aspects of communication in Campbell's monkeys allow us to draw parallels with human language," lead author Camille Coye, a researcher at the University of St. Andrews, told Discovery News. For the study, she and her team broadcast actual and artificially modified male Campbell's monkey calls to 42 male and female members of a related species: Diana monkeys. The latter's vocal responses showed that they understood the calls and replied in predicted ways. They freaked out after hearing "krak," for example, and remained on alert as they do after seeing a leopard. © 2015 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20858 - Posted: 04.29.2015

By Sid Perkins Imagine having a different accent from someone else simply because your house was farther up the same hill. For at least one species of songbird, that appears to be the case. Researchers have found that the mating songs of male mountain chickadees (Poecile gambeli, shown) differ in their duration, loudness, and the frequency ranges of individual chirps, depending in part on the elevation of their habitat in the Sierra Nevada mountains of the western United States. The songs also differed from those at similar elevations on a nearby peak. Young males of this species learn their breeding songs by listening to adult males during their first year of life, the researchers note. And because these birds don’t migrate as the seasons change, and young birds don’t settle far from where they grew up, it’s likely that the differences persist in each local group—the ornithological equivalent of having Southern drawls and Boston accents. Females may use the differences in dialect to distinguish local males from outsiders that may not be as well adapted to the neighborhood they’re trying to invade, the team reports today in Royal Society Open Science. © 2015 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20857 - Posted: 04.29.2015

By Virginia Morell Baby common marmosets, small primates found in the forests of northeastern Brazil, must learn to take turns when calling, just as human infants learn not to interrupt. Even though the marmosets (Callithrix jacchus) don’t have language, they do exchange calls. And the discovery that a young marmoset (as in the photo above) learns to wait for another marmoset to finish its call before uttering its own sound may help us better understand the origins of human language, say scientists online today in the Proceedings of the Royal Society B. No primate, other than humans, is a vocal learner, with the ability to hear a sound and imitate it—a talent considered essential to speech. But the marmoset researchers say that primates still exchange calls in a manner reminiscent of having a conversation because they wait for another to finish calling before vocalizing—and that this ability is often overlooked in discussions about the evolution of language. If this skill is learned, it would be even more similar to that of humans, because human babies learn to do this while babbling with their mothers. In a lab, the researchers recorded the calls of a marmoset youngster from age 4 months to 12 months and those of its mother or father while they were separated by a dark curtain. In adult exchanges, a marmoset makes a high-pitched contact call (listen to a recording here), and its fellow responds within 10 seconds. The study showed that the youngster’s responses varied depending on who was calling to them. They were less likely to interrupt their mothers, but not their dads—and both mothers and fathers would give the kids the “silent treatment” if they were interrupted. Thus, the youngster learns the first rule of polite conversation: Don’t interrupt! © 2015 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20828 - Posted: 04.22.2015

Jordan Gaines Lewis Hodor hodor hodor. Hodor hodor? Hodor. Hodor-hodor. Hodor! Oh, um, excuse me. Did you catch what I said? Fans of the hit HBO show Game of Thrones, the fifth season of which premieres this Sunday, know what I’m referencing, anyway. Hodor is the brawny, simple-minded stableboy of the Stark family in Winterfell. His defining characteristic, of course, is that he only speaks a single word: “Hodor.” But those who read the A Song of Ice and Fire book series by George R R Martin may know something that the TV fans don’t: his name isn’t actually Hodor. According to his great-grandmother Old Nan, his real name is Walder. “No one knew where ‘Hodor’ had come from,” she says, “but when he started saying it, they started calling him by it. It was the only word he had.” Whether he intended it or not, Martin created a character who is a textbook example of someone with a neurological condition called expressive aphasia. In 1861, French physician Paul Broca was introduced to a man named Louis-Victor Leborgne. While his comprehension and mental functioning remained relatively normal, Leborgne progressively lost the ability to produce meaningful speech over a period of 20 years. Like Hodor, the man was nicknamed Tan because he only spoke a single word: “Tan.”

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20773 - Posted: 04.10.2015

Tom Bawden Scientists have deciphered the secrets of gibbon “speech” – discovering that the apes are sophisticated communicators employing a range of more than 450 different calls to talk to their companions. The research is so significant that it could provide clues on the evolution of human speech and also suggests that other animal species could speak a more precise language than has been previously thought, according to lead author Dr Esther Clarke of Durham University. Her study found that gibbons produce different categories of “hoo” calls – relatively quiet sounds that are distinct from their more melodic “song” calls. These categories of call allow the animals to distinguish when their fellow gibbons are foraging for food, alerting them to distant noises or warning others about the presence of predators. In addition, Dr Clarke found that each category of “hoo” call can be broken down further, allowing gibbons to be even more specific in their communication. A warning about lurking raptor birds, for example, sounds different to one about pythons or clouded leopards – being pitched at a particularly low frequency to ensure it is too deep for the birds of prey to hear. The warning call denoting the presence of tigers and leopards is the same because they belong to the same class of big cats, the research found. © independent.co.uk

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20768 - Posted: 04.08.2015

Alice Park We start to talk before we can read, so hearing words, and getting familiar with their sounds, is obviously a critical part of learning a language. But in order to read, and especially in order to read quickly, our brains have to “see” words as well. At least that’s what Maximilian Riesenhuber, a neuroscientist at Georgetown University Medical Center, and his colleagues found in an intriguing brain-mapping study published in the Journal of Neuroscience. The scientists recruited a small group of college students to learn a set of 150 nonsense words, and they imaged their brains before and after the training. Before they learned the words, their brains registered them as a jumble of symbols. But after they were trained to give them a meaning, the words looked more like familiar words they used every day, like car, cat or apple. The difference in the way the brain treated the words involved “seeing” them rather than sounding them out. The closest analogy would be for adults learning a foreign language based on a completely different alphabet system. Students would have to first learn the new alphabet, assigning sounds to each symbol, and in order to read, they would have to sound out each letter to put words together. In a person’s native language, such reading occurs in an entirely different way.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20719 - Posted: 03.25.2015

By Nicholas Weiler Where did the thief go? You might get a more accurate answer if you ask the question in German. How did she get away? Now you might want to switch to English. Speakers of the two languages put different emphasis on actions and their consequences, influencing the way they think about the world, according to a new study. The work also finds that bilinguals may get the best of both worldviews, as their thinking can be more flexible. Cognitive scientists have debated whether your native language shapes how you think since the 1940s. The idea has seen a revival in recent decades, as a growing number of studies suggested that language can prompt speakers to pay attention to certain features of the world. Russian speakers are faster to distinguish shades of blue than English speakers, for example. And Japanese speakers tend to group objects by material rather than shape, whereas Koreans focus on how tightly objects fit together. Still, skeptics argue that such results are laboratory artifacts, or at best reflect cultural differences between speakers that are unrelated to language. In the new study, researchers turned to people who speak multiple languages. By studying bilinguals, “we’re taking that classic debate and turning it on its head,” says psycholinguist Panos Athanasopoulos of Lancaster University in the United Kingdom. Rather than ask whether speakers of different languages have different minds, he says, “we ask, ‘Can two different minds exist within one person?’ ” Athanasopoulos and colleagues were interested in a particular difference in how English and German speakers treat events. © 2015 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 14: Attention and Consciousness
Link ID: 20700 - Posted: 03.19.2015

By Matthew J.X. Malady One hour and seven minutes into the decidedly hit-or-miss 1996 comedy Black Sheep, the wiseass sidekick character played by David Spade finds himself at an unusually pronounced loss for words. While riding in a car driven by Chris Farley’s character, he glances at a fold-up map and realizes he somehow has become unfamiliar with the name for paved driving surfaces. “Robes? Rouges? Rudes?” Nothing seems right. Even when informed by Farley that the word he’s looking for is roads, Spade’s character continues to struggle: “Rowds. Row-ads.” By this point, he’s become transfixed. “That’s a total weird word,” he says, “isn’t it?” Now, it’s perhaps necessary to mention that, in the context of the film, Spade’s character is high off nitrous oxide that has leaked from the car’s engine boosters. But never mind that. Row-ad-type word wig outs similar to the one portrayed in that movie are things that actually happen, in real life, to people with full and total control over their mental capacities. These wordnesias sneak up on us at odd times when we’re writing or reading text. Here’s how they work: Every now and again, for no good or apparent reason, you peer at a standard, uncomplicated word in a section of text and, well, go all row-ads on it. If you’re typing, that means inexplicably blanking on how to spell something easy like cake or design. The reading version of wordnesia occurs when a common, correctly spelled word either seems as though it can’t possibly be spelled correctly, or like it’s some bizarre combination of letters you’ve never before seen—a grouping that, in some cases, you can’t even imagine being the proper way to compose the relevant term. © 2014 The Slate Group LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20688 - Posted: 03.14.2015

Ewen Callaway A mysterious group of humans from the east stormed western Europe 4,500 years ago — bringing with them technologies such as the wheel, as well as a language that is the forebear of many modern tongues, suggests one of the largest studies of ancient DNA yet conducted. Vestiges of these eastern émigrés exist in the genomes of nearly all contemporary Europeans, according to the authors, who analysed genome data from nearly 100 ancient Europeans [1]. The first Homo sapiens to colonize Europe were hunter-gatherers who arrived from Africa, by way of the Middle East, around 45,000 years ago. (Neanderthals and other archaic human species had begun roaming the continent much earlier.) Archaeology and ancient DNA suggest that farmers from the Middle East started streaming in around 8,000 years ago, replacing the hunter-gatherers in some areas and mixing with them in others. But last year, a study of the genomes of ancient and contemporary Europeans found echoes not only of these two waves from the Middle East, but also of an enigmatic third group that they said could be from farther east [2] (see 'Ancient European genomes reveal jumbled ancestry'). To further pin down the origins of this ghost lineage, a team led by David Reich, an evolutionary and population geneticist at Harvard Medical School in Boston, Massachusetts, analysed nuclear DNA from the bodies of 69 individuals who lived across Europe between 8,000 and 3,000 years ago. They also examined previously published genome data from another 25 ancient Europeans, including Ötzi, the 5,300-year-old 'ice man' who was discovered on the Italian-Austrian border. © 2015 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20571 - Posted: 02.13.2015

by Andy Coghlan Apple's the word. Chimpanzees can learn to grunt "apple" in two chimp languages – a finding that questions how unique our own language abilities are. Researchers kept records of the vocalisations of a group of adult chimps from the Netherlands before and after their move to Edinburgh Zoo. Three years later, recordings show, the Dutch chimps had picked up the pronunciation of their Scottish hosts. The finding challenges the prevailing theory that chimp words for objects are fixed because they result from excited, involuntary outbursts. Humans can easily learn foreign words that refer to a specific object, and it was assumed that chimps and other animals could not, perhaps owing to their different brain structure. This has long been argued to be one of the talents making humans unique. The assumption has been that animals do not have control over the sounds they make, whereas we socially learn the labels for things – which is what separates us from animals, says Katie Slocombe of the University of York, UK. But this may be wrong, it seems. "The important thing we've now shown is that with the food calls, they changed the structure to fit in with their new group members, so the Dutch calls for 'apple' changed to the Edinburgh ones," says Slocombe. "It's the first time call structure has been dissociated from emotional outbursts." © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20560 - Posted: 02.07.2015

By Ling Xin For many, the hardest part of learning to speak Chinese is mastering its complex tonal variations. Now, new research suggests a surprising explanation for how those tones arose: a humid climate. By examining the correlation between humidity and the role of tone in more than 3700 languages, scientists found that tonal languages are remarkably rare in arid regions like Central Europe, whereas languages with complex tone pitches are prevalent in relatively humid regions such as the tropics, subtropical Asia, and Central Africa. Humidity keeps the voice box moist and elastic, allowing it to produce correct and complex tones, the scientists explain online this month in the Proceedings of the National Academy of Sciences. “If the United Kingdom had been a humid jungle, English may also have developed into a tonal language,” they claim. So, next time you go to your Chinese class, don’t forget to wet your whistle! © 2015 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20517 - Posted: 01.26.2015

by Jennifer Viegas Researchers eavesdropping on wild chimpanzees determined that the primates communicate about at least two things: their favorite yummy fruits, and the trees where these fruits can be found. Of particular interest to the chimps is the size of trees bearing the fruits that they relish most, such that the chimps yell out that information, according to a new study published in the journal Animal Behaviour. The study is the first to find that information about tree size and available fruit amounts is included in chimp calls, in addition to assessments about food quality. "Chimpanzees definitely have a very complex communication system that includes a variety of vocalizations, but also facial expressions and gestures," project leader Ammie Kalan of the Max Planck Institute for Evolutionary Anthropology told Discovery News. "How much it resembles human language is still a matter of debate," she added, "but at the very least, research shows that chimpanzees use vocalizations in a sophisticated manner, taking into account their social and environmental surroundings." Kalan and colleagues Roger Mundry and Christophe Boesch spent over 750 hours observing chimps and analyzing their food calls in the Ivory Coast's Taï Forest. The Wild Chimpanzee Foundation in West Africa is working hard to protect this population of chimps, which is one of the last wild populations of our primate cousins. © 2015 Discovery Communications, LLC

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20497 - Posted: 01.20.2015