Links for Keyword: Language



Links 1 - 20 of 513

by Jennifer Viegas Researchers eavesdropping on wild chimpanzees determined that the primates communicate about at least two things: their favorite fruits, and the trees where those fruits can be found. Of particular interest to the chimps is the size of trees bearing the fruits they relish most, and the chimps call out that information, according to a new study published in the journal Animal Behaviour. The study is the first to find that information about tree size and available fruit amounts is included in chimp calls, in addition to assessments of food quality. "Chimpanzees definitely have a very complex communication system that includes a variety of vocalizations, but also facial expressions and gestures," project leader Ammie Kalan of the Max Planck Institute for Evolutionary Anthropology told Discovery News. "How much it resembles human language is still a matter of debate," she added, "but at the very least, research shows that chimpanzees use vocalizations in a sophisticated manner, taking into account their social and environmental surroundings." Kalan and colleagues Roger Mundry and Christophe Boesch spent over 750 hours observing chimps and analyzing their food calls in the Ivory Coast's Taï Forest. The Wild Chimpanzee Foundation in West Africa is working hard to protect this population of chimps, one of the last wild populations of our primate cousins. © 2015 Discovery Communications, LLC

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20497 - Posted: 01.20.2015

By Michael Balter If there’s one thing that distinguishes humans from other animals, it’s our ability to use language. But when and why did this trait evolve? A new study concludes that the art of conversation may have arisen early in human evolution, because it made it easier for our ancestors to teach each other how to make stone tools—a skill that was crucial for the spectacular success of our lineage. Researchers have long debated when humans started talking to each other. Estimates range widely, from as late as 50,000 years ago to as early as the beginning of the human genus more than 2 million years ago. But words leave no traces in the archaeological record. So researchers have used proxy indicators for symbolic abilities, such as early art or sophisticated toolmaking skills. Yet these indirect approaches have failed to resolve arguments about language origins. Now, a team led by Thomas Morgan, a psychologist at the University of California, Berkeley, has attacked the problem in a very different way. Rather than considering toolmaking as a proxy for language ability, he and his colleagues explored the way that language may help modern humans learn to make such tools. The researchers recruited 184 students from the University of St. Andrews in the United Kingdom, where some members of the team were based, and organized them into five groups. The first person in each group was taught by archaeologists how to make artifacts called Oldowan tools, which include fairly simple stone flakes that were manufactured by early humans beginning about 2.5 million years ago. This technology, named after the famous Olduvai Gorge in Tanzania where archaeologists Louis and Mary Leakey discovered the implements in the 1930s, consists of hitting a stone “core” with a stone “hammer” in such a way that a flake sharp enough to butcher an animal is struck off. Producing a useful flake requires hitting the core at just the right place and angle. © 2015 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20482 - Posted: 01.14.2015

By DOUGLAS QUENQUA A sparrow’s song may sound simple, consisting of little more than whistles and trills. But to the sparrows, those few noises can take on vastly different meanings depending on small variations in context and repetition, researchers have found. In humans, the ability to extract nearly endless meanings from a finite number of sounds, known as partial phonemic overlapping, was key to the development of language. To see whether sparrows shared this ability, researchers at Duke University recorded and analyzed the songs of more than 200 Pennsylvania swamp sparrows. They found that the sparrows’ whistles could be divided into three lengths: short, intermediate and long. The researchers then played the sparrows two versions of the songs — the original and a slightly altered one. They found that replacing a single short whistle with an intermediate one, for example, could significantly alter a bird’s reaction, but only if it came at the right moment in the song. “Identical sounds seemed to belong to a different category depending on the context,” said Robert F. Lachlan, a biologist now with Queen Mary University of London and the lead author of the study. The findings, which were published in Proceedings of the National Academy of Sciences, are part of a larger effort to better understand how human language evolved. If even birds rely on phonemic overlapping to communicate, Dr. Lachlan said, it could indicate that such features “developed independently of higher aspects of language.” © 2015 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 20471 - Posted: 01.13.2015

by Lisa Seachrist Chiu Just before winter break, my fifth grader came home from school, opened her mouth and produced what sounded to me like a stuttering mess of gibberish. After complaining that when she spends the entire day immersed in Chinese, she sometimes can’t figure out what language to use, she carried on speaking flawless English to me and Chinese to a friend while they did their homework. Quite honestly, I had been eagerly anticipating this very day for a long time. Having worked several years to establish the Chinese language immersion elementary school my daughter attends, I could barely contain my excitement at this demonstration that she truly grasps a second language. Early language programs are hot, in no small part because, when it comes to language, kids under the age of 7 are geniuses. Like many parents, I wanted my child to be fluent in as many languages as possible so she can communicate with more people and because it gives her a prime tool to explore different cultures. Turns out, it may also benefit her brain. With the help of advanced imaging tools that reveal neural processes in specific brain structures, researchers are coalescing around the idea that fluency in more than one language heightens executive function — the ability to regulate and control cognitive processes. It’s a radical shift from just a few decades ago when psychologists routinely warned against raising children who speak two languages, lest they become confused and suffer delays in learning. © Society for Science & the Public 2000 - 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 20448 - Posted: 01.01.2015

By Joshua A. Krisch There is a mystery on Tiwai Island. A large wildlife sanctuary in Sierra Leone, the island is home to pygmy hippopotamuses, hundreds of bird species and several species of primates, including Campbell’s monkeys. These monkeys communicate via an advanced language that primatologists and linguists have been studying for decades. Over time, experts nearly cracked the code behind monkey vocabulary. And then came krak. In the Ivory Coast’s Tai Forest, Campbell’s monkeys (Cercopithecus campbelli) use the term krak to indicate that a leopard is nearby and the term hok to warn of an eagle circling overhead. Primatologists indexed their monkey lexicon accordingly. But on Tiwai Island they found that those same monkeys used krak as a general alarm call—one that, occasionally, even referred to eagles. “Why on Earth were they producing krak when they heard an eagle?” asks co-author Philippe Schlenker, a linguist at France’s National Center for Scientific Research and professor at New York University. “For some reason krak, which is a leopard in the Tai Forest, seems to be recycled as a general alarm call on Tiwai Island.” In a paper published in the November 28 issue of Linguistics and Philosophy, Schlenker and his team applied logic and human linguistics to crack the krak code. Their findings imply that some monkey dialects can be just as sophisticated as human language. In 2009 a team of scientists traveled to the Tai Forest with one mission—to terrify Campbell’s monkeys. Prior studies had collected monkey calls and then parsed vague meanings based on events that were already happening on the forest floor. But these primatologists set up realistic model leopards and played recordings of eagle screeches over loudspeakers. Their field experiments resulted in some of the best data available about how monkeys verbally respond to predators. © 2014 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20432 - Posted: 12.20.2014

By Marissa Fessenden Songbirds stutter, babble when young, become mute if parts of their brains are damaged, learn how to sing from their elders and can even be "bilingual"—in other words, songbirds' vocalizations share a lot of traits with human speech. However, that similarity goes beyond behavior, researchers have found. Even though humans and birds are separated by millions of years of evolution, the genes that give us our ability to learn speech have much in common with those that lend birds their warble. A four-year effort involving more than 100 researchers around the world put the power of nine supercomputers into analyzing the genomes of 48 species of birds. The results, published this week in a package of eight articles in Science and 20 papers in other journals, provide the most complete picture of the bird family tree thus far. The project has also uncovered genetic signatures in song-learning bird brains that have surprising similarities to the genetics of speech in humans, a finding that could help scientists study human speech. The analysis suggests that most modern birds arose in an impressive speciation event, a "big bang" of avian diversification, in the 10 million years immediately following the extinction of dinosaurs. This period is more recent than posited in previous genetic analyses, but it lines up with the fossil record. By delving deeper into the rich data set, research groups identified when birds lost their teeth, investigated the relatively slow evolution of crocodiles and outlined the similarities between birds' and humans' vocal learning ability, among other findings. © 2014 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 20423 - Posted: 12.16.2014

by Colin Barras It's not just great minds that think alike. Dozens of the genes involved in the vocal learning that underpins human speech are also active in some songbirds. And knowing this suggests that birds could become a standard model for investigating the genetics of speech production – and speech disorders. Complex language is a uniquely human trait, but vocal learning – the ability to pick up new sounds by imitating others – is not. Some mammals, including whales, dolphins and elephants, share our ability to learn new vocalisations. So do three groups of birds: the songbirds, parrots and hummingbirds. The similarities between vocal learning in humans and birds are not just superficial. We know, for instance, that songbirds have specialised vocal learning brain circuits that are similar to those that mediate human speech. What's more, a decade ago we learned that FOXP2, a gene known to be involved in human language, is also active in "area X" of the songbird brain – one of the brain regions involved in those specialised vocal learning circuits. Andreas Pfenning at the Massachusetts Institute of Technology and his colleagues have now built on these discoveries. They compared maps of genetic activity – transcriptomes – in brain tissue taken from the zebra finch, budgerigar and Anna's hummingbird, representing the three groups of vocal-learning birds. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20414 - Posted: 12.13.2014

By JOHN McWHORTER “TELL me, why should we care?” he asks. It’s a question I can expect whenever I do a lecture about the looming extinction of most of the world’s 6,000 languages, a great many of which are spoken by small groups of indigenous people. For some reason the question is almost always posed by a man seated in a row somewhere near the back. Asked to elaborate, he says that if indigenous people want to give up their ancestral language to join the modern world, why should we consider it a tragedy? Languages have always died as time has passed. What’s so special about a language? The answer I’m supposed to give is that each language, in the way it applies words to things and in the way its grammar works, is a unique window on the world. In Russian there’s no word just for blue; you have to specify whether you mean dark or light blue. In Chinese, you don’t say next week and last week but the week below and the week above. If a language dies, a fascinating way of thinking dies along with it. I used to say something like that, but lately I have changed my answer. Certainly, experiments do show that a language can have a fascinating effect on how its speakers think. Russian speakers are on average 124 milliseconds faster than English speakers at identifying when dark blue shades into light blue. A French person is a tad more likely than an Anglophone to imagine a table as having a high voice if it were a cartoon character, because the word is marked as feminine in his language. This is cool stuff. But the question is whether such infinitesimal differences, perceptible only in a laboratory, qualify as worldviews — cultural standpoints or ways of thinking that we consider important. I think the answer is no. Furthermore, extrapolating cognitive implications from language differences is a delicate business. In Mandarin Chinese, for example, you can express If you had seen my sister, you’d have known she was pregnant with the same sentence you would use to express the more basic If you see my sister, you know she’s pregnant. One psychologist argued some decades ago that this meant that Chinese makes a person less sensitive to such distinctions, which, let’s face it, is discomfitingly close to saying Chinese people aren’t as quick on the uptake as the rest of us. The truth is more mundane: Hypotheticality and counterfactuality are established more by context in Chinese than in English. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20401 - Posted: 12.08.2014

By Carolyn Gregoire When reading about Harry Potter's adventures fighting Lord Voldemort or flying around the Quidditch field on his broomstick, we can become so absorbed in the story that the characters and events start to feel real. And according to neuroscientists, there's a good reason for this. Researchers in the Machine Learning Department at Carnegie Mellon University scanned the brains of Harry Potter readers, and found that reading about Harry's adventures activates the same brain regions used to perceive people's intentions and actions in the real world. The researchers performed fMRI scans on a group of eight study participants while they read chapter nine of Harry Potter and the Sorcerer's Stone, which describes Harry's first flying lesson. Then, they analyzed the scans, one cubic millimeter at a time, for four-word segments of the chapter in order to build the first integrated computational model of reading. During each two-second fMRI scan, readers saw four words, and for each word the researchers identified 195 detailed features that the brain would process. An algorithm then analyzed the activation of each cubic millimeter of the brain for each two-second scan, associating the various word features with different regions of the brain. Using the model, the researchers were able to predict which of two passages the subjects were reading with 74 percent accuracy. ©2014 TheHuffingtonPost.com, Inc
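The published model is described in the paper itself; purely as a loose illustration of the general recipe (per-word features are regressed onto voxel activity, and the fitted model is then used to score candidate passages), here is a minimal sketch on synthetic data. The dimensions, the ridge regression, and all variable names are assumptions for illustration, not the authors' actual pipeline.

    # Minimal sketch of a feature-based fMRI encoding/decoding model, loosely
    # in the spirit of the study described above. All data are synthetic and
    # all dimensions are illustrative only.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_scans, n_features, n_voxels = 300, 195, 5000   # 195 features per word group
    X = rng.normal(size=(n_scans, n_features))       # features of words shown per 2 s scan
    W_true = rng.normal(size=(n_features, n_voxels)) # hidden feature-to-voxel map
    Y = X @ W_true + rng.normal(scale=5.0, size=(n_scans, n_voxels))  # voxel activity

    # Encoding model: predict each voxel's activity from the word features.
    model = Ridge(alpha=10.0).fit(X, Y)

    def passage_score(features, scan):
        """Correlation between a scan and the activity predicted for a passage."""
        pred = model.predict(features.reshape(1, -1)).ravel()
        return np.corrcoef(pred, scan)[0, 1]

    # Decoding: given a held-out scan, pick which of two candidate passages
    # (feature vectors) better explains it, as in the two-passage test above.
    scan = X[0] @ W_true + rng.normal(scale=5.0, size=n_voxels)
    candidates = [X[0], rng.normal(size=n_features)]
    print("decoded passage:", int(np.argmax([passage_score(f, scan) for f in candidates])))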

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20386 - Posted: 12.03.2014

By Gabe Bergado It's not news that reading has countless benefits: Poetry stimulates parts of the brain linked to memory and sparks self-reflection; kids who read the Harry Potter books tend to be better people. But what about people who only read newspapers? Or people who scan Twitter all day? Are those readers' brains different from literary junkies who peruse the pages of 19th century fictional classics? Short answer: Yes — reading enhances connectivity in the brain. But readers of fiction? They're a special breed. The study: A 2013 Emory University study looked at the brains of fiction readers. Researchers compared the brains of people after they read to the brains of people who didn't read. The brains of the readers — they read Robert Harris' Pompeii at night over a nine-day period — showed more activity in certain areas than those who didn't read. Specifically, researchers found heightened connectivity in the left temporal cortex, part of the brain typically associated with understanding language. The researchers also found increased connectivity in the central sulcus of the brain, the primary sensory region, which helps the brain visualize movement. When you visualize yourself scoring a touchdown while playing football, you can actually somewhat feel yourself in the action. A similar process happens when you envision yourself as a character in a book: You can take on the emotions they are feeling. It may sound like hooey, but it's true: Fiction readers make great friends, as they tend to be more aware of others' emotions. Copyright © Mic Network Inc.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 11: Emotions, Aggression, and Stress
Link ID: 20385 - Posted: 12.03.2014

By Jason G. Goldman A sharp cry pierces the air. Soon a worried mother deer approaches the source of the sound, expecting to find her fawn. But the sound is coming from a speaker system, and the call isn't that of a baby deer at all. It's an infant fur seal's. Because deer and seals do not live in the same habitats, mother deer should not know how baby seal screams sound, reasoned biologists Susan Lingle of the University of Winnipeg and Tobias Riede of Midwestern University, who were running the acoustic experiment. So why did a mother deer react with concern? Over two summers, the researchers treated herds of mule deer and white-tailed deer on a Canadian farm to modified recordings of the cries of a wide variety of infant mammals—elands, marmots, bats, fur seals, sea lions, domestic cats, dogs and humans. By observing how mother deer responded, Lingle and Riede discovered that as long as the fundamental frequency was similar to that of their own infants' calls, those mothers approached the speaker as if they were looking for their offspring. Such a reaction suggests deep commonalities among the cries of most young mammals. (The mother deer did not show concern for white noise, birdcalls or coyote barks.) Lingle and Riede published their findings in October in the American Naturalist. Researchers had previously proposed that sounds made by different animals during similar experiences—when they were in pain, for example—would share acoustic traits. “As humans, we often ‘feel’ for the cry of young animals,” Lingle says. That empathy may arise because emotions are expressed in vocally similar ways among mammals. © 2014 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20368 - Posted: 11.29.2014

by Aviva Rutkin THERE is only one real rule to conversing with a baby: talking is better than not talking. But that one rule can make a lifetime of difference. That's the message that the US state of Georgia hopes to send with Talk With Me Baby, a public health programme devoted to the art of baby talk. Starting in January, nurses will be trained in the best way to speak to babies to help them learn language, based on what the latest neuroscience says. Then they, along with teachers and nutritionists, will model this good behaviour for the parents they meet. Georgia hopes to expose every child born in 2015 in the Atlanta area to this speaking style; by 2018, the hope is to reach all 130,000 or so newborns across the state. Talk With Me Baby is the latest and largest attempt to provide "language nutrition" to infants in the US – a rich quantity and variety of words supplied at a critical time in the brain's development. Similar initiatives have popped up in Providence, Rhode Island, where children have been wearing high-tech vests that track every word they hear, and Hollywood, where the Clinton Foundation has encouraged television shows like Parenthood and Orange is the New Black to feature scenes demonstrating good baby talk. "The idea is that language is as important to the brain as food is to physical growth," says Arianne Weldon, director of Get Georgia Reading, one of several partner organisations involved in Talk With Me Baby. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 20367 - Posted: 11.29.2014

Mo Costandi A team of neuroscientists in America say they have rediscovered an important neural pathway that was first described in the late nineteenth century but then mysteriously disappeared from the scientific literature until very recently. In a study published today in Proceedings of the National Academy of Sciences, they confirm that the prominent white matter tract is present in the human brain, and argue that it plays an important and unique role in the processing of visual information. The vertical occipital fasciculus (VOF) is a large flat bundle of nerve fibres that forms long-range connections between sub-regions of the visual system at the back of the brain. It was originally discovered by the German neurologist Carl Wernicke, who had by then published his classic studies of stroke patients with language deficits, and was studying neuroanatomy in Theodor Meynert’s laboratory at the University of Vienna. Wernicke saw the VOF in slices of monkey brain, and included it in his 1881 brain atlas, naming it the senkrechte occipitalbündel, or ‘vertical occipital bundle’. Meynert - himself a pioneering neuroanatomist and psychiatrist, whose other students included Sigmund Freud and Sergei Korsakov - refused to accept Wernicke’s discovery, however. He had already described the brain’s white matter tracts, and had arrived at the general principle that they are oriented horizontally, running mostly from front to back within each hemisphere. But the pathway Wernicke had described ran vertically. Another of Meynert’s students, Heinrich Obersteiner, identified the VOF in the human brain, and mentioned it in his 1888 textbook, calling it the senkrechte occipitalbündel in one illustration, and the fasciculus occipitalis perpendicularis in another. So, too, did Heinrich Sachs, a student of Wernicke’s, who labeled it the stratum profundum convexitatis in his 1892 white matter atlas. © 2014 Guardian News and Media Limited

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20333 - Posted: 11.20.2014

By David Shultz WASHINGTON, D.C.—Reciting the days of the week is a trivial task for most of us, but then, most of us don’t have cooling probes in our brains. Scientists have discovered that by applying a small electrical cooling device to the brain during surgery they could slow down and distort speech patterns in patients. When the probe was activated in some regions of the brain associated with language and talking—like the premotor cortex—the patients’ speech became garbled and distorted, the team reported here yesterday at the Society for Neuroscience’s annual meeting. As scientists moved the probe to other speech regions, such as the pars opercularis, the distortion lessened, but speech patterns slowed. “What emerged was this orderly map,” says team leader Michael Long, a neuroscientist at the New York University School of Medicine in New York City. The results suggest that one region of the brain organizes the rhythm and flow of language while another is responsible for the actual articulation of the words. The team was even able to map which word sounds were most likely to be elongated when the cooling probe was applied. “People preferentially stretched out their vowels,” Long says. “Instead of Tttuesssday, you get Tuuuesdaaay.” The technique is similar to the electrical probe stimulation that researchers have been using to identify the function of various brain regions, but the shocks often trigger epileptic seizures in sensitive patients. Long contends that the cooling probe is completely safe, and that in the future it may help neurosurgeons decide where to cut and where not to cut during surgery. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20328 - Posted: 11.20.2014

by Helen Thomson As you read this, your neurons are firing – that brain activity can now be decoded to reveal the silent words in your head. TALKING to yourself used to be a strictly private pastime. That's no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world. "If you're reading text in a newspaper or a book, you hear a voice in your own head," says Brian Pasley at the University of California, Berkeley. "We're trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak." When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words. In a previous study, Pasley and his colleagues recorded brain activity in people who already had electrodes implanted in their brain to treat epilepsy, while they listened to speech. The team found that certain neurons in the brain's temporal lobe were only active in response to certain aspects of sound, such as a specific frequency. One set of neurons might only react to sound waves that had a frequency of 1000 hertz, for example, while another set only cares about those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode the words heard based on neural activity alone (PLoS Biology, doi.org/fzv269). © Copyright Reed Business Information Ltd.
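A decoder of this sort rests on exactly that kind of frequency tuning: if each recorded channel responds preferentially to certain bands of the sound spectrum, a linear map can be fitted from neural responses back to the sound's spectrogram. The sketch below illustrates such stimulus reconstruction on synthetic data; the dimensions and the plain least-squares fit are assumptions chosen for brevity, not the published model.

    # Sketch of linear stimulus reconstruction: recover a sound spectrogram
    # from the activity of frequency-tuned neural channels. Synthetic data only.
    import numpy as np

    rng = np.random.default_rng(1)
    n_t, n_freq, n_chan = 500, 32, 64   # time bins, spectral bands, electrodes

    # Stand-in spectrogram of the heard speech (a smooth random signal).
    S = np.abs(rng.normal(size=(n_t, n_freq))).cumsum(axis=0)
    S -= S.mean(axis=0)

    # Each channel is tuned to a mix of frequency bands (its receptive field).
    tuning = rng.normal(size=(n_freq, n_chan))
    R = S @ tuning + rng.normal(scale=0.5, size=(n_t, n_chan))  # neural responses

    # Fit the decoder on a training segment: least-squares map from
    # responses back to the spectrogram.
    train = slice(0, 400)
    G, *_ = np.linalg.lstsq(R[train], S[train], rcond=None)

    # Reconstruct unseen speech from neural activity alone and check the fit.
    S_hat = R[400:] @ G
    r = np.corrcoef(S_hat.ravel(), S[400:].ravel())[0, 1]
    print(f"reconstruction accuracy (correlation): {r:.2f}")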

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20267 - Posted: 11.01.2014

By Virginia Morell Human fetuses are clever students, able to distinguish male from female voices and the voices of their mothers from those of strangers between 32 and 39 weeks after conception. Now, researchers have demonstrated that the embryos of the superb fairy-wren (Malurus cyaneus, pictured), an Australian songbird, also learn to discriminate among the calls they hear. The scientists played 1-minute recordings to 43 fairy-wren eggs collected from nests in the wild. The eggs were between days 9 and 13 of a 13- to 14-day incubation period. The sounds included white noise, a contact call of a winter wren, or a female fairy-wren’s incubation call. Those embryos that listened to the fairy-wrens’ incubation calls and the contact calls of the winter wrens lowered their heart rates, a sign that they were learning to discriminate between the calls of a different species and those of their own kind, the researchers report online today in the Proceedings of the Royal Society B. (None showed this response to the white noise.) Thus, even before hatching, these small birds’ brains are engaged in tasks requiring attention, learning, and possibly memory—the first time embryonic learning has been seen outside humans, the scientists say. The behavior is key because fairy-wren embryos must learn a password from their mothers’ incubation calls; otherwise, they’re less successful at soliciting food from their parents after hatching. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 20254 - Posted: 10.29.2014

By Virginia Morell Two years ago, scientists showed that dolphins imitate the sounds of whales. Now, it seems, whales have returned the favor. Researchers analyzed the vocal repertoires of 10 captive orcas (Orcinus orca), three of which lived with bottlenose dolphins (Tursiops truncatus) and the rest with their own kind. Of the 1551 vocalizations the seven orcas housed with their own kind made, more than 95% were the typical pulsed calls of killer whales. In contrast, the three orcas that had only dolphins as pals busily whistled and emitted dolphinlike click trains and terminal buzzes, the scientists report in the October issue of The Journal of the Acoustical Society of America. (Watch a video as bioacoustician and co-author Ann Bowles describes the difference between killer whale calls and dolphin whistles.) The findings make orcas one of the few species of animals that, like humans, is capable of vocal learning—a talent considered a key underpinning of language. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 20173 - Posted: 10.08.2014

By Meeri Kim From ultrasonic bat chirps to eerie whale songs, the animal kingdom is a noisy place. While some sounds might have meaning — typically something like “I'm a male, aren't I great?” — no other creatures have a true language except for us. Or do they? A new study on animal calls has found that the patterns of barks, whistles, and clicks from seven different species appear to be more complex than previously thought. The researchers used mathematical tests to see how well the sequences of sounds fit to models ranging in complexity. In fact, five species including the killer whale and free-tailed bat had communication behaviors that were definitively more language-like than random. The study was published online Wednesday in the Proceedings of the Royal Society B. “We're still a very, very long way from understanding this transition from animal communication to human language, and it's a huge mystery at the moment,” said study author and zoologist Arik Kershenbaum, who did the work at the National Institute for Mathematical and Biological Synthesis. “These types of mathematical analyses can give us some clues.” While the most complicated mathematical models come closer to our own speech patterns, the simple models — called Markov processes — are more random and have been historically thought to fit animal calls. “A Markov process is where you have a sequence of numbers or letters or notes, and the probability of any particular note depends only on the few notes that have come before,” said Kershenbaum. So the next note could depend on the last two or 10 notes before it, but there is a defined window of history that can be used to predict what happens next. “What makes human language special is that there's no finite limit as to what comes next,” he said.
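To make that contrast concrete, here is a minimal sketch of the kind of first-order Markov model the study tested animal call sequences against: transition probabilities are estimated from the sequence, and the resulting log-likelihood can be compared with that of models conditioning on longer histories. The three-letter call alphabet and the toy sequence are invented for illustration.

    # A first-order Markov model of a call sequence: the probability of each
    # call depends only on the call immediately before it. Toy data only.
    from collections import Counter
    import math

    calls = list("ABBACABBBCAABBACBBAC")   # hypothetical call types A, B, C

    # Estimate transition probabilities P(next | previous) from counts.
    pairs = Counter(zip(calls, calls[1:]))
    totals = Counter(calls[:-1])

    def p(prev, nxt):
        return pairs[(prev, nxt)] / totals[prev]

    # Log-likelihood of the sequence under the fitted first-order model; a
    # model with a longer (or unbounded) history window would be scored the
    # same way, and comparing such fits is the spirit of the study's tests.
    loglik = sum(math.log(p(a, b)) for a, b in zip(calls, calls[1:]))
    print(f"first-order Markov log-likelihood: {loglik:.2f}")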

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 19987 - Posted: 08.22.2014

By Jane C. Hu Last week, people around the world mourned the death of beloved actor and comedian Robin Williams. According to the Gorilla Foundation in Woodside, California, we were not the only primates mourning. A press release from the foundation announced that Koko the gorilla—the main subject of its research on ape language ability, conversant in sign language and a celebrity in her own right—“was quiet and looked very thoughtful” when she heard about Williams’ death, and later became “somber” as the news sank in. Williams, described in the press release as one of Koko’s “closest friends,” spent an afternoon with the gorilla in 2001. The foundation released a video showing the two laughing and tickling one another. At one point, Koko lifts up Williams’ shirt to touch his bare chest. In another scene, Koko steals Williams’ glasses and wears them around her trailer. These clips resonated with people. In the days after Williams’ death, the video amassed more than 3 million views. Many viewers were charmed and touched to learn that a gorilla forged a bond with a celebrity in just an afternoon and, 13 years later, not only remembered him and understood the finality of his death, but grieved. The foundation hailed the relationship as a triumph over “interspecies boundaries,” and the story was covered in outlets from BuzzFeed to the New York Post to Slate. The story is a prime example of selective interpretation, a critique that has plagued ape language research since its first experiments. Was Koko really mourning Robin Williams? How much are we projecting ourselves onto her and what are we reading into her behaviors? Animals perceive the emotions of the humans around them, and the anecdotes in the release could easily be evidence that Koko was responding to the sadness she sensed in her human caregivers. © 2014 The Slate Group LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 19986 - Posted: 08.22.2014

By Victoria Gill Science reporter, BBC News Very mobile ears help many animals direct their attention to the rustle of a possible predator. But a study in horses suggests they also pay close attention to the direction another's ears are pointing in order to work out what it is thinking. Researchers from the University of Sussex say these swivelling ears have become a useful communication tool. Their findings are published in the journal Current Biology. The research team studies animal behaviour to build up a picture of how communication and social skills evolved. "We're interested in how [they] communicate," said lead researcher Jennifer Wathan. "And being sensitive to what another individual is thinking is a fundamental skill from which other [more complex] skills develop." Ms Wathan and her colleague Prof Karen McComb set up a behavioural experiment where 72 individual horses had to use visual cues from another horse in order to choose where to feed. They led each horse to a point where it had to select one of two buckets. On a wall behind this decision-making spot was a life-sized photograph of a horse's head facing either to left or right. In some of the trials, the horse's ears or eyes were covered. If the ears and eyes of the horse in the picture were visible, the horses being tested would choose the bucket towards which its gaze - and its ears - were directed. If the horse in the picture had either its eyes or its ears covered, the horse being tested would just choose a feed bucket at random. Like many mammals that are hunted by predators, horses can rotate their ears through almost 180 degrees - but Ms Wathan said that in our "human-centric" view of the world, we had overlooked the importance of these very mobile ears in animal communication. BBC © 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 19914 - Posted: 08.05.2014