Links for Keyword: Language




Ryan Kellman The modern Planet of the Apes reboot begins with a research chimpanzee being raised in an American home. It's a pretty plausible premise — that exact scenario has played out in the real world many times. On June 26, 1931, for example, Luella and Winthrop Kellogg pulled a baby female chimpanzee away from her mother and brought her to live in their home in Orange Park, Fla. The Kelloggs were comparative psychologists. Their plan was to raise the chimpanzee, Gua, alongside their own infant son, Donald, and see if she picked up human language. According to the book they wrote about the experiment, Luella wasn't initially on board: ... the enthusiasm of one of us met with so much resistance from the other that it appeared likely we could never come to an agreement upon whether or not we should even attempt such an undertaking. But attempt it they did. The Kelloggs performed a slew of tests on Donald and Gua. How good were their reflexes? How many words did they recognize? How did they react to the sound of a gunshot? What sound did each infant's skull make when tapped by a spoon? (Donald's produced "a dull thud" while Gua's made the sound of a "mallet upon a wooden croquet ball.") Chimpanzees develop faster than humans, so Gua outshone Donald when it came to most tasks. She even learned to respond to English phrases like "Don't touch!" and "Get down!" But unlike the apes in the movies, Gua never learned to speak. © 2017 npr

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23871 - Posted: 07.25.2017

By Virginia Morell Baby marmosets learn to make their calls by trying to repeat their parents’ vocalizations, scientists report today in Current Biology. Humans were thought to be the only primate with vocal learning—the ability to hear a sound and repeat it, considered essential for speech. When our infants babble, they make apparently random sounds, which adults respond to with words or other sounds; the more this happens, the faster the baby learns to talk. To find out whether marmosets (Callithrix jacchus) do something similar, scientists played recordings of parental calls during a daily 30-minute session to three sets of newborn marmoset twins until they were 2 months old (roughly equivalent to a 2-year-old human). Baby marmosets make noisy guttural cries; adults respond with soft “phee” contact calls. The baby that consistently heard its parents respond to its cries learned to make the adult “phee” sound much faster than did its twin, the team found. It’s not yet known whether this ability is limited to marmosets; if so, the difference may be due to the highly social lives of these animals, in which, like us, multiple relatives help care for babies. © 2017 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23662 - Posted: 05.26.2017

Gary Stix [Photo caption: Illiterate women in northern India learned to read and write Hindi; after six months of instruction they had reached a level comparable to a first-grader's. Credit: Max Planck Institute for Human Cognitive and Brain Sciences] The brain did not evolve to read. It uses the neural muscle of pre-existing visual and language processing areas to enable us to take in works by Tolstoy and Tom Clancy. Reading, of course, begins in the first years of schooling, a time when these brain regions are still in development. What happens, though, when an adult starts learning after the age of 30? A study published May 24 in Science Advances turned up a few unexpected findings. In the report, a broad-ranging group of researchers—from universities in Germany, India and the Netherlands—taught reading to 21 women, all about 30 years of age, from near the city of Lucknow in northern India, comparing them to a placebo group of nine women. The majority of those who learned to read could not recognize a word of Hindi at the beginning of the study. After six months, the group had reached a first-grade proficiency level. When the researchers conducted brain scans—using functional magnetic resonance imaging—they were startled. Areas deep below the wrinkled surface, the cortex, in the brains of the new learners had changed. Their results surprised them because most reading-related brain activity was thought to involve the cortex. The new research may overturn this presumption and may pertain to child learners as well. After being filtered through the eyes, visual information may move first to evolutionarily ancient brain regions before being relayed to the visual and language areas of the cortex typically associated with reading. © 2017 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23661 - Posted: 05.25.2017

By FERRIS JABR Con Slobodchikoff and I approached the mountain meadow slowly, obliquely, softening our footfalls and conversing in whispers. It didn’t make much difference. Once we were within 50 feet of the clearing’s edge, the alarm sounded: short, shrill notes in rapid sequence, like rounds of sonic bullets. We had just trespassed on a prairie-dog colony. A North American analogue to Africa’s meerkat, the prairie dog is trepidation incarnate. It lives in subterranean societies of neighboring burrows, surfacing to forage during the day and rarely venturing more than a few hundred feet from the center of town. The moment it detects a hawk, coyote, human or any other threat, it cries out to alert the cohort and takes appropriate evasive action. A prairie dog’s voice has about as much acoustic appeal as a chew toy. French explorers called the rodents petits chiens because they thought they sounded like incessantly yippy versions of their pets back home. On this searing summer morning, Slobodchikoff had taken us to a tract of well-trodden wilderness on the grounds of the Museum of Northern Arizona in Flagstaff. Distressed squeaks flew from the grass, but the vegetation itself remained still; most of the prairie dogs had retreated underground. We continued along a dirt path bisecting the meadow, startling a prairie dog that was peering out of a burrow to our immediate right. It chirped at us a few times, then stared silently. “Hello,” Slobodchikoff said, stooping a bit. A stout bald man with a scraggly white beard and wine-dark lips, Slobodchikoff speaks with a gentler and more lilting voice than you might expect. “Hi, guy. What do you think? Are we worth calling about? Hmm?” Slobodchikoff, an emeritus professor of biology at Northern Arizona University, has been analyzing the sounds of prairie dogs for more than 30 years. Not long after he started, he learned that prairie dogs had distinct alarm calls for different predators. Around the same time, separate researchers found that a few other species had similar vocabularies of danger. What Slobodchikoff claimed to discover in the following decades, however, was extraordinary: Beyond identifying the type of predator, prairie-dog calls also specified its size, shape, color and speed; the animals could even combine the structural elements of their calls in novel ways to describe something they had never seen before. No scientist had ever put forward such a thorough guide to the native tongue of a wild species or discovered one so intricate. Prairie-dog communication is so complex, Slobodchikoff says — so expressive and rich in information — that it constitutes nothing less than language.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23606 - Posted: 05.12.2017

By Jane C. Hu New evidence suggests that the earliest traces of a language can stay with us into adulthood, even if we no longer speak or understand the language itself. And early exposure also seems to speed the process of relearning it later in life. In the new study, recently published in Royal Society Open Science, Dutch adults were trained to listen for sound contrasts in Korean. Some participants reported no prior exposure to the language; others were born in Korea and adopted by Dutch families before the age of six. All participants said they could not speak Korean, but the adoptees from Korea were better at distinguishing between the contrasts and more accurate in pronouncing Korean sounds. “Language learning can be retained subconsciously, even if conscious memories of the language do not exist,” says Jiyoun Choi, postdoctoral fellow at Hanyang University in Seoul and lead author of the study. And it appears that just a brief period of early exposure benefits learning efforts later; when Choi and her collaborators compared the results of people adopted before they were six months old with results of others adopted after 17 months, there were no differences in their hearing or speaking abilities. “It's exciting that these effects are seen even among adults who were exposed to Korean only up to six months of age—an age before which babbling emerges,” says Janet Werker, a professor of psychology at the University of British Columbia, who was not involved with the research. Remarkably, what we learn before we can even speak stays with us for decades. © 2017 Scientific American,

Related chapters from BP7e: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23598 - Posted: 05.10.2017

By Mo Costandi The world is an unpredictable place. But the brain has evolved a way to cope with the everyday uncertainties it encounters—rather than presenting us with many of them, it resolves them into a realistic model of the world. The body’s central controller predicts every contingency, using its stored database of past experiences, to minimize the element of surprise. Take vision, for example: We rarely see objects in their entirety, but our brains fill in the gaps to make a best guess at what we are seeing—and these predictions are usually an accurate reflection of reality. The same is true of hearing, and neuroscientists have now identified a predictive-text-like brain mechanism that helps us to anticipate what is coming next when we hear someone speaking. The findings, published this week in PLoS Biology, advance our understanding of how the brain processes speech. They also provide clues about how language evolved, and could even lead to new ways of diagnosing a variety of neurological conditions more accurately. The new study builds on earlier findings that monkeys and human infants can implicitly learn to recognize artificial grammar—the rules by which sounds in a made-up language are related to one another. Neuroscientist Yukiko Kikuchi of Newcastle University in England and her colleagues played sequences of nonsense speech sounds to macaques and humans. Consistent with the earlier findings, Kikuchi and her team found that both species quickly learned the rules of the language’s artificial grammar. After this initial learning period the researchers played more sound sequences—some of which violated the fabricated grammatical rules. They used microelectrodes to record responses from hundreds of individual neurons, as well as from large populations of neurons that process sound information. In this way they were able to compare the responses to both types of sequence and determine the similarities between the two species’ reactions. © 2017 Scientific American,
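For readers unfamiliar with the method, an artificial grammar of this kind is just a small set of rules specifying which syllable may legally follow which; a "violation" sequence breaks one of those transitions. Below is a minimal sketch in Python — the syllables and transitions are invented for illustration and are not the grammar used in Kikuchi's study:

```python
# A toy finite-state artificial grammar: TRANSITIONS lists which nonsense
# syllables may legally follow each state. Sequences generated from it are
# "grammatical"; sequences that break a transition are violations.
import random

TRANSITIONS = {                 # hypothetical grammar, not the study's
    "START": ["bi", "la"],
    "bi": ["do", "tu"],
    "la": ["do"],
    "do": ["ka", "END"],
    "tu": ["ka"],
    "ka": ["END"],
}

def generate():
    """Sample one legal syllable sequence from the grammar."""
    seq, state = [], "START"
    while True:
        state = random.choice(TRANSITIONS[state])
        if state == "END":
            return seq
        seq.append(state)

def is_grammatical(seq):
    """Check every transition in seq against the rules."""
    state = "START"
    for syllable in seq:
        if syllable not in TRANSITIONS[state]:
            return False
        state = syllable
    return "END" in TRANSITIONS[state]

print(generate())                     # e.g. ['bi', 'tu', 'ka']
print(is_grammatical(["la", "do"]))   # True
print(is_grammatical(["la", "tu"]))   # False — "tu" cannot follow "la"
```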

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 14: Attention and Consciousness
Link ID: 23558 - Posted: 05.01.2017

By Cormac McCarthy I call it the Kekulé Problem because among the myriad instances of scientific problems solved in the sleep of the inquirer Kekulé’s is probably the best known. He was trying to arrive at the configuration of the benzene molecule and not making much progress when he fell asleep in front of the fire and had his famous dream of a snake coiled in a hoop with its tail in its mouth—the ouroboros of mythology—and woke exclaiming to himself: “It’s a ring. The molecule is in the form of a ring.” Well. The problem of course—not Kekulé’s but ours—is that since the unconscious understands language perfectly well or it would not understand the problem in the first place, why doesnt it simply answer Kekulé’s question with something like: “Kekulé, it’s a bloody ring.” To which our scientist might respond: “Okay. Got it. Thanks.” Why the snake? That is, why is the unconscious so loathe to speak to us? Why the images, metaphors, pictures? Why the dreams, for that matter. A logical place to begin would be to define what the unconscious is in the first place. To do this we have to set aside the jargon of modern psychology and get back to biology. The unconscious is a biological system before it is anything else. To put it as pithily as possible—and as accurately—the unconscious is a machine for operating an animal. All animals have an unconscious. If they didnt they would be plants. We may sometimes credit ours with duties it doesnt actually perform. Systems at a certain level of necessity may require their own mechanics of governance. Breathing, for instance, is not controlled by the unconscious but by the pons and the medulla oblongata, two systems located in the brainstem. Except of course in the case of cetaceans, who have to breathe when they come up for air. An autonomous system wouldnt work here. The first dolphin anesthetized on an operating table simply died. (How do they sleep? With half of their brain alternately.) But the duties of the unconscious are beyond counting. Everything from scratching an itch to solving math problems. © 2017 NautilusThink Inc,

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 14: Attention and Consciousness
Link ID: 23525 - Posted: 04.22.2017

by Laura Sanders The way babies learn to speak is nothing short of breathtaking. Their brains are learning the differences between sounds, rehearsing mouth movements and mastering vocabulary by putting words into meaningful context. It’s a lot to fit in between naps and diaper changes. A recent study shows just how durable this early language learning is. Dutch-speaking adults who were adopted from South Korea as preverbal babies held on to latent Korean language skills, researchers report online January 18 in Royal Society Open Science. In the first months of their lives, these people had already laid down the foundation for speaking Korean — a foundation that persisted for decades undetected, only revealing itself later in careful laboratory tests. Researchers tested how well people could learn to identify and speak tricky Korean sounds. “For Korean listeners, these sounds are easy to distinguish, but for second-language learners they are very difficult to master,” says study coauthor Mirjam Broersma, a psycholinguist at Radboud University in Nijmegen, Netherlands. For instance, a native Dutch speaker would listen to three distinct Korean sounds and hear only the same “t” sound. Broersma and her colleagues compared the language-absorbing skills of a group of 29 native Dutch speakers to 29 South Korea-born Dutch speakers. Half of the adoptees moved to the Netherlands when they were older than 17 months — ages at which the kids had probably begun talking. The other half were adopted as preverbal babies younger than 6 months. As a group, the South Korea-born adults outperformed the native-born Dutch adults, more easily learning both to recognize and speak the Korean sounds. © Society for Science & the Public 2000 - 2017

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 23455 - Posted: 04.06.2017

By Matt Reynolds Google’s latest take on machine translation could make it easier for people to communicate with those speaking a different language, by translating speech directly into text in a language they understand. Machine translation of speech normally works by first converting it into text, then translating that into text in another language. But any error in speech recognition will lead to an error in transcription and a mistake in the translation. Researchers at Google Brain, the tech giant’s deep learning research arm, have turned to neural networks to cut out the middle step. By skipping transcription, the approach could potentially allow for more accurate and quicker translations. The team trained its system on hundreds of hours of Spanish audio with corresponding English text. In each case, it used several layers of neural networks – computer systems loosely modelled on the human brain – to match sections of the spoken Spanish with the written translation. To do this, it analysed the waveform of the Spanish audio to learn which parts seemed to correspond with which chunks of written English. When it was then asked to translate, each neural layer used this knowledge to manipulate the audio waveform until it was turned into the corresponding section of written English. “It learns to find patterns of correspondence between the waveforms in the source language and the written text,” says Dzmitry Bahdanau at the University of Montreal in Canada, who wasn’t involved with the work. © Copyright Reed Business Information Ltd.
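As a concrete illustration of the architecture described — an encoder that consumes source-language audio features and an attention-equipped decoder that emits target-language text directly, with no intermediate transcript — here is a minimal PyTorch sketch. It is a toy under stated assumptions, not Google Brain's model: the mel-spectrogram input, layer sizes, and all names are illustrative.

```python
import torch
import torch.nn as nn

class DirectSpeechTranslator(nn.Module):
    """Toy sequence-to-sequence model: Spanish audio features in,
    English characters out, with no transcription step in between."""
    def __init__(self, n_mels=80, hidden=256, vocab=100):
        super().__init__()
        # Encoder reads the audio (e.g. mel-spectrogram frames).
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.enc_proj = nn.Linear(2 * hidden, hidden)  # merge both directions
        # Decoder generates target-language characters.
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        # Attention learns which audio stretches match which output chunks.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.out = nn.Linear(2 * hidden, vocab)

    def forward(self, mels, targets):
        # mels: (batch, audio_frames, n_mels); targets: (batch, chars) ids
        enc, _ = self.encoder(mels)
        enc = self.enc_proj(enc)
        dec, _ = self.decoder(self.embed(targets))
        ctx, _ = self.attn(dec, enc, enc)       # decoder attends to audio
        return self.out(torch.cat([dec, ctx], dim=-1))  # character logits

model = DirectSpeechTranslator()
logits = model(torch.randn(4, 500, 80), torch.randint(0, 100, (4, 60)))
print(logits.shape)  # torch.Size([4, 60, 100])
```

Training would pair each recording with its translated text and minimize cross-entropy between the logits and the characters shifted one step ahead; the "patterns of correspondence" Bahdanau describes are what the attention layer learns.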

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 23450 - Posted: 04.05.2017

By Timothy Revell Who would you get to observe differences in how men, women and children interact? A robot in a fur-lined hat, of course. Experiments using a robotic head, called Furhat, aimed to uncover inequalities in people’s participation when working on a shared activity, and see if a robot could help redress the balance. They revealed that when a woman is paired in conversation with another woman, she speaks more than if paired with a man. And two men paired together speak less than two women. But this only holds for adults. “Surprisingly, we didn’t find this same pattern for boys and girls. Gender didn’t make much difference to how much children speak,” says Gabriel Skantze at the KTH Royal Institute of Technology in Stockholm, Sweden, who is also one of the robot’s creators. Furhat interacted with 540 visitors at the Swedish National Museum of Science and Technology over nine days. Two people at a time would sit at an interactive table with a touchscreen opposite the robot. They were asked to play a game that involved sorting a set of virtual picture cards, such as arranging images of historical inventions in chronological order. The people worked with the robot to try to solve the task. During this time, the robot’s sensors tracked how long each person spoke for. “This turned out to be a really nice opportunity to study the differences between men and women, and adults and children,” says Skantze. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 8: Hormones and Sex
Link ID: 23367 - Posted: 03.17.2017

Jon Hamilton An orangutan named Rocky is helping scientists figure out when early humans might have uttered the first word. Rocky, who is 12 and lives at the Indianapolis Zoo, has shown that he can control his vocal cords much the way people do. He can learn new vocal sounds and even match the pitch of sounds made by a person. "Rocky, and probably other great apes, can do things with their vocal apparatus that, for decades, people have asserted was impossible," says Rob Shumaker, the zoo's director, who has studied orangutans for more than 30 years. Rocky's abilities suggest that our human ancestors could have begun speaking 10 million years ago, about the time humans and great apes diverged, Shumaker says. Until now, many scientists thought that speech required changes in the brain and vocal apparatus that evolved more recently, during the past 2 million years. The vocal abilities of orangutans might have gone undetected had it not been for Rocky, an ape with an unusual past and a rare relationship with people. Rocky was separated from his mother soon after he was born, and spent his early years raised largely by people, and working in show business. "He was certainly the most visible orangutan in entertainment at the time," says Shumaker. "TV commercials, things like that."

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23354 - Posted: 03.14.2017

Bruce Bower The social lives of macaques and baboons play out in what primatologist Julia Fischer calls “a magnificent opera.” When young Barbary macaques reach about 6 months, they fight nightly with their mothers. Young ones want the “maternal embrace” as they snooze; mothers want precious alone time. Getting pushed away and bitten by dear old mom doesn’t deter young macaques. But they’re on their own when a new brother or sister comes along. In Monkeytalk, Fischer describes how the monkey species she studies have evolved their own forms of intelligence and communication. Connections exist between monkey and human minds, but Fischer regards differences among primate species as particularly compelling. She connects lab studies of monkeys and apes to her observations of wild monkeys while mixing in offbeat personal anecdotes of life in the field. Fischer catapulted into a career chasing down monkeys in 1993. While still in college, she monitored captive Barbary macaques. That led to fieldwork among wild macaques in Morocco. In macaque communities, females hold central roles because young males move to other groups to mate. Members of closely related, cooperative female clans gain an edge in competing for status with male newcomers. Still, adult males typically outrank females. Fischer describes how the monkeys strategically alternate between attacking and forging alliances. After forging her own key scientific alliances, Fischer moved on to study baboons in Africa, where she entered the bureaucratic jungle. Obtaining papers for a car in Senegal, for instance, took Fischer several days. She first had to shop for a snazzy outfit to impress male paper-pushers, she says. © Society for Science & the Public 2000 - 2017.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23315 - Posted: 03.06.2017

By Hanoch Ben-Yami Human intelligence, even in its most basic forms, is expressed in our language, and is also partly dependent on our linguistic capacity. Homer, Darwin and Einstein could obviously not have achieved what they did without language—but neither could a child in kindergarten. And this raises an important question about animal intelligence. Although we don’t expect a chimpanzee to write an epic or a dolphin to develop a scientific theory, it has frequently been asked whether these or other animals are close in intelligence to young children. If so, we must wonder whether animals can acquire a language. In the last half century, much effort has been put into trying to answer that question by teaching animals, primarily apes, a basic language. There have been some limited successes, with animals using signs to obtain things in which they were interested, for instance. But no animal has yet acquired the linguistic capability that children have already in their third year of life. “Why?” is a question children start asking by the age of three at the latest. No animal has yet asked anything. “Why?” is a very important question: it shows that those asking it are aware they don’t know something they wish to know. Understanding the why-question is also necessary for the ability to justify our actions and thoughts. The fact that animals don’t ask “why?” shows they don’t aspire to knowledge and are incapable of justification. “No!” © 2017 Scientific American,

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23298 - Posted: 03.01.2017

Scientists who spent years listening to the communication calls of one of our closest ape relatives say their eavesdropping has shed light on the origin of human language. Dr Adriano Reis e Lameira from Durham University recorded and analysed almost 5,000 orangutan "kiss squeaks". He found that the animals combined these purse-lipped, "consonant-like" calls to convey different messages. This could be a glimpse of how our ancestors formed the earliest words. The findings are published in the journal Nature Human Behaviour. "Human language is extraordinarily advanced and complex - we can pretty much transmit any information we want into sound," said Dr Reis e Lameira. "So we tend to think that maybe words evolved from some rudimentary precursor to transmit more complex messages. "We were basically using the orangutan vocal behaviour as a time machine - back to a time when our ancestors were using what would become [those precursors] of consonants and vowels." The team studied kiss squeaks in particular because, like many consonants - the /t/, /p/, /k/ sounds - they depend on the action of the lips, tongue and jaw rather than the voice. "Kiss squeaks do not involve vocal fold action, so they're acoustically and articulatory consonant-like," explained Dr Reis e Lameira. In comparison to research into vowel-like primate calls, the scientists explained, the study of consonants in the evolution of language has been more difficult. But as Prof Serge Wich from Liverpool John Moores University, a lead author in the study, said, they are crucial "building blocks" in the evolution of language. "Most human languages have a lot more consonants than vowels," said Prof Wich. "And if we have more building blocks, we have more combinations." © 2017 BBC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23204 - Posted: 02.09.2017

Erin Hare One chilly day in February 1877, a British cotton baron named Joseph Sidebotham heard what he thought was a canary warbling near his hotel window. He was vacationing with his family in France, and soon realized the tune wasn’t coming from outside. “The singing was in our salon,” he wrote of the incident in Nature. “The songster was a mouse.” The family fed the creature bits of biscuit, and it quickly became comfortable enough to climb onto the warm hearth at night and regale them with songs. It would sing for hours. Clearly, Sidebotham concluded, this was no ordinary mouse. More than a century later, however, scientists discovered he was wrong. It turns out that all mice chitter away to each other. Their language is usually just too high-pitched for human ears to detect. Today, mouse songs are no mere curiosity. Researchers are able to engineer mice to express genetic mutations associated with human speech disorders, and then measure the changes in the animals’ songs. They’re leveraging these beautifully complex vocalizations to uncover the mysteries of human speech. Anecdotal accounts of singing mice date back to 1843. In the journal The Zoologist, the British entomologist and botanist Edward Newman wrote that the song of a rare “murine Orpheus” sounds as “if the mouth of a canary were carefully closed, and the bird, in revenge, were to turn ventriloquist, and sing in the very centre of his stomach.” © 2017 by The Atlantic Monthly Group.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23162 - Posted: 01.28.2017

By Avi Selk “Oh Long Johnson,” a cat once said, back in the primordial history of Internet memes. “Oh Don Piano. Why I eyes ya.” Or so said the captions — appended to the gibberish of a perturbed house cat on “America's Funniest Home Videos” in 1999 and rediscovered in the YouTube era, when millions of people heard something vaguely human echo in a distant species. It was weird. And hilarious. And just maybe, profound. As the “Oh Long Johnson” craze was fading a few years ago, a wave of scientific discoveries about apes and monkeys began upending old assumptions about the origins of language. Only humans could willfully control their vocal tracts, went the established wisdom. Until Koko the gorilla coughed on command. Surely, then, our vowels were ours alone. But this month, researchers picked up British ohs in the babble of baboons. Study after study is dismantling a hypothesis that has stood for decades: that the seeds of language did not exist before modern humans, who got all the way to Shakespeare from scratch. And if so much of what we thought we knew about the uniqueness of human speech was wrong, some think it's time to take a second look at talking pet tricks. “It's humbling to understand that humans, in the end, are just another species of primate,” said Marcus Perlman, who led the Koko study in 2015. © 1996-2017 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23122 - Posted: 01.19.2017

By Helen Briggs BBC News Babies build knowledge about the language they hear even in the first few months of life, research shows. If you move countries and forget your birth language, you retain this hidden ability, according to a study. Dutch-speaking adults adopted from South Korea exceeded expectations at Korean pronunciation when retrained after losing their birth language. Scientists say parents should talk to babies as much as possible in early life. Dr Jiyoun Choi of Hanyang University in Seoul led the research. The study is the first to show that the early experience of adopted children in their birth language gives them an advantage decades later even if they think it is forgotten, she said. “This finding indicates that useful language knowledge is laid down in [the] very early months of life, which can be retained without further input of the language and revealed via re-learning,” she told BBC News. In the study, adults aged about 30 who had been adopted as babies by Dutch-speaking families were asked to pronounce Korean consonants after a short training course. Korean consonants are unlike those spoken in Dutch. The participants were compared with a group of adults who had not been exposed to the Korean language as children and then rated by native Korean speakers. Both groups performed to the same level before training, but after training the international adoptees exceeded expectations. There was no difference between children who were adopted under six months of age - before they could speak - and those who were adopted after 17 months, when they had learned to talk. This suggests that the language knowledge retained is abstract in nature, rather than dependent on the amount of experience. © 2017 BBC

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 23118 - Posted: 01.18.2017

By Tanya Lewis To the untrained listener, a bunch of babbling baboons may not sound like much. But sharp-eared experts have now found that our primate cousins can actually produce humanlike vowel sounds. The finding suggests the last common ancestor of humans and baboons may have possessed the vocal machinery for speech—hinting at a much earlier origin for language than previously thought. Researchers from the National Center for Scientific Research (CNRS) and Grenoble Alpes University, both in France, and their colleagues recorded baboons in captivity, finding the animals were capable of producing five distinct sounds that have the same characteristic frequencies as human vowels. As reported today in PLoS ONE, the animals could make these sounds despite the fact that, as dissections later revealed, they possess high voice boxes, or larynxes, an anatomical feature long thought to be an impediment to speech. “This breaks a serious logjam” in the study of language, says study co-author Thomas Sawallis, a linguist at the University of Alabama. “Theories of language evolution have developed based on the idea that full speech was only available to anatomically modern Homo sapiens,” approximately 70,000 to 100,000 years ago, he says, but in fact, “we could have had the beginnings of speech 25 million years ago.” The evolution of language is considered one of the hardest problems in science, because the process left no fossil evidence behind. One practical approach, however, is to study the mechanics of speech. Language consists roughly of different combinations of vowels and consonants. Notably, humans possess low larynxes, which makes it easier to produce a wide range of vowel sounds (and as Darwin observed, also makes it easier for us to choke on food). A foundational theory of speech production, developed by Brown University cognitive scientist Philip Lieberman in the 1960s, states the high larynxes and thus shorter vocal tracts of most nonhuman primates prevents them from producing vowel-like sounds. Yet recent research calls Lieberman’s hypothesis into question. © 2017 Scientific American
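The "characteristic frequencies" at issue are formants — the vocal-tract resonances that distinguish one vowel from another. Below is a minimal sketch of the standard way phoneticians estimate them, via linear-predictive coding; it assumes a mono recording of a single sustained vowel, and the function and parameter names are illustrative rather than anything from the study:

```python
import numpy as np
import librosa

def first_formants(path, n_formants=2, order=12, sr=16_000):
    """Estimate the lowest vocal-tract resonances of a sustained vowel."""
    y, _ = librosa.load(path, sr=sr)           # mono recording, resampled
    a = librosa.lpc(y, order=order)            # all-pole vocal-tract model
    roots = [r for r in np.roots(a) if np.imag(r) > 0]  # one per resonance
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    return [f for f in freqs if f > 90][:n_formants]    # drop near-DC roots

# A human [a] typically shows F1 near 700 Hz and F2 near 1200 Hz;
# vowel-like baboon calls would show analogously stable formant pairs.
```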

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23089 - Posted: 01.12.2017

By Veronique Greenwood Babies' ability to soak up language makes them the envy of adult learners everywhere. Still, some grown-ups can acquire new tongues with surprising ease. Now some studies suggest it is possible to predict a person's language-learning abilities from his or her brain structure or activity—results that may eventually be used to help even the most linguistically challenged succeed. In one study, published in 2015 in the Journal of Neurolinguistics, a team of researchers looked at the structure of neuron fibers in white matter in 22 beginning Mandarin students. Those who had more spatially aligned fibers in their right hemisphere had higher test scores after four weeks of classes, the scientists found. Like a freeway express lane, highly aligned fibers are thought to speed the transfer of information within the brain. Although language is traditionally associated with the left hemisphere, the right, which seems to be involved in pitch perception, may play a role in distinguishing the tones of Mandarin, speculates study author Zhenghan Qi of the Massachusetts Institute of Technology. [Figure: "Wired for Learning"—diffusion tensor imaging of native English speakers learning Mandarin shows that people who learn better have more aligned nerve fibers (warmer colors) in two right-hemisphere regions (A and B); subject 2, with more aligned fibers, was a more successful learner than subject 1.] © 2016 Scientific American
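The article does not name its alignment measure, but the standard diffusion-tensor index of fiber alignment is fractional anisotropy (FA), so treating "alignment" as FA below is an assumption. FA is computed from the three eigenvalues of each voxel's diffusion tensor: it is 0 for perfectly isotropic diffusion and approaches 1 when water diffuses mostly along one axis, as in tightly bundled fibers.

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a voxel's diffusion tensor.
    (Assumes the metric in the study was FA; the article doesn't say.)"""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0                             # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(1.5 * num / den)

print(fractional_anisotropy((1.0, 1.0, 1.0)))  # 0.0  — isotropic voxel
print(fractional_anisotropy((1.7, 0.3, 0.2)))  # ~0.84 — well-aligned fibers
```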

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23019 - Posted: 12.26.2016

Ramin Skibba The high-pitched squeals of the humble bat may be as complex as the calls of dolphins and monkeys, researchers have found. A study published on 22 December in Scientific Reports reveals that the fruit bat is one of only a few animals known to direct its calls at specific individuals in a colony, and suggests that information in the calls of many social animals may be more detailed than was previously thought. Bats are noisy creatures, especially in their crowded caves, where they make calls to their neighbours. “If you go into a fruit-bat cave, you hear a cacophony,” says Yossi Yovel, a neuroecologist at Tel Aviv University in Israel who led the study. Until now, it has been difficult to separate this noise into distinct sounds, or to determine what prompted the individual to make a particular call. “Animals make sounds for a reason,” says Whitlow Au, a marine-bioacoustics scientist at the University of Hawaii at Manoa. “Most of the time, we don’t quite understand those reasons.” To find out what bats are talking about, Yovel and his colleagues monitored 22 captive Egyptian fruit bats (Rousettus aegyptiacus) around the clock for 75 days. They modified a voice-recognition program to analyse approximately 15,000 vocalizations collected during this time. The program was able to tie specific sounds to different social interactions captured by video, such as when two bats fought over food. © 2016 Macmillan Publishers
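The paper's pipeline isn't described here beyond "a modified voice-recognition program," but the general recipe for tying call acoustics to video-labeled contexts looks something like the sketch below: summarize each call with spectral features, then train a classifier. Everything in it — file lists, labels, feature choices — is a placeholder, not the authors' method.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def call_features(path):
    """Summarize one recorded call as the mean and spread of its MFCCs."""
    y, sr = librosa.load(path, sr=None)        # keep the native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholders: call_paths lists one audio file per vocalization, and
# context_labels gives the social interaction seen on video for that call
# (e.g. "food", "perch", "sleep").
X = np.stack([call_features(p) for p in call_paths])
y = np.array(context_labels)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # accuracy vs. chance level
```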

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23009 - Posted: 12.23.2016