Links for Keyword: Language



Links 1 - 20 of 592

By Cormac McCarthy I call it the Kekulé Problem because among the myriad instances of scientific problems solved in the sleep of the inquirer Kekulé’s is probably the best known. He was trying to arrive at the configuration of the benzene molecule and not making much progress when he fell asleep in front of the fire and had his famous dream of a snake coiled in a hoop with its tail in its mouth—the ouroboros of mythology—and woke exclaiming to himself: “It’s a ring. The molecule is in the form of a ring.” Well. The problem of course—not Kekulé’s but ours—is that since the unconscious understands language perfectly well or it would not understand the problem in the first place, why doesn’t it simply answer Kekulé’s question with something like: “Kekulé, it’s a bloody ring.” To which our scientist might respond: “Okay. Got it. Thanks.” Why the snake? That is, why is the unconscious so loath to speak to us? Why the images, metaphors, pictures? Why the dreams, for that matter? A logical place to begin would be to define what the unconscious is in the first place. To do this we have to set aside the jargon of modern psychology and get back to biology. The unconscious is a biological system before it is anything else. To put it as pithily as possible—and as accurately—the unconscious is a machine for operating an animal. All animals have an unconscious. If they didn’t they would be plants. We may sometimes credit ours with duties it doesn’t actually perform. Systems at a certain level of necessity may require their own mechanics of governance. Breathing, for instance, is not controlled by the unconscious but by the pons and the medulla oblongata, two systems located in the brainstem. Except of course in the case of cetaceans, who have to breathe when they come up for air. An autonomous system wouldn’t work here. The first dolphin anesthetized on an operating table simply died. (How do they sleep? With half of their brain alternately.)
But the duties of the unconscious are beyond counting. Everything from scratching an itch to solving math problems. © 2017 NautilusThink Inc.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 14: Attention and Consciousness
Link ID: 23525 - Posted: 04.22.2017

by Laura Sanders The way babies learn to speak is nothing short of breathtaking. Their brains are learning the differences between sounds, rehearsing mouth movements and mastering vocabulary by putting words into meaningful context. It’s a lot to fit in between naps and diaper changes. A recent study shows just how durable this early language learning is. Dutch-speaking adults who were adopted from South Korea as preverbal babies held on to latent Korean language skills, researchers report online January 18 in Royal Society Open Science. In the first months of their lives, these people had already laid down the foundation for speaking Korean — a foundation that persisted for decades undetected, only revealing itself later in careful laboratory tests. Researchers tested how well people could learn to identify and speak tricky Korean sounds. “For Korean listeners, these sounds are easy to distinguish, but for second-language learners they are very difficult to master,” says study coauthor Mirjam Broersma, a psycholinguist at Radboud University in Nijmegen, Netherlands. For instance, a native Dutch speaker would listen to three distinct Korean sounds and hear only the same “t” sound. Broersma and her colleagues compared the language-absorbing skills of a group of 29 native Dutch speakers with those of 29 South Korea-born Dutch speakers. Half of the adoptees moved to the Netherlands when they were older than 17 months — ages at which the kids had probably begun talking. The other half were adopted as preverbal babies younger than 6 months. As a group, the South Korea-born adults outperformed the native-born Dutch adults, more easily learning both to recognize and speak the Korean sounds. © Society for Science & the Public 2000 - 2017

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 23455 - Posted: 04.06.2017

By Matt Reynolds Google’s latest take on machine translation could make it easier for people to communicate with those speaking a different language, by translating speech directly into text in a language they understand. Machine translation of speech normally works by first converting it into text, then translating that into text in another language. But any error in speech recognition will lead to an error in transcription and a mistake in the translation. Researchers at Google Brain, the tech giant’s deep learning research arm, have turned to neural networks to cut out the middle step. By skipping transcription, the approach could potentially allow for more accurate and quicker translations. The team trained its system on hundreds of hours of Spanish audio with corresponding English text. In each case, it used several layers of neural networks – computer systems loosely modelled on the human brain – to match sections of the spoken Spanish with the written translation. To do this, it analysed the waveform of the Spanish audio to learn which parts seemed to correspond with which chunks of written English. When it was then asked to translate, each neural layer used this knowledge to manipulate the audio waveform until it was turned into the corresponding section of written English. “It learns to find patterns of correspondence between the waveforms in the source language and the written text,” says Dzmitry Bahdanau at the University of Montreal in Canada, who wasn’t involved with the work. © Copyright Reed Business Information Ltd.
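The article's central point — that a cascaded pipeline inherits any upstream recognition error, while a direct model has no intermediate transcript to get wrong — can be illustrated with a toy sketch. This is entirely hypothetical stub code (dictionary lookups standing in for real models, not Google's system); it only shows how an error propagates through the two-step pipeline but not the end-to-end one.

```python
# Toy illustration of cascaded vs. direct speech translation.
# All functions and lookup tables here are hypothetical stand-ins for trained models.

def recognize(audio):
    # Hypothetical speech recognizer that mishears the Spanish word
    # "uno" ("one") as "lunes" ("Monday") -- a transcription error.
    lookup = {"audio:uno": "lunes"}
    return lookup[audio]

def translate_text(text):
    # Hypothetical text translator: it faithfully translates whatever
    # transcript it receives, so the upstream error is preserved.
    lookup = {"uno": "one", "lunes": "Monday"}
    return lookup[text]

def translate_direct(audio):
    # Hypothetical end-to-end model trained on (Spanish audio, English text)
    # pairs: there is no intermediate transcript to corrupt.
    lookup = {"audio:uno": "one"}
    return lookup[audio]

audio = "audio:uno"  # a speaker saying "uno"

cascaded = translate_text(recognize(audio))  # -> "Monday" (error propagated)
direct = translate_direct(audio)             # -> "one"
print(cascaded, direct)
```

The sketch compresses the whole pipeline into lookups, but the structural point matches the article: removing the transcription stage removes one place where errors can compound.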

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 23450 - Posted: 04.05.2017

By Timothy Revell Who would you get to observe differences in how men, women and children interact? A robot in a fur-lined hat, of course. Experiments using a robotic head, called Furhat, aimed to uncover inequalities in people’s participation when working on a shared activity, and see if a robot could help redress the balance. They revealed that when a woman is paired in conversation with another woman, she speaks more than if paired with a man. And two men paired together speak less than two women. But this only holds for adults. “Surprisingly, we didn’t find this same pattern for boys and girls. Gender didn’t make much difference to how much children speak,” says Gabriel Skantze at the KTH Royal Institute of Technology in Stockholm, Sweden, who is also one of the robot’s creators. Furhat interacted with 540 visitors at the Swedish National Museum of Science and Technology over nine days. Two people at a time would sit at an interactive table with a touchscreen opposite the robot. They were asked to play a game that involved sorting a set of virtual picture cards, such as arranging images of historical inventions in chronological order. The people worked with the robot to try to solve the task. During this time, the robot’s sensors tracked how long each person spoke for. “This turned out to be a really nice opportunity to study the differences between men and women, and adults and children,” says Skantze. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 8: Hormones and Sex
Link ID: 23367 - Posted: 03.17.2017

Jon Hamilton An orangutan named Rocky is helping scientists figure out when early humans might have uttered the first word. Rocky, who is 12 and lives at the Indianapolis Zoo, has shown that he can control his vocal cords much the way people do. He can learn new vocal sounds and even match the pitch of sounds made by a person. "Rocky, and probably other great apes, can do things with their vocal apparatus that, for decades, people have asserted was impossible," says Rob Shumaker, the zoo's director, who has studied orangutans for more than 30 years. Rocky's abilities suggest that our human ancestors could have begun speaking 10 million years ago, about the time humans and great apes diverged, Shumaker says. Until now, many scientists thought that speech required changes in the brain and vocal apparatus that evolved more recently, during the past 2 million years. The vocal abilities of orangutans might have gone undetected had it not been for Rocky, an ape with an unusual past and a rare relationship with people. Rocky was separated from his mother soon after he was born, and spent his early years raised largely by people, and working in show business. "He was certainly the most visible orangutan in entertainment at the time," says Shumaker. "TV commercials, things like that."

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23354 - Posted: 03.14.2017

Bruce Bower The social lives of macaques and baboons play out in what primatologist Julia Fischer calls “a magnificent opera.” When young Barbary macaques reach about 6 months, they fight nightly with their mothers. Young ones want the “maternal embrace” as they snooze; mothers want precious alone time. Getting pushed away and bitten by dear old mom doesn’t deter young macaques. But they’re on their own when a new brother or sister comes along. In Monkeytalk, Fischer describes how the monkey species she studies have evolved their own forms of intelligence and communication. Connections exist between monkey and human minds, but Fischer regards differences among primate species as particularly compelling. She connects lab studies of monkeys and apes to her observations of wild monkeys while mixing in offbeat personal anecdotes of life in the field. Fischer catapulted into a career chasing down monkeys in 1993. While still in college, she monitored captive Barbary macaques. That led to fieldwork among wild macaques in Morocco. In macaque communities, females hold central roles because young males move to other groups to mate. Members of closely related, cooperative female clans gain an edge in competing for status with male newcomers. Still, adult males typically outrank females. Fischer describes how the monkeys strategically alternate between attacking and forging alliances. After forging her own key scientific alliances, Fischer moved on to study baboons in Africa, where she entered the bureaucratic jungle. Obtaining papers for a car in Senegal, for instance, took Fischer several days. She first had to shop for a snazzy outfit to impress male paper-pushers, she says. © Society for Science & the Public 2000 - 2017.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23315 - Posted: 03.06.2017

By Hanoch Ben-Yami Human intelligence, even in its most basic forms, is expressed in our language, and is also partly dependent on our linguistic capacity. Homer, Darwin and Einstein could obviously not have achieved what they did without language—but neither could a child in kindergarten. And this raises an important question about animal intelligence. Although we don’t expect a chimpanzee to write an epic or a dolphin to develop a scientific theory, it has frequently been asked whether these or other animals are close in intelligence to young children. If so, we must wonder whether animals can acquire a language. In the last half century, much effort has been put into trying to answer that question by teaching animals, primarily apes, a basic language. There have been some limited successes, with animals using signs to obtain things in which they were interested, for instance. But no animal has yet acquired the linguistic capability that children have already in their third year of life. “Why?” This is a question children start asking by the age of three at the latest. No animal has yet asked anything. “Why?” is a very important question: it shows that those asking it are aware they don’t know something they wish to know. Understanding the why-question is also necessary for the ability to justify our actions and thoughts. The fact that animals don’t ask “why?” shows they don’t aspire to knowledge and are incapable of justification. “No!” © 2017 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23298 - Posted: 03.01.2017

Scientists who spent years listening to the communication calls of one of our closest ape relatives say their eavesdropping has shed light on the origin of human language. Dr Adriano Reis e Lameira from Durham University recorded and analysed almost 5,000 orangutan "kiss squeaks". He found that the animals combined these purse-lipped, "consonant-like" calls to convey different messages. This could be a glimpse of how our ancestors formed the earliest words. The findings are published in the journal Nature Human Behaviour. "Human language is extraordinarily advanced and complex - we can pretty much transmit any information we want into sound," said Dr Reis e Lameira. "So we tend to think that maybe words evolved from some rudimentary precursor to transmit more complex messages. "We were basically using the orangutan vocal behaviour as a time machine - back to a time when our ancestors were using what would become [those precursors] of consonants and vowels." The team studied kiss squeaks in particular because, like many consonants - the /t/, /p/, /k/ sounds - they depend on the action of the lips, tongue and jaw rather than the voice. "Kiss squeaks do not involve vocal fold action, so they're acoustically and articulatorily consonant-like," explained Dr Reis e Lameira. In comparison to research into vowel-like primate calls, the scientists explained, the study of consonants in the evolution of language has been more difficult. But as Prof Serge Wich from Liverpool John Moores University, a lead author of the study, said, they are crucial "building blocks" in the evolution of language. "Most human languages have a lot more consonants than vowels," said Prof Wich. "And if we have more building blocks, we have more combinations." © 2017 BBC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23204 - Posted: 02.09.2017

Erin Hare One chilly day in February 1877, a British cotton baron named Joseph Sidebotham heard what he thought was a canary warbling near his hotel window. He was vacationing with his family in France, and soon realized the tune wasn’t coming from outside. “The singing was in our salon,” he wrote of the incident in Nature. “The songster was a mouse.” The family fed the creature bits of biscuit, and it quickly became comfortable enough to climb onto the warm hearth at night and regale them with songs. It would sing for hours. Clearly, Sidebotham concluded, this was no ordinary mouse. More than a century later, however, scientists discovered he was wrong. It turns out that all mice chitter away to each other. Their language is usually just too high-pitched for human ears to detect. Today, mouse songs are no mere curiosity. Researchers are able to engineer mice to express genetic mutations associated with human speech disorders, and then measure the changes in the animals’ songs. They’re leveraging these beautifully complex vocalizations to uncover the mysteries of human speech. Anecdotal accounts of singing mice date back to 1843. In the journal The Zoologist, the British entomologist and botanist Edward Newman wrote that the song of a rare “murine Orpheus” sounds as “if the mouth of a canary were carefully closed, and the bird, in revenge, were to turn ventriloquist, and sing in the very centre of his stomach.” © 2017 by The Atlantic Monthly Group.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23162 - Posted: 01.28.2017

By Avi Selk “Oh Long Johnson,” a cat once said, back in the primordial history of Internet memes. “Oh Don Piano. Why I eyes ya.” Or so said the captions — appended to the gibberish of a perturbed house cat on “America's Funniest Home Videos” in 1999 and rediscovered in the YouTube era, when millions of people heard something vaguely human echo in a distant species. It was weird. And hilarious. And just maybe, profound. As the “Oh Long Johnson” craze was fading a few years ago, a wave of scientific discoveries about apes and monkeys began upending old assumptions about the origins of language. Only humans could willfully control their vocal tracts, went the established wisdom. Until Koko the gorilla coughed on command. Surely, then, our vowels were ours alone. But this month, researchers picked up British ohs in the babble of baboons. Study after study is dismantling a hypothesis that has stood for decades: that the seeds of language did not exist before modern humans, who got all the way to Shakespeare from scratch. And if so much of what we thought we knew about the uniqueness of human speech was wrong, some think it's time to take a second look at talking pet tricks. “It's humbling to understand that humans, in the end, are just another species of primate,” said Marcus Perlman, who led the Koko study in 2015. © 1996-2017 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23122 - Posted: 01.19.2017

By Helen Briggs BBC News Babies build knowledge about the language they hear even in the first few months of life, research shows. If you move countries and forget your birth language, you retain this hidden ability, according to a study. Dutch-speaking adults adopted from South Korea exceeded expectations at Korean pronunciation when retrained after losing their birth language. Scientists say parents should talk to babies as much as possible in early life. Dr Jiyoun Choi of Hanyang University in Seoul led the research. The study is the first to show that the early experience of adopted children in their birth language gives them an advantage decades later even if they think it is forgotten, she said. ''This finding indicates that useful language knowledge is laid down in [the] very early months of life, which can be retained without further input of the language and revealed via re-learning,'' she told BBC News. In the study, adults aged about 30 who had been adopted as babies by Dutch-speaking families were asked to pronounce Korean consonants after a short training course. Korean consonants are unlike those spoken in Dutch. The participants were compared with a group of adults who had not been exposed to the Korean language as children and then rated by native Korean speakers. Both groups performed to the same level before training, but after training the international adoptees exceeded expectations. There was no difference between children who were adopted under six months of age - before they could speak - and those who were adopted after 17 months, when they had learned to talk. This suggests that the language knowledge retained is abstract in nature, rather than dependent on the amount of experience. © 2017 BBC

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 23118 - Posted: 01.18.2017

By Tanya Lewis To the untrained listener, a bunch of babbling baboons may not sound like much. But sharp-eared experts have now found that our primate cousins can actually produce humanlike vowel sounds. The finding suggests the last common ancestor of humans and baboons may have possessed the vocal machinery for speech—hinting at a much earlier origin for language than previously thought. Researchers from the National Center for Scientific Research (CNRS) and Grenoble Alpes University, both in France, and their colleagues recorded baboons in captivity, finding the animals were capable of producing five distinct sounds that have the same characteristic frequencies as human vowels. As reported today in PLoS ONE, the animals could make these sounds despite the fact that, as dissections later revealed, they possess high voice boxes, or larynxes, an anatomical feature long thought to be an impediment to speech. “This breaks a serious logjam” in the study of language, says study co-author Thomas Sawallis, a linguist at the University of Alabama. “Theories of language evolution have developed based on the idea that full speech was only available to anatomically modern Homo sapiens,” approximately 70,000 to 100,000 years ago, he says, but in fact, “we could have had the beginnings of speech 25 million years ago.” The evolution of language is considered one of the hardest problems in science, because the process left no fossil evidence behind. One practical approach, however, is to study the mechanics of speech. Language consists roughly of different combinations of vowels and consonants. Notably, humans possess low larynxes, which makes it easier to produce a wide range of vowel sounds (and as Darwin observed, also makes it easier for us to choke on food). 
A foundational theory of speech production, developed by Brown University cognitive scientist Philip Lieberman in the 1960s, states that the high larynxes, and thus shorter vocal tracts, of most nonhuman primates prevent them from producing vowel-like sounds. Yet recent research calls Lieberman’s hypothesis into question. © 2017 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23089 - Posted: 01.12.2017

By Veronique Greenwood Babies' ability to soak up language makes them the envy of adult learners everywhere. Still, some grown-ups can acquire new tongues with surprising ease. Now some studies suggest it is possible to predict a person's language-learning abilities from his or her brain structure or activity—results that may eventually be used to help even the most linguistically challenged succeed. In one study, published in 2015 in the Journal of Neurolinguistics, a team of researchers looked at the structure of neuron fibers in white matter in 22 beginning Mandarin students. Those who had more spatially aligned fibers in their right hemisphere had higher test scores after four weeks of classes, the scientists found. Like a freeway express lane, highly aligned fibers are thought to speed the transfer of information within the brain. Although language is traditionally associated with the left hemisphere, the right, which seems to be involved in pitch perception, may play a role in distinguishing the tones of Mandarin, speculates study author Zhenghan Qi of the Massachusetts Institute of Technology. Wired for Learning Your ability to learn a new language may be influenced by brain wiring. Diffusion tensor imaging of native English speakers learning Mandarin reveals that people who learn better have more aligned nerve fibers (shown with warmer colors) in two regions in the right hemisphere (A and B). In this case, subject 2, who has more aligned fibers, was a more successful learner than subject 1. © 2016 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23019 - Posted: 12.26.2016

Ramin Skibba The high-pitched squeals of the humble bat may be as complex as the calls of dolphins and monkeys, researchers have found. A study published on 22 December in Scientific Reports reveals that the fruit bat is one of only a few animals known to direct its calls at specific individuals in a colony, and suggests that information in the calls of many social animals may be more detailed than was previously thought. Bats are noisy creatures, especially in their crowded caves, where they make calls to their neighbours. “If you go into a fruit-bat cave, you hear a cacophony,” says Yossi Yovel, a neuroecologist at Tel Aviv University in Israel who led the study. Until now, it has been difficult to separate this noise into distinct sounds, or to determine what prompted the individual to make a particular call. “Animals make sounds for a reason,” says Whitlow Au, a marine-bioacoustics scientist at the University of Hawaii at Manoa. “Most of the time, we don’t quite understand those reasons.” To find out what bats are talking about, Yovel and his colleagues monitored 22 captive Egyptian fruit bats (Rousettus aegyptiacus) around the clock for 75 days. They modified a voice-recognition program to analyse approximately 15,000 vocalizations collected during this time. The program was able to tie specific sounds to different social interactions captured by video, such as when two bats fought over food. © 2016 Macmillan Publishers

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23009 - Posted: 12.23.2016

By Veronique Greenwood Baffling grammar, strange vowels, quirky idioms and so many new words—all of this makes learning a new language hard work. Luckily, researchers have discovered a number of helpful tricks, ranging from exposing your ears to a variety of native speakers to going to sleep soon after a practice session. A pair of recent papers suggests that even when you are not actively studying, what you hear can affect your learning and that sometimes listening without speaking works best. In one study, published in 2015 in the Journal of the Acoustical Society of America, linguists found that people who took breaks from learning new sounds performed just as well as those who took no breaks, as long as the sounds continued to play in the background. The researchers trained two groups of people to distinguish among trios of similar sounds—for instance, Hindi has “p,” “b” and a third sound English speakers mistake for “b.” One group practiced telling these apart one hour a day for two days. Another group alternated between 10 minutes of the task and 10 minutes of a “distractor” task that involved matching symbols on a worksheet while the sounds continued to play in the background. Remarkably, the group that switched between tasks improved just as much as the one that focused on the distinguishing task the entire time. “There's something about our brains that makes it possible to take advantage of the things you've already paid attention to and to keep paying attention to them,” even when you are focused on something else, suggests Melissa Baese-Berk, a linguist at the University of Oregon and a co-author of the study. In a 2016 study published in the Journal of Memory and Language, Baese-Berk and another colleague found that it is better to listen to new sounds silently rather than practice saying them yourself at the same time. 
Spanish speakers learning to distinguish among sounds in the Basque language performed more poorly when they were asked to repeat one of the sounds during training. The findings square with what many teachers have intuited—that a combination of focused practice and passive exposure to a language is the best approach. “You need to come to class and pay attention,” Baese-Berk says, “but when you go home, turn on the TV or turn on the radio in that language while you're cooking dinner, and even if you're not paying total attention to it, it's going to help you.” © 2016 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 22982 - Posted: 12.13.2016

Carl Zimmer Primates are unquestionably clever: Monkeys can learn how to use money, and chimpanzees have a knack for game theory. But no one has ever taught a nonhuman primate to say “hello.” Scientists have long been intrigued by the failure of primates to talk like us. Understanding the reasons may offer clues to how our own ancestors evolved full-blown speech, one of our most powerful adaptations. On Friday, a team of researchers reported that monkeys have a vocal tract capable of human speech. They argue that other primates can’t talk because they lack the right wiring in their brains. “A monkey’s vocal tract would be perfectly adequate to produce hundreds, thousands of words,” said W. Tecumseh Fitch, a cognitive scientist at the University of Vienna and a co-author of the new study. Human speech results from a complicated choreography of flowing air and contracting muscles. To make a particular sound, we have to give the vocal tract a particular shape. The vocal tracts of other primates contain the same elements as ours — from vocal cords to tongues to lips — but their geometry is different. That difference long ago set scientists to debating whether primates could make speechlike sounds. In the 1960s, Philip H. Lieberman, now a professor emeritus of Brown University, and his colleagues went so far as to pack a dead monkey’s vocal tract with plaster to get a three-dimensional rendering. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22975 - Posted: 12.10.2016

By Michael Price The famed parrot Alex had a vocabulary of more than 100 words. Koshik the elephant learned to “speak” a bit of Korean by using the tip of his trunk the way people whistle with their fingers. So it’s puzzling that our closest primate cousins are limited to hoots, coos, and grunts. For decades, monkeys’ and apes’ vocal anatomy has been blamed for their inability to reproduce human speech sounds, but a new study suggests macaque monkeys—and by extension, other primates—could indeed talk if they only possessed the brain wiring to do so. The findings might provide new clues to anthropologists and language researchers looking to pin down when humans learned to speak. “This certainly shows that the macaque vocal tract is capable of a lot more than has previously been assumed,” says John Esling, a linguist and phonetics expert at the University of Victoria in Canada, who was not involved with the work. The study’s lead author, William Tecumseh Sherman Fitch III, an evolutionary biologist and cognitive scientist at the University of Vienna, says the question of why monkeys and apes can’t speak goes back to Darwin. (Yes, Fitch is the great-great-great-grandson of U.S. Civil War General William Tecumseh Sherman.) Darwin thought nonhuman primates couldn’t talk because they didn’t have the brains, he says. But over time, anthropologists instead embraced the idea that the primates’ vocal tracts were holding them back: They simply lacked the flexibility to produce the wide range of vowels present in human speech. That remains the “textbook answer” today, Fitch says. © 2016 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22974 - Posted: 12.10.2016

Anya Kamenetz Brains, brains, brains. One thing we've learned at NPR Ed is that people are fascinated by brain research. And yet it can be hard to point to places where our education system is really making use of the latest neuroscience findings. But there is one happy nexus where research is meeting practice: bilingual education. "In the last 20 years or so, there's been a virtual explosion of research on bilingualism," says Judith Kroll, a professor at the University of California, Riverside. Again and again, researchers have found, "bilingualism is an experience that shapes our brain for a lifetime," in the words of Gigi Luk, an associate professor at Harvard's Graduate School of Education. At the same time, one of the hottest trends in public schooling is what's often called dual-language or two-way immersion programs. Traditional programs for English-language learners, or ELLs, focus on assimilating students into English as quickly as possible. Dual-language classrooms, by contrast, provide instruction across subjects to both English natives and English learners, in both English and a target language. The goal is functional bilingualism and biliteracy for all students by middle school. New York City, North Carolina, Delaware, Utah, Oregon and Washington state are among the places expanding dual-language classrooms. © 2016 npr

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 22934 - Posted: 11.30.2016

By John Horgan Asked for a comment on the language-acquisition theory of Noam Chomsky (in photo above), psychologist Steven Pinker says: “Chomsky has been a piñata, where anyone who finds some evidence that some aspect of language is learned (and there are plenty), or some grammatical phenomenon varies from language to language, claims to have slain the king. It has not been a scientifically productive debate, unfortunately.” Credit: Ministerio de Cultura de la Nación Argentina Flickr (CC BY-SA 2.0) Noam Chomsky’s political views attract so much attention that it’s easy to forget he’s a scientist, one of the most influential who ever lived. Beginning in the 1950s, Chomsky contended that all humans possess an innate capacity for language, activated in infancy by minimal environmental stimuli. He has elaborated and revised his theory of language acquisition ever since. Chomsky’s ideas have profoundly affected linguistics and mind-science in general. Critics attacked his theories from the get-go and are still attacking, paradoxically demonstrating his enduring dominance. Some attacks are silly. For example, in his new book The Kingdom of Speech Tom Wolfe asserts that both Darwin and “Noam Charisma” were wrong. (See journalist Charles Mann’s evisceration of Wolfe.) © 2016 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22927 - Posted: 11.29.2016

By Felicity Muth Kirsty Graham is a PhD student at the University of St Andrews, Scotland, who works on gestural communication of chimpanzees and bonobos in Uganda and DRCongo. I recently asked her some questions about the work that she does and some exciting recent findings of hers about how these animals communicate. How did you become interested in communication, and specifically gestures? Languages are fascinating – the diversity, the culture, the learning – and during undergrad, I became interested in the origins of our language ability. I went to Quest University Canada (a small liberal arts university) and learned that I could combine my love of languages and animals and being outdoors! Other great apes don’t have language in the way that humans do, but studying different aspects of communication, such as gestures, may reveal how language evolved. Although my interest really started from an interest in languages, once you get so deep into studying other species you become excited about their behaviour for its own sake. In the long run, it would be nice to piece together how language evolved, but for now I’m starting with a very small piece of the puzzle – bonobo gestures. How do you study gestures in non-human primates? There are a few different approaches to studying gestures: in the wild or in captivity; through observation or with experiments; studying one gesture in detail or looking at the whole repertoire. I chose to observe wild bonobos and look at their whole repertoire. Since not much is known about bonobo gestural communication, this seemed like a good starting point. During my PhD, I spent 12 months at Wamba (Kyoto University’s research site) in the DRCongo. I filmed the bonobos, anticipating the beginning of social interactions so that I could record the gestures that they use. Then I spent a long time watching the videos, finding gestures, and coding information about the gestures. © 2016 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22840 - Posted: 11.07.2016