Links for Keyword: Language

Links 21 - 40 of 601

By Tanya Lewis To the untrained listener, a bunch of babbling baboons may not sound like much. But sharp-eared experts have now found that our primate cousins can actually produce humanlike vowel sounds. The finding suggests the last common ancestor of humans and baboons may have possessed the vocal machinery for speech—hinting at a much earlier origin for language than previously thought. Researchers from the National Center for Scientific Research (CNRS) and Grenoble Alpes University, both in France, and their colleagues recorded baboons in captivity, finding the animals were capable of producing five distinct sounds that have the same characteristic frequencies as human vowels. As reported today in PLoS ONE, the animals could make these sounds despite the fact that, as dissections later revealed, they possess high voice boxes, or larynxes, an anatomical feature long thought to be an impediment to speech. “This breaks a serious logjam” in the study of language, says study co-author Thomas Sawallis, a linguist at the University of Alabama. “Theories of language evolution have developed based on the idea that full speech was only available to anatomically modern Homo sapiens,” approximately 70,000 to 100,000 years ago, he says, but in fact, “we could have had the beginnings of speech 25 million years ago.” The evolution of language is considered one of the hardest problems in science, because the process left no fossil evidence behind. One practical approach, however, is to study the mechanics of speech. Language consists roughly of different combinations of vowels and consonants. Notably, humans possess low larynxes, which make it easier to produce a wide range of vowel sounds (and, as Darwin observed, also make it easier for us to choke on food). A foundational theory of speech production, developed by Brown University cognitive scientist Philip Lieberman in the 1960s, states that the high larynxes, and thus shorter vocal tracts, of most nonhuman primates prevent them from producing vowel-like sounds. Yet recent research calls Lieberman’s hypothesis into question. © 2017 Scientific American
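
The vowel comparison in studies like this rests on formant analysis: vowels are identified by the resonant frequencies of the vocal tract (F1, F2, and so on). The sketch below shows the standard LPC-based recipe for estimating formants, assuming the librosa library is available; it is a generic illustration rather than the study's actual pipeline, and the file name and thresholds are hypothetical.

```python
# Generic LPC formant-estimation sketch (not the baboon study's pipeline).
# The file name, sample rate, and thresholds are illustrative assumptions.
import numpy as np
import librosa

def estimate_formants(path, order=12):
    y, sr = librosa.load(path, sr=16000)         # mono audio at 16 kHz
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])   # pre-emphasis boosts highs
    a = librosa.lpc(y, order=order)              # all-pole vocal-tract model
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]            # one root per conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)   # pole angle -> frequency (Hz)
    bands = -np.log(np.abs(roots)) * sr / np.pi  # pole radius -> bandwidth (Hz)
    # Formants are narrow-bandwidth resonances above ~90 Hz (F1, F2, ...).
    return sorted(f for f, b in zip(freqs, bands) if f > 90 and b < 400)[:3]

# Vowels are typically compared in the F1-F2 plane: a high F1 with a low F2,
# for instance, suggests an /a/-like sound.
print(estimate_formants("call.wav"))  # hypothetical recording
```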

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23089 - Posted: 01.12.2017

By Veronique Greenwood Babies' ability to soak up language makes them the envy of adult learners everywhere. Still, some grown-ups can acquire new tongues with surprising ease. Now some studies suggest it is possible to predict a person's language-learning abilities from his or her brain structure or activity—results that may eventually be used to help even the most linguistically challenged succeed. In one study, published in 2015 in the Journal of Neurolinguistics, a team of researchers looked at the structure of neuron fibers in white matter in 22 beginning Mandarin students. Those who had more spatially aligned fibers in their right hemisphere had higher test scores after four weeks of classes, the scientists found. Like a freeway express lane, highly aligned fibers are thought to speed the transfer of information within the brain. Although language is traditionally associated with the left hemisphere, the right, which seems to be involved in pitch perception, may play a role in distinguishing the tones of Mandarin, speculates study author Zhenghan Qi of the Massachusetts Institute of Technology.
[Figure caption] Wired for Learning: Your ability to learn a new language may be influenced by brain wiring. Diffusion tensor imaging of native English speakers learning Mandarin reveals that people who learn better have more aligned nerve fibers (shown with warmer colors) in two regions in the right hemisphere (A and B). In this case, subject 2, who has more aligned fibers, was a more successful learner than subject 1. © 2016 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23019 - Posted: 12.26.2016

Ramin Skibba The high-pitched squeals of the humble bat may be as complex as the calls of dolphins and monkeys, researchers have found. A study published on 22 December in Scientific Reports reveals that the fruit bat is one of only a few animals known to direct its calls at specific individuals in a colony, and suggests that information in the calls of many social animals may be more detailed than was previously thought. Bats are noisy creatures, especially in their crowded caves, where they make calls to their neighbours. “If you go into a fruit-bat cave, you hear a cacophony,” says Yossi Yovel, a neuroecologist at Tel Aviv University in Israel who led the study. Until now, it has been difficult to separate this noise into distinct sounds, or to determine what prompted the individual to make a particular call. “Animals make sounds for a reason,” says Whitlow Au, a marine-bioacoustics scientist at the University of Hawaii at Manoa. “Most of the time, we don’t quite understand those reasons.” To find out what bats are talking about, Yovel and his colleagues monitored 22 captive Egyptian fruit bats (Rousettus aegyptiacus) around the clock for 75 days. They modified a voice-recognition program to analyse approximately 15,000 vocalizations collected during this time. The program was able to tie specific sounds to different social interactions captured by video, such as when two bats fought over food. © 2016 Macmillan Publishers
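
A "voice-recognition program" adapted this way is essentially a classifier mapping acoustic features of each call to the social context seen on video. As a rough sketch of that general approach only, and not the authors' actual method, here is a minimal MFCC-plus-random-forest pipeline; the file names and context labels are hypothetical.

```python
# Generic sketch: classify animal calls by social context from audio.
# Not the bat study's actual program; files and labels are hypothetical.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def call_features(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # spectral envelope
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Calls labeled by the interaction captured on synchronized video
# (~15,000 such clips in the study; four shown here for illustration).
train_clips = ["call_0001.wav", "call_0002.wav", "call_0003.wav", "call_0004.wav"]
train_labels = ["food", "perch", "food", "mating"]       # hypothetical contexts

X = np.array([call_features(p) for p in train_clips])
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, train_labels)

# Predict the social context of a new, unlabeled call.
print(clf.predict([call_features("call_9999.wav")]))
```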

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23009 - Posted: 12.23.2016

By Veronique Greenwood Baffling grammar, strange vowels, quirky idioms and so many new words—all of this makes learning a new language hard work. Luckily, researchers have discovered a number of helpful tricks, ranging from exposing your ears to a variety of native speakers to going to sleep soon after a practice session. A pair of recent papers suggests that even when you are not actively studying, what you hear can affect your learning and that sometimes listening without speaking works best. In one study, published in 2015 in the Journal of the Acoustical Society of America, linguists found that people who took breaks from learning new sounds performed just as well as those who took no breaks, as long as the sounds continued to play in the background. The researchers trained two groups of people to distinguish among trios of similar sounds—for instance, Hindi has “p,” “b” and a third sound English speakers mistake for “b.” One group practiced telling these apart one hour a day for two days. Another group alternated between 10 minutes of the task and 10 minutes of a “distractor” task that involved matching symbols on a worksheet while the sounds continued to play in the background. Remarkably, the group that switched between tasks improved just as much as the one that focused on the distinguishing task the entire time. “There's something about our brains that makes it possible to take advantage of the things you've already paid attention to and to keep paying attention to them,” even when you are focused on something else, suggests Melissa Baese-Berk, a linguist at the University of Oregon and a co-author of the study. In a 2016 study published in the Journal of Memory and Language, Baese-Berk and another colleague found that it is better to listen to new sounds silently rather than practice saying them yourself at the same time. Spanish speakers learning to distinguish among sounds in the Basque language performed more poorly when they were asked to repeat one of the sounds during training. The findings square with what many teachers have intuited—that a combination of focused practice and passive exposure to a language is the best approach. “You need to come to class and pay attention,” Baese-Berk says, “but when you go home, turn on the TV or turn on the radio in that language while you're cooking dinner, and even if you're not paying total attention to it, it's going to help you.” © 2016 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 22982 - Posted: 12.13.2016

Carl Zimmer Primates are unquestionably clever: Monkeys can learn how to use money, and chimpanzees have a knack for game theory. But no one has ever taught a nonhuman primate to say “hello.” Scientists have long been intrigued by the failure of primates to talk like us. Understanding the reasons may offer clues to how our own ancestors evolved full-blown speech, one of our most powerful adaptations. On Friday, a team of researchers reported that monkeys have a vocal tract capable of human speech. They argue that other primates can’t talk because they lack the right wiring in their brains. “A monkey’s vocal tract would be perfectly adequate to produce hundreds, thousands of words,” said W. Tecumseh Fitch, a cognitive scientist at the University of Vienna and a co-author of the new study. Human speech results from a complicated choreography of flowing air and contracting muscles. To make a particular sound, we have to give the vocal tract a particular shape. The vocal tracts of other primates contain the same elements as ours — from vocal cords to tongues to lips — but their geometry is different. That difference long ago set scientists to debating whether primates could make speechlike sounds. In the 1960s, Philip H. Lieberman, now a professor emeritus at Brown University, and his colleagues went so far as to pack a dead monkey’s vocal tract with plaster to get a three-dimensional rendering. © 2016 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22975 - Posted: 12.10.2016

By Michael Price The famed parrot Alex had a vocabulary of more than 100 words. Kosik the elephant learned to “speak” a bit of Korean by using the tip of his trunk the way people whistle with their fingers. So it’s puzzling that our closest primate cousins are limited to hoots, coos, and grunts. For decades, monkeys’ and apes’ vocal anatomy has been blamed for their inability to reproduce human speech sounds, but a new study suggests macaque monkeys—and by extension, other primates—could indeed talk if they only possessed the brain wiring to do so. The findings might provide new clues to anthropologists and language researchers looking to pin down when humans learned to speak. “This certainly shows that the macaque vocal tract is capable of a lot more than has previously been assumed,” says John Esling, a linguist and phonetics expert at the University of Victoria in Canada, who was not involved with the work. The study’s lead author, William Tecumseh Sherman Fitch III, an evolutionary biologist and cognitive scientist at the University of Vienna, says the question of why monkeys and apes can’t speak goes back to Darwin. (Yes, Fitch is the great-great-great-grandson of U.S. Civil War General William Tecumseh Sherman.) Darwin thought nonhuman primates couldn’t talk because they didn’t have the brains, he says. But over time, anthropologists instead embraced the idea that the primates’ vocal tracts were holding them back: They simply lacked the flexibility to produce the wide range of vowels present in human speech. That remains the “textbook answer” today, Fitch says. © 2016 American Association for the Advancement of Science.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22974 - Posted: 12.10.2016

Anya Kamenetz Brains, brains, brains. One thing we've learned at NPR Ed is that people are fascinated by brain research. And yet it can be hard to point to places where our education system is really making use of the latest neuroscience findings. But there is one happy nexus where research is meeting practice: bilingual education. "In the last 20 years or so, there's been a virtual explosion of research on bilingualism," says Judith Kroll, a professor at the University of California, Riverside. Again and again, researchers have found, "bilingualism is an experience that shapes our brain for a lifetime," in the words of Gigi Luk, an associate professor at Harvard's Graduate School of Education. At the same time, one of the hottest trends in public schooling is what's often called dual-language or two-way immersion programs. Traditional programs for English-language learners, or ELLs, focus on assimilating students into English as quickly as possible. Dual-language classrooms, by contrast, provide instruction across subjects to both English natives and English learners, in both English and a target language. The goal is functional bilingualism and biliteracy for all students by middle school. New York City, North Carolina, Delaware, Utah, Oregon and Washington state are among the places expanding dual-language classrooms. © 2016 npr

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 22934 - Posted: 11.30.2016

By John Horgan Asked for a comment on the language-acquisition theory of Noam Chomsky, psychologist Steven Pinker says: “Chomsky has been a piñata, where anyone who finds some evidence that some aspect of language is learned (and there are plenty), or some grammatical phenomenon varies from language to language, claims to have slain the king. It has not been a scientifically productive debate, unfortunately.” Noam Chomsky’s political views attract so much attention that it’s easy to forget he’s a scientist, one of the most influential who ever lived. Beginning in the 1950s, Chomsky contended that all humans possess an innate capacity for language, activated in infancy by minimal environmental stimuli. He has elaborated and revised his theory of language acquisition ever since. Chomsky’s ideas have profoundly affected linguistics and mind-science in general. Critics attacked his theories from the get-go and are still attacking, paradoxically demonstrating his enduring dominance. Some attacks are silly. For example, in his new book The Kingdom of Speech Tom Wolfe asserts that both Darwin and “Noam Charisma” were wrong. (See journalist Charles Mann’s evisceration of Wolfe.) © 2016 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22927 - Posted: 11.29.2016

By Felicity Muth Kirsty Graham is a PhD student at the University of St Andrews, Scotland, who works on gestural communication of chimpanzees and bonobos in Uganda and the Democratic Republic of the Congo. I recently asked her some questions about the work that she does and some exciting recent findings of hers about how these animals communicate.

How did you become interested in communication, and specifically gestures?
Languages are fascinating – the diversity, the culture, the learning – and during undergrad, I became interested in the origins of our language ability. I went to Quest University Canada (a small liberal arts university) and learned that I could combine my love of languages and animals and being outdoors! Other great apes don’t have language in the way that humans do, but studying different aspects of communication, such as gestures, may reveal how language evolved. Although my interest really started from an interest in languages, once you get so deep into studying other species you become excited about their behaviour for its own sake. In the long run, it would be nice to piece together how language evolved, but for now I’m starting with a very small piece of the puzzle – bonobo gestures.

How do you study gestures in non-human primates?
There are a few different approaches to studying gestures: in the wild or in captivity; through observation or with experiments; studying one gesture in detail or looking at the whole repertoire. I chose to observe wild bonobos and look at their whole repertoire. Since not much is known about bonobo gestural communication, this seemed like a good starting point. During my PhD, I spent 12 months at Wamba (Kyoto University’s research site) in the Democratic Republic of the Congo. I filmed the bonobos, anticipating the beginning of social interactions so that I could record the gestures that they use. Then I spent a long time watching the videos, finding gestures, and coding information about the gestures. © 2016 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22840 - Posted: 11.07.2016

By Virginia Morell There will never be a horse like Mr. Ed, the talking equine TV star. But scientists have discovered that the animals can learn to use another human tool for communicating: pointing to symbols. They join a short list of other species, including some primates, dolphins, and pigeons, with this talent. Scientists taught 23 riding horses of various breeds to look at a display board with three icons, representing wearing or not wearing a blanket. Horses could choose between a “no change” symbol or symbols for “blanket on” or “blanket off.” Previously, their owners made this decision for them. Horses are adept at learning and following signals people give them, and it took these equines an average of 10 days to learn to approach and touch the board and to understand the meaning of the symbols. All 23 horses learned the entire task within 14 days. They were then tested in various weather conditions to see whether they could use the board to tell their trainers about their blanket preferences. The scientists report online in Applied Animal Behaviour Science that the horses did not touch the symbols randomly, but made their choices based on the weather. If it was wet, cold, and windy, they touched the "blanket on" icon; horses that were already wearing a blanket nosed the “no change” image. But when the weather was sunny, the animals touched the "blanket off" symbol; those that weren’t blanketed pressed the “no change” icon. The study’s strong results show that the horses understood the consequences of their choices, say the scientists, who hope that other researchers will use their method to ask horses more questions. © 2016 American Association for the Advancement of Science.

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22684 - Posted: 09.23.2016

By Rajeev Raizada
[Figure caption] These brain maps show how accurately it was possible to predict neural activation patterns for new, previously unseen sentences, in different regions of the brain. The brighter the area, the higher the accuracy. The most accurate area, which can be seen as the bright yellow strip, is a region in the left side of the brain known as the Superior Temporal Sulcus. This region achieved statistically significant sentence predictions in 11 out of the 14 people whose brains were scanned. Although that was the most accurate region, several other regions, broadly distributed across the brain, also produced significantly accurate sentence predictions. Credit: University of Rochester graphic / Andrew Anderson and Xixi Wang. Used with permission.
Words, like people, can achieve a lot more when they work together than when they stand on their own. Words working together make sentences, and sentences can express meanings that are unboundedly rich. How the human brain represents the meanings of sentences has been an unsolved problem in neuroscience, but my colleagues and I recently published work in the journal Cerebral Cortex that casts some light on the question. Here, my aim is to give a bigger-picture overview of what that work was about, and what it told us that we did not know before. To measure people's brain activation, we used fMRI (functional Magnetic Resonance Imaging). When fMRI studies were first carried out, in the early 1990s, they mostly just asked which parts of the brain “light up,” i.e. which brain areas are active when people perform a given task. © 2016 Scientific American
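
A common way to score such predictions, and roughly the logic behind accuracy maps like the one in the caption above, is a leave-two-out matching test: the model's predicted patterns for two held-out sentences must be matched to the correct observed patterns by correlation. Below is a minimal sketch with random stand-in arrays; it illustrates the general idea, not the paper's exact analysis.

```python
# Leave-two-out matching test for predicted fMRI patterns (general idea
# only, not the Cerebral Cortex paper's exact analysis). All arrays are
# random stand-ins for real data.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 500
observed_1, observed_2 = rng.normal(size=(2, n_voxels))          # held-out sentences
predicted_1 = observed_1 + rng.normal(scale=0.8, size=n_voxels)  # model outputs
predicted_2 = observed_2 + rng.normal(scale=0.8, size=n_voxels)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# The prediction counts as correct if matching each predicted pattern to
# its own sentence beats the swapped assignment.
right = corr(predicted_1, observed_1) + corr(predicted_2, observed_2)
wrong = corr(predicted_1, observed_2) + corr(predicted_2, observed_1)
print("correct match" if right > wrong else "swapped match")
# Repeating this over many sentence pairs within local voxel neighborhoods
# ("searchlights") yields region-by-region accuracy maps like those above.
```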

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 22676 - Posted: 09.21.2016

By JAMES GORMAN Who’s a good dog? Well, that depends on whom you’re asking, of course. But new research suggests that the next time you look at your pup, whether Maltese or mastiff, you might want to choose your words carefully. “Both what we say and how we say it matters to dogs,” said Attila Andics, a research fellow at Eötvös Loránd University in Budapest. Dr. Andics, who studies language and behavior in dogs and humans, along with Ádám Miklósi and several other colleagues, reported in a paper to be published in this week’s issue of the journal Science that different parts of dogs’ brains respond to the meaning of a word, and to how the word is said, much as human brains do.
[Photo caption] A dog waiting for its brain activity to be measured in a magnetic resonance imaging machine for research reported in the journal Science. Credit: Enikő Kubinyi
As with people’s brains, parts of dogs’ left hemisphere react to meaning and parts of the right hemisphere to intonation — the emotional content of a sound. And, perhaps most interesting to dog owners, only a word of praise said in a positive tone really made the reward system of a dog’s brain light up. The experiment itself was something of an achievement. Dr. Andics and his colleagues trained dogs to enter a magnetic resonance imaging machine and lie in a harness while the machine recorded their brain activity. A trainer spoke words in Hungarian — common words of praise used by dog owners like “good boy,” “super” and “well done.” The trainer also tried neutral words like “however” and “nevertheless.” Both the praise words and neutral words were offered in positive and neutral tones. The positive words spoken in a positive tone prompted strong activity in the brain’s reward centers. All the other conditions resulted in significantly less action, and all at the same level. © 2016 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22617 - Posted: 08.31.2016

By Daniel Barron After prepping for the day’s cases, “Mike Brennan,” a 63-year-old cardiology technician, sat down for his morning coffee and paper. On the front page, he discovered something troubling: he could no longer read. No matter how long he stared at a word, its meaning was lost on him. With a history of smoking and hypertension, he worried that he might have had a stroke. So, leaving his coffee, he walked himself down the hall to the emergency department, where neurologists performed a battery of tests to tease out what had happened. Mike still recognized individual letters and, with great difficulty, could sound out small words. But even some simple vocabulary presented problems; for example, he read “desk” as “dish” or “flame” as “thame.” Function words such as prepositions and pronouns gave him particular trouble. Mike couldn’t read, but there was nothing wrong with his eyes. Words heard were no problem. He could recognize colors, faces, and objects. He could speak, move, think and even write normally. Mike had “pure alexia,” meaning he could not read but showed no other impairments. An M.R.I. scan of Mike’s brain revealed a pea-sized stroke in his left inferior occipitotemporal cortex, a region on the brain’s surface just behind the left ear. © 2016 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22586 - Posted: 08.23.2016

By NICHOLAS ST. FLEUR Orangutan hear, orangutan do. Researchers at the Indianapolis Zoo observed an orangutan mimic the pitch and tone of human sounds, for the first time. The finding, which was published Wednesday, provides insight into the evolutionary origin of human speech, the team said. “It really redefines for us what we know about the capabilities of orangutans,” said Rob Shumaker, director of the zoo and an author on the paper. “What we have to consider now is the possibility that the origins of spoken language are not exclusively human, and that they may have come from great apes.” Rocky, an 11-year-old orangutan at the zoo, has a special ability. He can make sounds using his vocal folds, or voice box, that resemble the vowel “A,” and sound like “Ah.” The noises, or “wookies” as the researchers called them, are variations of the same vocalization. Sometimes the great ape would say high-pitched “wookies” and sometimes he would say his “Ahs” in a lower pitch. The researchers note that the sounds are specific to Rocky and ones that he used every day. No other orangutan, captive or wild, made these noises. Rocky, who had never lived in the rain forest, apparently learned the skill during his time as an entertainment orangutan before coming to the zoo. He was at one point the most seen orangutan in movies and commercials, according to the zoo. The researchers said that Rocky’s grunts show that great apes have the capacity to learn to control their muscles to deliberately alter their sounds in a “conversational” manner. The findings, which were published in the journal Scientific Reports, challenge the notion that orangutans — an endangered species that shares about 97 percent of its DNA with humans — make noises simply in response to something, sort of like how you might scream when you place your hand on a hot stove. © 2016 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22495 - Posted: 07.30.2016

An orangutan copying sounds made by researchers offers new clues to how human speech evolved, scientists say. Rocky mimicked more than 500 vowel-like noises, suggesting an ability to control his voice and make new sounds. It had been thought these great apes were unable to do this and, since human speech is a learned behaviour, it could not have originated from them. Study lead Dr Adriano Lameira said this "notion" could now be thrown "into the trash can". Dr Lameira, who conducted the research at Amsterdam University prior to joining Durham University, said Rocky's responses had been "extremely accurate". The team wanted to make sure the ape produced a new call, rather than adapting a "normal orangutan call with a personal twist" or matching sounds randomly or by coincidence, he said. The new evidence sets the "start line for scientific inquiry at a higher level", he said. "Ultimately, we should be now in a better position to think of how the different pieces of the puzzle of speech evolution fit together." The calls Rocky made were different from those collected in a large database of recordings, showing he was able to learn and produce new sounds rather than just match those already in his "vocabulary". In a previous study Dr Lameira found a female orangutan at Cologne Zoo in Germany was able to make sounds with a similar pace and rhythm to human speech. Researchers were "astounded" by Tilda's vocal skills but could not prove they had been learned, he said. However, the fact that "other orangutans seem to be exhibiting equivalent vocal skills shows that Rocky is not a bizarre or abnormal individual", Dr Lameira said. © 2016 BBC.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22482 - Posted: 07.27.2016

By Anthea Rowan The neurologist does not cushion his words. He tells us how it is: “She won’t read again.” I am standing behind my mother. I feel her stiffen. We do not talk of this revelation for days — and when we do, we do it in the garden of the rehab facility where she is recovering from a stroke. The stroke has scattered her memory, but she has not forgotten she will apparently not read again. I was shocked by what the doctor said, she confides. Me, too. Do you believe him? she asks. No — I am emphatic, for her and for me — I don’t. Mum smiles: “Me neither.” The damage wreaked by Mum’s stroke leaked across her brain, set up roadblocks so that the cerebral circuit board fizzes and pops uselessly, with messages no longer neatly passing from “A” to “B.” I tell the neuro: “I thought they’d learn to go via ‘D’ or ‘W.’ Isn’t that what’s supposed to happen — messages reroute?” “Unlikely,” he responds. “In your mother’s case.” Alexia — the loss of the ability to read — is common after strokes, especially, as in my mother’s case, when damage is wrought in the brain’s occipital lobe, which processes visual information. Pure alexia, which is Mum’s diagnosis, is much more rare: She can still write and touch-type, but bizarrely, she cannot read.

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22392 - Posted: 07.04.2016

By Karin Brulliard Think about how most people talk to babies: Slowly, simply, repetitively, and with an exaggerated tone. It’s one way children learn the uses and meanings of language. Now scientists have found that some adult birds do that when singing to chicks — and it helps the baby birds better learn their song. The subjects of the new study, published last week in the journal Proceedings of the National Academy of Sciences, were zebra finches. They’re good for this because they breed well in a lab environment, and “they’re just really great singers. They sing all the time,” said McGill University biologist and co-author Jon Sakata. The males, he means — they’re the singers, and they do it for fun and when courting ladies, as well as around baby birds. Never mind that their melody is more “tinny,” according to Sakata, than pretty. Birds in general are helpful for vocal acquisition studies because they, like humans, are among the few species that actually have to learn how to make their sounds, Sakata said. Cats, for example, are born knowing how to meow. But just as people pick up speech and bats learn their calls, birds also have to figure out how to sing their special songs. Sakata and his colleagues were interested in how social interactions between adult zebra finches and chicks influences that learning process. Is face-to-face — or, as it may be, beak-to-beak — learning better? Does simply hearing an adult sing work as well as watching it do so? Do daydreaming baby birds learn as well as their more focused peers? © 1996-2016 The Washington Post

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22286 - Posted: 06.06.2016

By RUSSELL GOLDMAN There’s an elephant at a zoo outside Seoul that speaks Korean. — You mean, it understands some Korean commands, the way a dog can be trained to understand “sit” or “stay”? No, I mean it can actually say Korean words out loud. — Pics or it didn’t happen. Here, watch the video. To be fair, the elephant, a 26-year-old Asian male named Koshik, doesn’t really speak Korean, any more than a parrot can speak Korean (or English or Klingon). But parrots are supposed to, well, parrot — and elephants are not. And Koshik knows how to say at least five Korean words, which are about five more than I do. The really amazing part is how he does it. Koshik places his trunk inside his mouth and uses it to modulate the tone and pitch of the sounds his voice makes, a bit like a person putting his fingers in his mouth to whistle. In this way, Koshik is able to emulate human speech “in such detail that Korean native speakers can readily understand and transcribe the imitations,” according to the journal Current Biology. What’s in his vocabulary? Things he hears all the time from his keepers: the Korean words for hello, sit down, lie down, good and no. Lest you think this is just another circus trick that any Jumbo, Dumbo or Babar could pull off, the team of international scientists who wrote the journal article say Koshik’s skills represent “a wholly novel method of vocal production and formant control in this or any other species.” Like many innovations, Koshik’s may have been born of sad necessity. Researchers say he started to imitate his keepers’ sounds only after he was separated from other elephants at the age of 5 — and that his desire to speak like a human arose from sheer loneliness. © 2016 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22253 - Posted: 05.26.2016

By BENEDICT CAREY Listening to music may make the daily commute tolerable, but streaming a story through the headphones can make it disappear. You were home; now you’re at your desk: What happened? Storytelling happened, and now scientists have mapped the experience of listening to podcasts, specifically “The Moth Radio Hour,” using a scanner to track brain activity. In a paper published Wednesday by the journal Nature, a research team from the University of California, Berkeley, laid out a detailed map of the brain as it absorbed and responded to a story. Widely dispersed sensory, emotional and memory networks were humming, across both hemispheres of the brain; no story was “contained” in any one part of the brain, as some textbooks have suggested. The team, led by Alexander Huth, a postdoctoral researcher in neuroscience, and Jack Gallant, a professor of psychology, had seven volunteers listen to episodes of “The Moth” — first-person stories of love, loss, betrayal, flight from an abusive husband, and more — while recording brain activity with an M.R.I. machine. Using novel computational methods, the group broke down the stories into units of meaning: social elements, for example, like friends and parties, as well as locations and emotions. They found that these concepts fell into 12 categories that tended to cause activation in the same parts of people’s brains at the same points throughout the stories. They then retested that model by seeing how it predicted M.R.I. activity while the volunteers listened to another Moth story. Would related words like mother and father, or times, dates and numbers trigger the same parts of people’s brains? The answer was yes. © 2016 The New York Times Company
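
The underlying method here is what neuroscientists call a voxelwise encoding model: each moment of the story is described by how strongly the words just heard load on the semantic categories, a regularized regression learns each voxel's weights on those features, and the model is scored by how well it predicts responses to a held-out story. The sketch below illustrates that logic with ridge regression on random stand-in data; it is an illustration of the general technique, not the Berkeley group's actual pipeline.

```python
# Voxelwise encoding-model sketch (general technique, not the authors'
# pipeline). All data are random stand-ins for real fMRI recordings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_vox = 1000, 200, 12, 500   # 12 semantic categories

# Rows are fMRI time points; columns are semantic features of the words
# being heard (e.g. social, emotional, numeric content).
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
true_w = rng.normal(size=(n_feat, n_vox))             # synthetic ground truth
Y_train = X_train @ true_w + rng.normal(size=(n_train, n_vox))
Y_test = X_test @ true_w + rng.normal(size=(n_test, n_vox))

model = Ridge(alpha=10.0).fit(X_train, Y_train)       # one weight map per voxel
Y_pred = model.predict(X_test)                        # held-out story

# Score each voxel by correlating predicted and observed responses; mapping
# these scores across cortex gives the kind of semantic atlas described above.
scores = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
                   for v in range(n_vox)])
print(f"median held-out correlation: {np.median(scores):.2f}")
```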

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 22162 - Posted: 04.30.2016

By Andy Coghlan “I’ve become resigned to speaking like this,” he says. The 17-year-old boy’s mother tongue is Dutch, but for his whole life he has spoken with what sounds like a French accent. “This is who I am and it’s part of my personality,” says the boy, who lives in Belgium – where Dutch is an official language – and prefers to remain anonymous. “It has made me stand out as a person.” No matter how hard he tries, his speech sounds French. About 140 cases of foreign accent syndrome (FAS) have been described in scientific studies, but most of these people developed the condition after having a stroke. In the UK, for example, a woman in Newcastle who’d had a stroke in 2006 woke up with a Jamaican accent. Other British cases include a woman who developed a Chinese accent, and another who acquired a pronounced French-like accent overnight following a bout of cerebral vasculitis. But the teenager has had the condition from birth, sparking the interest of Jo Verhoeven of City University London and his team. Scans revealed that, compared with controls, blood flow to two parts of the boy’s brain was significantly reduced. One of these was the prefrontal cortex of the left hemisphere – a finding unsurprising to the team, as it is known to be associated with planning actions, including speech. © Copyright Reed Business Information Ltd.

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22161 - Posted: 04.30.2016