Links for Keyword: Language

Links 1 - 20 of 603

By Rachel Hoge I was waiting in line at my bank’s drive-up service, hoping to make a quick withdrawal. I debated my options: two vacant service lines and one busy one for the ATM. The decision was easy: Wait in the line and deal with a machine. I have a speech disability — a stutter — and interactions with strangers have the potential to be, at the very least, extremely awkward; at worst, I have been mocked, insulted, misjudged or refused service. I avoid interacting with new people, fearful of their judgment. Using the ATM offered me more than just convenience. But the ATM, I soon discovered, was going down for maintenance. I could either leave, returning on a day when the machine was back in service, or speak with a bank teller. Once again, I debated my options. I needed the cash and I was feeling optimistic, so I pulled into the service line. I quickly rehearsed all acceptable variations of what I had to say: I need to withdraw some money from my checking account. Or maybe, to use fewer words: Could I have a withdrawal slip? Or straight to the point: Withdraw, please. I pulled my car forward. Glancing at the teller, I took a deep breath and managed to blurt out: “Can I ppppplease make a wi-wi-with-with-withdrawal?” The teller smiled on the other side of the glass. “Sure,” she said. I wasn’t sure if she had noticed my stutter or simply believed my repetitions (rep-rep-repetitions) and prolongations (ppppprolongations) were just indications of being tongue-tied rather than manifestations of a persistent stutter. I eased back in my seat, trying to relax. © 1996-2017 The Washington Post

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 24296 - Posted: 11.06.2017

By Elizabeth Svoboda When Gerald Shea was 6 years old, a bout of scarlet fever left him partially deaf, though he was not formally diagnosed until turning 34. His disability left him in a liminal space between silence and sound; he grew used to the fuzzed edges of words, the strain of parsing a language that no longer felt fully native. Years ago, he began combing historical records on deafness to lend context to his own experience. But his research turned up something unexpected: a centuries-long procession of leaders and educators who stifled the deaf by forcing them to conform to the ways of the hearing. That is the driving impetus behind “The Language of Light,” Shea’s history of deaf people’s ongoing quest to learn and communicate in signed languages. “Theirs is not an unplanned but a natural, visual poetry, at once both the speech and the music of the Deaf,” he writes. (He capitalizes the word to refer to people who consider themselves part of the deaf culture and community.) In conveying the unique cadence of this silent music — its intricate grammatical structure, its power to express an infinite array of ideas — Shea underscores the tragedy of its suppression. From the outset, he confronts us with a rogues’ gallery of those who suppressed it. During the Middle Ages, self-appointed therapists crammed hot coals into the mouths of deaf people, pierced their eardrums, and drilled holes into their skulls, all in a vain effort to force them to speak. In what was at the time Holland, Johann Conrad Amman moved the lips of his deaf charges into the shapes needed to make certain sounds, but the effort was largely fruitless because they could not hear the sounds they were making. The “silent voices” of Shea’s title has a double resonance: Not only did many deaf people remain literally mute from their disability; their potential to express their ideas fully through sign language went untapped. Copyright 2017 Undark

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 24283 - Posted: 11.03.2017

By Andy Coghlan How and when did we first become able to speak? A new analysis of our DNA reveals key evolutionary changes that reshaped our faces and larynxes, and which may have set the stage for complex speech. The alterations were not major mutations in our genes. Instead, they were tweaks in the activity of existing genes that we shared with our immediate ancestors. These changes in gene activity seem to have given us flat faces, by retracting the protruding chins of our ape ancestors. They also resculpted the larynx and moved it further down in the throat, allowing our ancestors to make sounds with greater subtlety. The study offers an unprecedented glimpse into how our faces and vocal tracts were altered at the genetic level, paving the way for the sophisticated speech we take for granted. However, other anthropologists say changes in the brain were at least equally important. It is also possible that earlier ancestors could speak, but in a more crude way, and that the facial changes simply took things up a notch. Liran Carmel of the Hebrew University of Jerusalem and his colleagues examined DNA from two modern-day people and four humans who lived within the last 50,000 years. They also looked at extinct hominins: two Neanderthals and a Denisovan. Finally, they looked at genetic material from six chimpanzees and data from public databases supplied by living people. © Copyright New Scientist Ltd.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 24004 - Posted: 08.28.2017

Jon Hamilton It's not just what you say that matters. It's how you say it. Take the phrase, "Here's Johnny." When Ed McMahon used it to introduce Johnny Carson on The Tonight Show, the words were an enthusiastic greeting. But in The Shining, Jack Nicholson used the same two words to convey murderous intent. Now scientists are reporting in the journal Science that they have identified specialized brain cells that help us understand what a speaker really means. These cells do this by keeping track of changes in the pitch of the voice. "We found that there were groups of neurons that were specialized and dedicated just for the processing of pitch," says Dr. Eddie Chang, a professor of neurological surgery at the University of California, San Francisco. Chang says these neurons allow the brain to detect "the melody of speech," or intonation, while other specialized brain cells identify vowels and consonants. "Intonation is about how we say things," Chang says. "It's important because we can change the meaning, even — without actually changing the words themselves." For example, by raising the pitch of our voice at the end of a sentence, a statement can become a question. The identification of neurons that detect changes in pitch was largely the work of Claire Tang, a graduate student in Chang's lab and the Science paper's lead author. Tang and a team of researchers studied the brains of 10 epilepsy patients awaiting surgery. The patients had electrodes placed temporarily on the surface of their brains to help surgeons identify the source of their seizures. © 2017 npr

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23996 - Posted: 08.25.2017

Emily Siner The Grand Ole Opry in Nashville, Tenn., is country music's Holy Land. It's home to the weekly radio show that put country music on the national map in 1925. And it's where this summer, 30 people with Williams syndrome eagerly arrived backstage. Williams syndrome is a rare genetic disorder that can cause developmental disabilities. People with the condition are often known for their outgoing personalities and their profound love of music. Scientists are still trying to figure out where this musical affinity comes from and how it can help them overcome their challenges. That's why 12 years ago, researchers at Vanderbilt University set up a summer camp for people with Williams syndrome. For a week every summer, campers come to Nashville to immerse themselves in country music and participate in cutting-edge research. This isn't the only summer camp for people with Williams syndrome, but it is unique in its distinctive country flair. It's organized by the Vanderbilt Kennedy Center, whose faculty and staff focus on developmental disabilities. Eight years ago, the Academy of Country Music's philanthropic arm, ACM Lifting Lives, started funding the program. Campers spend the week meeting musicians and visiting recording studios, even writing an original song. This year, they teamed up with one of country's hottest stars, Dierks Bentley, on that. And they get a backstage tour of the Grand Ole Opry led by Clancey Hopper, who has Williams syndrome herself and attended the Nashville camp for eight years before applying for a job at the Opry. © 2017 npr

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 23904 - Posted: 08.01.2017

Ryan Kellman The modern Planet of the Apes reboot begins with a research chimpanzee being raised in an American home. It's a pretty plausible premise — that exact scenario has played out in the real world many times. On June 26, 1931, for example, Luella and Winthrop Kellogg pulled a baby female chimpanzee away from her mother and brought her to live in their home in Orange Park, Fla. The Kelloggs were comparative psychologists. Their plan was to raise the chimpanzee, Gua, alongside their own infant son, Donald, and see if she picked up human language. According to the book they wrote about the experiment, Luella wasn't initially on board: ... the enthusiasm of one of us met with so much resistance from the other that it appeared likely we could never come to an agreement upon whether or not we should even attempt such an undertaking. But attempt it they did. The Kelloggs performed a slew of tests on Donald and Gua. How good were their reflexes? How many words did they recognize? How did they react to the sound of a gunshot? What sound did each infant's skull make when tapped by a spoon? (Donald's produced "a dull thud" while Gua's made the sound of a "mallet upon a wooden croquet ball.") Chimpanzees develop faster than humans, so Gua outshone Donald when it came to most tasks. She even learned to respond to English phrases like "Don't touch!" and "Get down!" But unlike the apes in the movies, Gua never learned to speak. © 2017 npr

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23871 - Posted: 07.25.2017

By Virginia Morell Baby marmosets learn to make their calls by trying to repeat their parents’ vocalizations, scientists report today in Current Biology. Humans were thought to be the only primate with vocal learning—the ability to hear a sound and repeat it, considered essential for speech. When our infants babble, they make apparently random sounds, which adults respond to with words or other sounds; the more this happens, the faster the baby learns to talk. To find out whether marmosets (Callithrix jacchus) do something similar, scientists played recordings of parental calls during a daily 30-minute session to three sets of newborn marmoset twins until they were 2 months old (roughly equivalent to a 2-year-old human). Baby marmosets make noisy guttural cries; adults respond with soft “phee” contact calls. The baby that consistently heard its parents respond to its cries learned to make the adult “phee” sound much faster than did its twin, the team found. It’s not yet known if this ability is limited to the marmosets; if so, the difference may be due to the highly social lives of these animals, where, like us, multiple relatives help care for babies. © 2017 American Association for the Advancement of Science

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23662 - Posted: 05.26.2017

Gary Stix The brain did not evolve to read. It uses the neural muscle of pre-existing visual and language processing areas to enable us to take in works by Tolstoy and Tom Clancy. Reading, of course, begins in the first years of schooling, a time when these brain regions are still in development. What happens, though, when an adult starts learning after the age of 30? A study published May 24 in Science Advances turned up a few unexpected findings. In the report, a broad-ranging group of researchers—from universities in Germany, India and the Netherlands—taught reading to 21 women, all about 30 years of age, from near the city of Lucknow in northern India, comparing them to a placebo group of nine women. The majority of those who learned to read could not recognize a word of Hindi at the beginning of the study. After six months, the group had reached a first-grade proficiency level. When the researchers conducted brain scans—using functional magnetic resonance imaging—they were startled. Areas deep below the brain’s wrinkled surface, the cortex, had changed in the new learners. Their results surprised them because most reading-related brain activity was thought to involve the cortex. The new research may overturn this presumption and may pertain to child learners as well. After being filtered through the eyes, visual information may move first to evolutionarily ancient brain regions before being relayed to the visual and language areas of the cortex typically associated with reading. © 2017 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23661 - Posted: 05.25.2017

By FERRIS JABR Con Slobodchikoff and I approached the mountain meadow slowly, obliquely, softening our footfalls and conversing in whispers. It didn’t make much difference. Once we were within 50 feet of the clearing’s edge, the alarm sounded: short, shrill notes in rapid sequence, like rounds of sonic bullets. We had just trespassed on a prairie-dog colony. A North American analogue to Africa’s meerkat, the prairie dog is trepidation incarnate. It lives in subterranean societies of neighboring burrows, surfacing to forage during the day and rarely venturing more than a few hundred feet from the center of town. The moment it detects a hawk, coyote, human or any other threat, it cries out to alert the cohort and takes appropriate evasive action. A prairie dog’s voice has about as much acoustic appeal as a chew toy. French explorers called the rodents petits chiens because they thought they sounded like incessantly yippy versions of their pets back home. On this searing summer morning, Slobodchikoff had taken us to a tract of well-trodden wilderness on the grounds of the Museum of Northern Arizona in Flagstaff. Distressed squeaks flew from the grass, but the vegetation itself remained still; most of the prairie dogs had retreated underground. We continued along a dirt path bisecting the meadow, startling a prairie dog that was peering out of a burrow to our immediate right. It chirped at us a few times, then stared silently. “Hello,” Slobodchikoff said, stooping a bit. A stout bald man with a scraggly white beard and wine-dark lips, Slobodchikoff speaks with a gentler and more lilting voice than you might expect. “Hi, guy. What do you think? Are we worth calling about? Hmm?” Slobodchikoff, an emeritus professor of biology at Northern Arizona University, has been analyzing the sounds of prairie dogs for more than 30 years. Not long after he started, he learned that prairie dogs had distinct alarm calls for different predators. Around the same time, separate researchers found that a few other species had similar vocabularies of danger. What Slobodchikoff claimed to discover in the following decades, however, was extraordinary: Beyond identifying the type of predator, prairie-dog calls also specified its size, shape, color and speed; the animals could even combine the structural elements of their calls in novel ways to describe something they had never seen before. No scientist had ever put forward such a thorough guide to the native tongue of a wild species or discovered one so intricate. Prairie-dog communication is so complex, Slobodchikoff says — so expressive and rich in information — that it constitutes nothing less than language.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23606 - Posted: 05.12.2017

By Jane C. Hu New evidence suggests that the earliest traces of a language can stay with us into adulthood, even if we no longer speak or understand the language itself. And early exposure also seems to speed the process of relearning it later in life. In the new study, recently published in Royal Society Open Science, Dutch adults were trained to listen for sound contrasts in Korean. Some participants reported no prior exposure to the language; others were born in Korea and adopted by Dutch families before the age of six. All participants said they could not speak Korean, but the adoptees from Korea were better at distinguishing between the contrasts and more accurate in pronouncing Korean sounds. “Language learning can be retained subconsciously, even if conscious memories of the language do not exist,” says Jiyoun Choi, postdoctoral fellow at Hanyang University in Seoul and lead author of the study. And it appears that just a brief period of early exposure benefits learning efforts later; when Choi and her collaborators compared the results of people adopted before they were six months old with results of others adopted after 17 months, there were no differences in their hearing or speaking abilities. “It's exciting that these effects are seen even among adults who were exposed to Korean only up to six months of age—an age before which babbling emerges,” says Janet Werker, a professor of psychology at the University of British Columbia, who was not involved with the research. Remarkably, what we learn before we can even speak stays with us for decades. © 2017 Scientific American,

Related chapters from BN8e: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23598 - Posted: 05.10.2017

By Mo Costandi The world is an unpredictable place. But the brain has evolved a way to cope with the everyday uncertainties it encounters—it doesn’t present us with many of them, but instead resolves them into a realistic model of the world. The body’s central controller predicts every contingency, using its stored database of past experiences, to minimize the element of surprise. Take vision, for example: We rarely see objects in their entirety but our brains fill in the gaps to make a best guess at what we are seeing—and these predictions are usually an accurate reflection of reality. The same is true of hearing, and neuroscientists have now identified a predictive-text-like brain mechanism that helps us to anticipate what is coming next when we hear someone speaking. The findings, published this week in PLoS Biology, advance our understanding of how the brain processes speech. They also provide clues about how language evolved, and could even lead to new ways of diagnosing a variety of neurological conditions more accurately. The new study builds on earlier findings that monkeys and human infants can implicitly learn to recognize artificial grammar, or the rules by which sounds in a made-up language are related to one another. Neuroscientist Yukiko Kikuchi of Newcastle University in England and her colleagues played sequences of nonsense speech sounds to macaques and humans. Consistent with the earlier findings, Kikuchi and her team found both species quickly learned the rules of the language’s artificial grammar. After this initial learning period the researchers played more sound sequences—some of which violated the fabricated grammatical rules. They used microelectrodes to record responses from hundreds of individual neurons as well as from large populations of neurons that process sound information. In this way they were able to compare the responses to both types of sequences and determine the similarities between the two species’ reactions. © 2017 Scientific American,
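For readers curious what an "artificial grammar" of this kind looks like in practice, here is a minimal illustrative sketch in Python: a small finite-state grammar generates rule-following syllable sequences for an exposure phase, and a checker flags probe sequences that break the transition rules. The syllables and rules below are invented for illustration; they are not the stimuli or grammar used in Kikuchi's experiment.

```python
# Illustrative sketch only: a toy finite-state "artificial grammar" of the kind
# used in such experiments. The syllables and transition rules are invented here,
# not taken from the study described above.
import random

GRAMMAR = {
    "START": ["ba", "ko"],   # legal opening syllables
    "ba": ["di", "tu"],
    "ko": ["di"],
    "di": ["tu", "END"],     # "END" marks a legal stopping point
    "tu": ["END"],
}

def generate_sequence(rng=random):
    """Walk the transition table to produce one grammatical sequence."""
    seq, state = [], "START"
    while True:
        nxt = rng.choice(GRAMMAR[state])
        if nxt == "END":
            return seq
        seq.append(nxt)
        state = nxt

def is_grammatical(seq):
    """Return True if every transition in the sequence is allowed by the grammar."""
    state = "START"
    for syllable in seq:
        if syllable not in GRAMMAR.get(state, []):
            return False
        state = syllable
    return "END" in GRAMMAR.get(state, [])

if __name__ == "__main__":
    exposure_set = [generate_sequence() for _ in range(5)]  # "learning phase" stimuli
    print(exposure_set)
    print(is_grammatical(["ba", "di", "tu"]))   # True: follows the rules
    print(is_grammatical(["ko", "tu"]))         # False: "ko" may not be followed by "tu"
```

In the experiment itself, the interesting measurement is how recorded neurons respond differently to rule-following versus rule-violating sequences; the sketch only shows how such grammatical and ungrammatical test items can be defined.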

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 14: Attention and Consciousness
Link ID: 23558 - Posted: 05.01.2017

By Cormac McCarthy I call it the Kekulé Problem because among the myriad instances of scientific problems solved in the sleep of the inquirer Kekulé’s is probably the best known. He was trying to arrive at the configuration of the benzene molecule and not making much progress when he fell asleep in front of the fire and had his famous dream of a snake coiled in a hoop with its tail in its mouth—the ouroboros of mythology—and woke exclaiming to himself: “It’s a ring. The molecule is in the form of a ring.” Well. The problem of course—not Kekulé’s but ours—is that since the unconscious understands language perfectly well or it would not understand the problem in the first place, why doesnt it simply answer Kekulé’s question with something like: “Kekulé, it’s a bloody ring.” To which our scientist might respond: “Okay. Got it. Thanks.” Why the snake? That is, why is the unconscious so loathe to speak to us? Why the images, metaphors, pictures? Why the dreams, for that matter. A logical place to begin would be to define what the unconscious is in the first place. To do this we have to set aside the jargon of modern psychology and get back to biology. The unconscious is a biological system before it is anything else. To put it as pithily as possibly—and as accurately—the unconscious is a machine for operating an animal. All animals have an unconscious. If they didnt they would be plants. We may sometimes credit ours with duties it doesnt actually perform. Systems at a certain level of necessity may require their own mechanics of governance. Breathing, for instance, is not controlled by the unconscious but by the pons and the medulla oblongata, two systems located in the brainstem. Except of course in the case of cetaceans, who have to breathe when they come up for air. An autonomous system wouldnt work here. The first dolphin anesthetized on an operating table simply died. (How do they sleep? With half of their brain alternately.) But the duties of the unconscious are beyond counting. Everything from scratching an itch to solving math problems. © 2017 NautilusThink Inc,

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 14: Attention and Consciousness
Link ID: 23525 - Posted: 04.22.2017

by Laura Sanders The way babies learn to speak is nothing short of breathtaking. Their brains are learning the differences between sounds, rehearsing mouth movements and mastering vocabulary by putting words into meaningful context. It’s a lot to fit in between naps and diaper changes. A recent study shows just how durable this early language learning is. Dutch-speaking adults who were adopted from South Korea as preverbal babies held on to latent Korean language skills, researchers report online January 18 in Royal Society Open Science. In the first months of their lives, these people had already laid down the foundation for speaking Korean — a foundation that persisted for decades undetected, only revealing itself later in careful laboratory tests. Researchers tested how well people could learn to identify and speak tricky Korean sounds. “For Korean listeners, these sounds are easy to distinguish, but for second-language learners they are very difficult to master,” says study coauthor Mirjam Broersma, a psycholinguist of Radboud University in Nijmegen, Netherlands. For instance, a native Dutch speaker would listen to three distinct Korean sounds and hear only the same “t” sound. Broersma and her colleagues compared the language-absorbing skills of a group of 29 native Dutch speakers to 29 South Korea-born Dutch speakers. Half of the adoptees moved to the Netherlands when they were older than 17 months — ages at which the kids had probably begun talking. The other half were adopted as preverbal babies younger than 6 months. As a group, the South Korea-born adults outperformed the native-born Dutch adults, more easily learning both to recognize and speak the Korean sounds. |© Society for Science & the Public 2000 - 2017

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 23455 - Posted: 04.06.2017

By Matt Reynolds Google’s latest take on machine translation could make it easier for people to communicate with those speaking a different language, by translating speech directly into text in a language they understand. Machine translation of speech normally works by first converting it into text, then translating that into text in another language. But any error in speech recognition will lead to an error in transcription and a mistake in the translation. Researchers at Google Brain, the tech giant’s deep learning research arm, have turned to neural networks to cut out the middle step. By skipping transcription, the approach could potentially allow for more accurate and quicker translations. The team trained its system on hundreds of hours of Spanish audio with corresponding English text. In each case, it used several layers of neural networks – computer systems loosely modelled on the human brain – to match sections of the spoken Spanish with the written translation. To do this, it analysed the waveform of the Spanish audio to learn which parts seemed to correspond with which chunks of written English. When it was then asked to translate, each neural layer used this knowledge to manipulate the audio waveform until it was turned into the corresponding section of written English. “It learns to find patterns of correspondence between the waveforms in the source language and the written text,” says Dzmitry Bahdanau at the University of Montreal in Canada, who wasn’t involved with the work. © Copyright Reed Business Information Ltd.
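To make the idea of skipping the transcription step more concrete, here is a toy sequence-to-sequence sketch in Python (PyTorch) that maps audio feature frames directly to target-language characters. It is a generic, simplified illustration under stated assumptions (single-layer LSTMs, no attention, random tensors standing in for Spanish audio and English text), not the Google Brain system described in the article.

```python
# Toy sketch of "direct" speech-to-translation: audio features in, target-language
# characters out, with no intermediate transcript. A simplified, generic
# encoder-decoder for illustration -- not Google's actual model or training setup.
import torch
import torch.nn as nn

class Speech2Translation(nn.Module):
    def __init__(self, n_mels=80, hidden=256, vocab_size=60):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, batch_first=True)  # reads acoustic frames
        self.embed = nn.Embedding(vocab_size, hidden)              # target-language characters
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, audio_frames, prev_chars):
        # audio_frames: (batch, time, n_mels); prev_chars: (batch, length)
        _, state = self.encoder(audio_frames)           # summarize the audio
        dec_out, _ = self.decoder(self.embed(prev_chars), state)
        return self.out(dec_out)                        # logits over the next character

# Random tensors stand in for Spanish audio features and English character ids.
model = Speech2Translation()
audio = torch.randn(2, 300, 80)        # 2 utterances, 300 feature frames each
chars = torch.randint(0, 60, (2, 41))  # 2 translations, 41 character ids each
logits = model(audio, chars[:, :-1])   # predict each character from the ones before it
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 60), chars[:, 1:].reshape(-1))
```

Training such a model end to end on paired audio and translated text is what would let it "find patterns of correspondence" between the source speech and the written translation without ever producing a Spanish transcript.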

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 23450 - Posted: 04.05.2017

By Timothy Revell Who would you get to observe differences in how men, women and children interact? A robot in a fur-lined hat, of course. Experiments using a robotic head, called Furhat, aimed to uncover inequalities in people’s participation when working on a shared activity, and see if a robot could help redress the balance. They revealed that when a woman is paired in conversation with another woman, she speaks more than if paired with a man. And two men paired together speak less than two women. But this only holds for adults. “Surprisingly, we didn’t find this same pattern for boys and girls. Gender didn’t make much difference to how much children speak,” says Gabriel Skantze at the KTH Royal Institute of Technology in Stockholm, Sweden, who is also one of the robot’s creators. Furhat interacted with 540 visitors at the Swedish National Museum of Science and Technology over nine days. Two people at a time would sit at an interactive table with a touchscreen opposite the robot. They were asked to play a game that involved sorting a set of virtual picture cards, such as arranging images of historical inventions in chronological order. The people worked with the robot to try to solve the task. During this time, the robot’s sensors tracked how long each person spoke for. “This turned out to be a really nice opportunity to study the differences between men and women, and adults and children,” says Skantze. © Copyright Reed Business Information Ltd.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 8: Hormones and Sex
Link ID: 23367 - Posted: 03.17.2017

Jon Hamilton An orangutan named Rocky is helping scientists figure out when early humans might have uttered the first word. Rocky, who is 12 and lives at the Indianapolis Zoo, has shown that he can control his vocal cords much the way people do. He can learn new vocal sounds and even match the pitch of sounds made by a person. "Rocky, and probably other great apes, can do things with their vocal apparatus that, for decades, people have asserted was impossible," says Rob Shumaker, the zoo's director, who has studied orangutans for more than 30 years. Rocky's abilities suggest that our human ancestors could have begun speaking 10 million years ago, about the time humans and great apes diverged, Shumaker says. Until now, many scientists thought that speech required changes in the brain and vocal apparatus that evolved more recently, during the past 2 million years. The vocal abilities of orangutans might have gone undetected had it not been for Rocky, an ape with an unusual past and a rare relationship with people. Rocky was separated from his mother soon after he was born, and spent his early years raised largely by people, and working in show business. "He was certainly the most visible orangutan in entertainment at the time," says Shumaker. "TV commercials, things like that."

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23354 - Posted: 03.14.2017

Bruce Bower The social lives of macaques and baboons play out in what primatologist Julia Fischer calls “a magnificent opera.” When young Barbary macaques reach about 6 months, they fight nightly with their mothers. Young ones want the “maternal embrace” as they snooze; mothers want precious alone time. Getting pushed away and bitten by dear old mom doesn’t deter young macaques. But they’re on their own when a new brother or sister comes along. In Monkeytalk, Fischer describes how the monkey species she studies have evolved their own forms of intelligence and communication. Connections exist between monkey and human minds, but Fischer regards differences among primate species as particularly compelling. She connects lab studies of monkeys and apes to her observations of wild monkeys while mixing in offbeat personal anecdotes of life in the field. Fischer catapulted into a career chasing down monkeys in 1993. While still in college, she monitored captive Barbary macaques. That led to fieldwork among wild macaques in Morocco. In macaque communities, females hold central roles because young males move to other groups to mate. Members of closely related, cooperative female clans gain an edge in competing for status with male newcomers. Still, adult males typically outrank females. Fischer describes how the monkeys strategically alternate between attacking and forging alliances. After forging her own key scientific alliances, Fischer moved on to study baboons in Africa, where she entered the bureaucratic jungle. Obtaining papers for a car in Senegal, for instance, took Fischer several days. She first had to shop for a snazzy outfit to impress male paper-pushers, she says. |© Society for Science & the Public 2000 - 2017.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23315 - Posted: 03.06.2017

By Hanoch Ben-Yami Human intelligence, even in its most basic forms, is expressed in our language, and is also partly dependent on our linguistic capacity. Homer, Darwin and Einstein could obviously not have achieved what they did without language—but neither could a child in kindergarten. And this raises an important question about animal intelligence. Although we don’t expect a chimpanzee to write an epic or a dolphin to develop a scientific theory, it has frequently been asked whether these or other animals are close in intelligence to young children. If so, we must wonder whether animals can acquire a language. In the last half century, much effort has been put into trying to answer that question by teaching animals, primarily apes, a basic language. There have been some limited successes, with animals using signs to obtain things in which they were interested, for instance. But no animal has yet acquired the linguistic capability that children already have in their third year of life. “Why?” This is a question children start asking by the age of three at the latest. No animal has yet asked anything. “Why?” is a very important question: it shows that those asking it are aware they don’t know something they wish to know. Understanding the why-question is also necessary for the ability to justify our actions and thoughts. The fact that animals don’t ask “why?” shows they don’t aspire to knowledge and are incapable of justification. “No!” © 2017 Scientific American,

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23298 - Posted: 03.01.2017

Scientists who spent years listening to the communication calls of one of our closest ape relatives say their eavesdropping has shed light on the origin of human language. Dr Adriano Reis e Lameira from Durham University recorded and analysed almost 5,000 orangutan "kiss squeaks". He found that the animals combined these purse-lipped, "consonant-like" calls to convey different messages. This could be a glimpse of how our ancestors formed the earliest words. The findings are published in the journal Nature Human Behaviour. "Human language is extraordinarily advanced and complex - we can pretty much transmit any information we want into sound," said Dr Reis e Lameira. "So we tend to think that maybe words evolved from some rudimentary precursor to transmit more complex messages. "We were basically using the orangutan vocal behaviour as a time machine - back to a time when our ancestors were using what would become [those precursors] of consonants and vowels." The team studied kiss squeaks in particular because, like many consonants - the /t/, /p/, /k/ sounds - they depend on the action of the lips, tongue and jaw rather than the voice. "Kiss squeaks do not involve vocal fold action, so they're acoustically and articulatory consonant-like," explained Dr Reis e Lameira. In comparison to research into vowel-like primate calls, the scientists explained, the study of consonants in the evolution of language has been more difficult. But as Prof Serge Wich from Liverpool John Moores University, a lead author in the study, said, they are crucial "building blocks" in the evolution of language. "Most human languages have a lot more consonants than vowels," said Prof Wich. "And if we have more building blocks, we have more combinations." © 2017 BBC.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23204 - Posted: 02.09.2017

Erin Hare One chilly day in February 1877, a British cotton baron named Joseph Sidebotham heard what he thought was a canary warbling near his hotel window. He was vacationing with his family in France, and soon realized the tune wasn’t coming from outside. “The singing was in our salon,” he wrote of the incident in Nature. “The songster was a mouse.” The family fed the creature bits of biscuit, and it quickly became comfortable enough to climb onto the warm hearth at night and regale them with songs. It would sing for hours. Clearly, Sidebotham concluded, this was no ordinary mouse. More than a century later, however, scientists discovered he was wrong. It turns out that all mice chitter away to each other. Their language is usually just too high-pitched for human ears to detect. Today, mouse songs are no mere curiosity. Researchers are able to engineer mice to express genetic mutations associated with human speech disorders, and then measure the changes in the animals’ songs. They’re leveraging these beautifully complex vocalizations to uncover the mysteries of human speech. Anecdotal accounts of singing mice date back to 1843. In the journal The Zoologist, the British entomologist and botanist Edward Newman wrote that the song of a rare “murine Orpheus” sounds as “if the mouth of a canary were carefully closed, and the bird, in revenge, were to turn ventriloquist, and sing in the very centre of his stomach.” © 2017 by The Atlantic Monthly Group.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23162 - Posted: 01.28.2017