Links for Keyword: Language



Links 81 - 100 of 482

by Robert Krulwich Here's something you should know about yourself. Vowels control your brain. "I"s make you see things differently than "O"s. Here's how. Say these words out loud: Bean Mint Slim These "I" and "E" vowels are formed by putting your tongue forward in the mouth. That's why they're called "front" vowels. Now, say: Large Pod Or Ought With these words, your tongue depresses and folds back a bit. So "O", "A" and "U" are called "back" vowels. OK, here's the weird part. When comparing words across language groups, says Stanford linguistics professor Dan Jurafsky, a curious pattern shows up: words with front vowels ("I" and "E") tend to represent small, thin, light things. Back vowels ("O", "U", and some "A"s) show up in fat, heavy things. It's not always true, but it's a tendency you can see in the stressed vowels of words like little, teeny or itsy-bitsy (all front vowels) versus humongous or gargantuan (back vowels). Or the front vowel in Spanish chico (meaning small) versus the back vowel in gordo (meaning fat). Or French petit (front vowel) versus grand (back vowel). Copyright 2011 NPR
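The tendency Jurafsky describes can be made concrete with a toy script. The front/back vowel groupings below come from the article; the word list and the letter-counting heuristic are illustrative assumptions (real analyses would use stressed vowel phonemes, not spelling), so this is a minimal sketch, not a linguistic tool.

```python
# Toy sketch of the front/back vowel tendency described above.
FRONT = set("ie")   # tongue-forward vowels: "bean", "mint", "slim"
BACK = set("oua")   # tongue-back vowels: "large", "pod", "ought"

def vowel_profile(word):
    """Classify a word by whether its spelling contains more front
    or more back vowel letters (a rough proxy for the stressed vowel)."""
    front = sum(1 for ch in word.lower() if ch in FRONT)
    back = sum(1 for ch in word.lower() if ch in BACK)
    if front > back:
        return "front"
    if back > front:
        return "back"
    return "mixed"

for w in ["teeny", "itsy", "humongous", "gargantuan"]:
    print(w, "->", vowel_profile(w))
```

Run on the article's own examples, the "small" words come out front-vowelled and the "huge" words back-vowelled, matching the claimed pattern.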

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16126 - Posted: 12.10.2011

by Traci Watson Parrots have neither lips nor teeth, but that doesn't stop them from producing dead-on imitations of human speech. Now researchers have learned part of the reason: like humans, parrots use their tongues to form sounds. As they report today in The Journal of Experimental Biology, scientists took x-ray movies of monk parakeets, Myiopsitta monachus, South American natives that can be trained to speak but aren't star talkers. The parakeets lowered their tongues during loud, squeaky "contact calls" made when the birds can't see each other and during longer, trilling "greeting calls" made to show a social connection. The parakeets also moved their tongues up and down while chattering. No other type of bird is known to move its tongue to vocalize. Parrots use their mobile, muscular tongues to explore their environment and manipulate food. Those capable organs, just by coincidence, also help parrots utter greetings in words that even humans can understand. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16125 - Posted: 12.08.2011

by Catherine de Lange Chimpanzee brains may be hard-wired to evolve language even though they can't talk. That is the suggestion of a study which found chimps link sounds and levels of brightness, something akin to synaesthesia in people. Such an association could help explain how our early ancestors took the first vital step from ape-like grunts to a proper vocabulary. Synaesthetes make unusual connections between different senses – they might sense certain tastes when they hear music, or "see" numbers as colours. This is less unusual than you might think: "The synaesthetic experience is a continuum," explains Roi Cohen Kadosh of University College London. "Most people have it at an implicit level, and some people have a stronger connection." Now, Vera Ludwig from the Charité University of Medicine in Berlin, Germany, and colleagues have shown for the first time that chimpanzees also make cross-sensory associations, suggesting these associations evolved early on. The team repeatedly flashed either black or white squares for 200 milliseconds at a time on screens in front of six chimpanzees (Pan troglodytes) and 33 humans. The subjects had to indicate whether the square was black or white by touching a button of the right colour. A high or low-pitched sound was randomly played in the background during each test. Chimps and humans were better at identifying white squares when they heard a high-pitched sound, and more likely to correctly identify dark squares when played a low-pitched sound. But performance was poor when the sounds were swapped: humans were slower to identify a white square paired with a low-pitched noise, or a black square with a high-pitched noise, and the chimps' responses became significantly less accurate. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16114 - Posted: 12.06.2011

By Charles Q. Choi Ravens use their beaks and wings much as humans rely on their hands to make gestures, such as pointing to an object, scientists now find. This is the first time researchers have seen gestures used in this way in the wild by animals other than primates. From the age of 9 to 12 months, human infants often use gestures to direct the attention of adults to objects, or to hold up items so that others can take them. These gestures, produced before children speak their first words, are seen as milestones in the development of human speech. Dogs and other animals are known to point out items using gestures, but humans trained these animals, and scientists had suggested the natural development of these gestures was normally confined only to primates, said researcher Simone Pika, a biologist at the Max Planck Institute for Ornithology in Seewiesen, Germany. Even then, comparable gestures are rarely seen in the wild in our closest living relatives, the great apes—for instance, chimpanzees in the Kibale National Park in Uganda employ so-called directed scratches to indicate distinct spots on their bodies they want groomed. Still, ravens and their relatives such as crows and magpies have been found to be remarkably intelligent over the years, surpassing most other birds in terms of smarts and even rivaling great apes on some tests. © 2011 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16098 - Posted: 12.01.2011

by Marion Long and Valerie Ross For centuries experts held that every language is unique. Then one day in 1956, a young linguistics professor gave a legendary presentation at the Symposium on Information Theory at MIT. He argued that every intelligible sentence conforms not only to the rules of its particular language but to a universal grammar that encompasses all languages. And rather than absorbing language from the environment and learning to communicate by imitation, children are born with the innate capacity to master language, a power imbued in our species by evolution itself. Almost overnight, linguists’ thinking began to shift. Avram Noam Chomsky was born in Philadelphia on December 7, 1928, to William Chomsky, a Hebrew scholar, and Elsie Simonofsky Chomsky, also a scholar and an author of children’s books. While still a youngster, Noam read his father’s manuscript on medieval Hebrew grammar, setting the stage for his work to come. By 1955 he was teaching linguistics at MIT, where he formulated his groundbreaking theories. Today Chomsky continues to challenge the way we perceive ourselves. Language is “the core of our being,” he says. “We are always immersed in it. It takes a strong act of will to try not to talk to yourself when you’re walking down the street, because it’s just always going on.” Chomsky also bucked against scientific tradition by becoming active in politics. He was an outspoken critic of American involvement in Vietnam and helped organize the famous 1967 protest march on the Pentagon. When the leaders of the march were arrested, he found himself sharing a cell with Norman Mailer, who described him in his book Armies of the Night as “a slim, sharp-featured man with an ascetic expression, and an air of gentle but absolute moral integrity.” © 2011, Kalmbach Publishing Co.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16094 - Posted: 12.01.2011

Ewen Callaway A mutation that appeared more than half a million years ago may have helped humans learn the complex muscle movements that are critical to speech and language. The claim stems from the finding that mice genetically engineered to produce the human form of the gene, called FOXP2, learn more quickly than their normal counterparts. The work was presented by Christiane Schreiweis, a neuroscientist at the Max Planck Institute (MPI) for Evolutionary Anthropology in Leipzig, Germany, at the Society for Neuroscience meeting in Washington DC this week. Scientists discovered FOXP2 in the 1990s by studying a British family known as 'KE' in which three generations suffered from severe speech and language problems. Those with language problems were found to share an inherited mutation that inactivates one copy of FOXP2. Most vertebrates have nearly identical versions of the gene, which is involved in the development of brain circuits important for the learning of movement. The human version of FOXP2, the protein encoded by the gene, differs from that of chimpanzees at two amino acids, hinting that changes to the human form may have had a hand in the evolution of language. A team led by Schreiweis's colleague Svante Pääbo discovered that the gene is identical in modern humans (Homo sapiens) and Neanderthals (Homo neanderthalensis), suggesting that the mutation appeared before these two human lineages diverged around 500,000 years ago. © 2011 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 16058 - Posted: 11.19.2011

by Nora Schultz Actions speak louder than words. Baby chimps, bonobos, gorillas and orang-utans – our four closest living relatives – quickly learn to use visual gestures to get their message across, providing the latest evidence that hand waving may have been a vital first step in the development of human language. After a long search for the origins of language in animal vocalisations, some evolutionary biologists have begun to change tack. The emerging "gesture theory" of language evolution has it that our ancestors' linguistic abilities may have begun with their hands rather than their vocal cords. Katja Liebal and colleagues at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, have found new evidence for the theory by studying how communication develops in our closest living relatives. They discovered that all four great apes – chimps, bonobos, gorillas and orang-utans – develop a complex repertoire of gestures during the first 20 months of life. Those gestures included the tactile pokes and nudges that can be expected to capture another's attention in any situation, but they also included visual gestures such as extending the arms towards another ape or head shaking. To be effective communication tools, these visual gestures require that a young ape be aware that another individual is paying attention before using them. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16016 - Posted: 11.11.2011

By RITCHIE S. KING As plain-tailed wrens dart through Chusquea bamboo in the Andes, they can be heard singing a kind of song that no other bird is known to sing: a cooperative duet. New research shows that male-female pairs take turns producing notes, at a combined rate of three to six per second, to create what sounds like a single bird’s song. Each member of the duo reacts to what the other one does, adjusting the timing and pitch as needed to maintain the melody the two are trying to play together. The duet is like humans dancing, said Eric Fortune, a neuroscientist at Johns Hopkins University and an author of the study, which appeared in the journal Science. The cues between the birds are “continuous and subtle,” and brain scans show that each bird learns the entire duet — as a pair of ballroom dancers learns choreography — instead of only memorizing its individual part. In the world of plain-tailed wrens, it appears that females always lead, singing a simple backbone melody that the males fill in with something more variable, like a guitar solo. The research team suspects that a female engages in cooperative singing to put a male’s chirping prowess to the test and thereby determine his suitability as a mate. While alone, a female wren practices her section of a duet at full volume. But males make more mistakes during cooperative singing, so they tweet much more timidly when they rehearse their part. © 2011 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 16007 - Posted: 11.08.2011

by Jennifer Barone Language seems to set humans apart from other animals, but scientists cannot just hand monkeys and birds an interspecies SAT to determine which linguistic abilities are singularly those of Homo sapiens and which we share with other animals. In August neuroscientists Kentaro Abe and Dai Watanabe of Kyoto University announced that they had devised the next-best thing, a systematic test of birds' grammatical prowess. The results suggest that Bengalese finches have strict rules of syntax: The order of their chirps matters. "It's the first experiment to show that any animal has perceived the especially complex patterns that supposedly make human language unique," says Timothy Gentner, who studies animal cognition and communication at the University of California, San Diego, and was not involved in the study. Finches cry out whenever they hear a new tune, so Abe and Watanabe started by having individual birds listen to an unfamiliar finch's song. At first the listeners called out in reply, but after 200 playbacks, their responses died down. Then the researchers created three remixes by changing the order of the song's component syllables. The birds reacted indifferently to two of the revised tunes; apparently the gist of the message remained the same. But one remix elicited a burst of calls, as if the birds had detected something wrong. Abe and Watanabe concluded that the birds were reacting like grumpy middle-school English teachers to a violation of their rules of syntax. © 2011, Kalmbach Publishing Co.
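The logic of the remix test can be sketched in a few lines of code. Everything here is hypothetical for illustration: the syllables, the ordering rule, and the response model are invented stand-ins, not Abe and Watanabe's actual stimuli or data.

```python
import random

def follows_rule(song):
    """Toy syntax rule: syllable 'a' must come before syllable 'c'.
    A habituated listener stays quiet for songs that obey the rule
    and bursts into calls when the rule is violated."""
    return song.index("a") < song.index("c")

familiar = ["a", "b", "c", "d"]   # the song the birds habituated to
assert follows_rule(familiar)

# Create remixes by shuffling the familiar song's syllables, then
# predict the listener's reaction to each.
random.seed(0)
for _ in range(5):
    remix = random.sample(familiar, k=len(familiar))
    reaction = "indifferent" if follows_rule(remix) else "burst of calls"
    print("".join(remix), "->", reaction)
```

The point of the sketch is the asymmetry: most reorderings preserve the rule and draw no response, while a rule-violating remix triggers calls, which is the pattern the researchers interpreted as syntactic sensitivity.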

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 16003 - Posted: 11.08.2011

By Bruce Bower Talk is cheap, but scientific value lurks in all that gab. Words cascading out of countless flapping gums contain secrets about the evolution of language that a new breed of researchers plan to expose with statistical tools borrowed from genetics. For more than a century, traditional linguists have spent much of their time doing fieldwork — listening to native speakers to pick up on words with similar sounds, such as mother in English and madre in Spanish, and comparing how various tongues arrange subjects, verbs, objects and other grammatical elements into sentences. Such information has allowed investigators to group related languages into families and reconstruct ancestral forms of talk. But linguists generally agree that their methods can revive languages from no more than 10,000 years ago. Borrowing of words and grammar by speakers of neighboring languages, the researchers say, erases evolutionary signals from before that time. Now a small contingent of researchers, many of them evolutionary biologists who typically have nothing to do with linguistics, are looking at language from in front of their computers, using mathematical techniques imported from the study of DNA to wring scenarios of language evolution out of huge amounts of comparative speech data. These data analyzers assume that words and other language units change systematically as they are passed from one generation to the next, much the way genes do. Charles Darwin similarly argued in 1871 that languages, like biological species, have evolved into a series of related forms. © Society for Science & the Public 2000 - 2011

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 15990 - Posted: 11.05.2011

Duet-singing birds in South America's Andes mountains are helping scientists understand how the brain co-ordinates itself to co-operate with other individuals. The plain-tailed wren, a small brown and grey bird that lives in bamboo thickets in cloud forests mainly in Ecuador, has impressed scientists ever since they realized that the bird's rapid-fire tweeting song is actually a duet, where the male and female alternate notes. "It was absolutely amazing to us the first time we heard it," said Eric Fortune, a behavioural neuroscientist at Johns Hopkins University in Baltimore, Md., who wanted to uncover how the birds managed the feat. Fortune and his colleagues flew to Ecuador and trekked more than two hours to the Antisana Volcano to find the plain-tailed wrens and record their songs. "They are extremely loud singers," he told CBC's Quirks & Quarks in an interview set to air Saturday. "If you're near them, it's almost an unpleasant experience." Eric Fortune sits in front of a system designed to record signals from the brains of plain-tailed wrens at the Yanayacu Biological Research Station and Center for Creative Studies in Ecuador as it was being built. Eric Fortune sits in front of a system designed to record signals from the brains of plain-tailed wrens at the Yanayacu Biological Research Station and Center for Creative Studies in Ecuador as it was being built. Courtesy of Eric Fortune and Melissa Coleman Fortune played a recording of the duet, which initially sounds like a single bird twittering. Then he played the songs of the female alone, a sparser series of notes that leaves gaps for her male partner to insert his part. © CBC 2011

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 15987 - Posted: 11.05.2011

By Jennifer Viegas A 25-year-old chimpanzee named "Panzee" has just demonstrated that speech perception is not a uniquely human trait. Well-educated Panzee understands more than 130 English language words and even recognizes words in sine-wave form, a type of synthetic speech that reduces language to three whistle-like tones. This shows that she isn't just responding to a particular person's voice or emotions, but instead she is processing and perceiving speech as humans do. "The results suggest that the common ancestor of chimpanzees and humans may have had the capability to perceive speech-like sounds before the evolution of speech, and that early humans were taking advantage of this latent ability when speech did eventually emerge," said Lisa Heimbauer who presented a talk today on the chimp at the 162nd Meeting of the Acoustical Society of America in San Diego. Heimbauer, a doctoral candidate and researcher at Georgia State University's Language Research Center, and colleagues Michael Owren and Michael Beran tested Panzee on her ability to understand words communicated via sine-wave speech, which replicates the estimated frequency and amplitude patterns of natural utterances. "Tickle," "M&M," "lemonade," and "sparkler" were just a few of the test words. Even when the words were stripped of the acoustic constituents of natural speech, Panzee knew what they meant, correctly matching them to corresponding photos. The findings refute what is known as the "Speech is Special" theory. © 2011 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 15970 - Posted: 11.01.2011

By Danielle Perszyk Are humans the only species with enough smarts to craft a language? Most of us believe that we are. Although many animals have their own form of communication, none has the depth or versatility heard in human speech. We are able to express almost anything on our mind by uttering a few sounds in a particular order. Human language has a flexibility and complexity that seems to be universally shared across cultures and, in turn, contributes to the variation and richness we find among human cultures. But are the rules of grammar unique to human language? Perhaps not, according to a recent study, which showed that songbirds may also communicate using a sophisticated grammar—a feature absent in even our closest relatives, the nonhuman primates. Kentaro Abe and Dai Watanabe of Kyoto University performed a series of experiments to determine whether Bengalese finches expect the notes of their tunes to follow a certain order. To test this possibility, Abe and Watanabe took advantage of a behavioral response called habituation, where animals zone-out when exposed to the same stimulus over and over again. In each experiment, the birds were presented with the same songs until they became familiarized with the tune. The researchers then created novel songs by shuffling the notes around. But not every new song caught the birds’ attention; rather, the finches increased response calls only to songs with notes arranged in a particular order, suggesting that the birds used common rules when forming the syntax of that song. When the researchers created novel songs with even more complicated artificial grammar—for example, songs that mimicked a specific feature found in human (Japanese) language—the birds still only responded to songs that followed the rules. © 2011 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 15965 - Posted: 10.29.2011

by Carl Zimmer It is a shame that grammar leaves no fossils behind. Few things have been more important to our evolutionary history than language. Because our ancestors could talk to each other, they became a powerfully cooperative species. In modern society we are so submerged in words—spoken, written, signed, and texted—that they seem inseparable from human identity. And yet we cannot excavate some fossil from an Ethiopian hillside, point to a bone, and declare, “This is where language began.” Lacking hard evidence, scholars of the past speculated broadly about the origin of language. Some claimed that it started out as cries of pain, which gradually crystallized into distinct words. Others traced it back to music, to the imitation of animal grunts, or to birdsong. In 1866 the Linguistic Society of Paris got so exasperated by these unmoored musings that it banned all communication on the origin of language. Its English counterpart felt the same way. In 1873 the president of the Philological Society of London declared that linguists “shall do more by tracing the historical growth of one single work-a-day tongue, than by filling wastepaper baskets with reams of paper covered with speculations on the origin of all tongues.” A century passed before linguists had a serious change of heart. The change came as they began to look at the deep structure of language itself. MIT linguist Noam Chomsky asserted that the way children acquire language is so effortless that it must have a biological foundation. Building on this idea, some of his colleagues argued that language is an adaptation shaped by natural selection, just like eyes and wings. If so, it should be possible to find clues about how human language evolved from grunts or gestures by observing the communication of our close primate relatives. © 2011, Kalmbach Publishing Co.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 15920 - Posted: 10.18.2011

By PERRI KLASS, M.D. Once, experts feared that young children exposed to more than one language would suffer “language confusion,” which might delay their speech development. Today, parents often are urged to capitalize on that early knack for acquiring language. Upscale schools market themselves with promises of deep immersion in Spanish — or Mandarin — for everyone, starting in kindergarten or even before. Yet while many parents recognize the utility of a second language, families bringing up children in non-English-speaking households, or trying to juggle two languages at home, are often desperate for information. And while the study of bilingual development has refuted those early fears about confusion and delay, there aren’t many research-based guidelines about the very early years and the best strategies for producing a happily bilingual child. But there is more and more research to draw on, reaching back to infancy and even to the womb. As the relatively new science of bilingualism pushes back to the origins of speech and language, scientists are teasing out the earliest differences between brains exposed to one language and brains exposed to two. Researchers have found ways to analyze infant behavior — where babies turn their gazes, how long they pay attention — to help figure out infant perceptions of sounds and words and languages, of what is familiar and what is unfamiliar to them. Now, analyzing the neurologic activity of babies’ brains as they hear language, and then comparing those early responses with the words that those children learn as they get older, is helping explain not just how the early brain listens to language, but how listening shapes the early brain. © 2011 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 15902 - Posted: 10.11.2011

Catherine de Lange, contributor As curator Alex Julyan stepped up to the podium to introduce the speakers for the first talk of the evening, she looked composed and at ease. But as she opened her mouth to make the introductions, some of the words came out slightly muddled. It was a tiny mistake, one that most people in the audience are unlikely to have noticed. To those who did, however, her words had betrayed her ever so slightly, suggesting she was perhaps more nervous than she seemed. However composed she looked on the outside, her mouth told a different story. Poignantly, this was the very subject of the talk that Julyan was introducing. "When we speak, we tend to think about the words we say telling people what we want them to know," said Sophie Scott, a neuroscientist from University College London who studies the perception of speech, as she began her talk, "but, of course, when we speak, we unavoidably and continuously tell people about who we are, the origins of where we come from, our aspirations, who we would like to sound like, our emotions, our age and our health." Scott was speaking at a special one-off event put on by the Wellcome Collection in London. She kicked off the celebration of our oral orifice, titled Get Mouthy, with the help of actor Julian Rhind Tutt, who she has been experimenting on using an MRI scanner. From these scans, which she displayed for the audience, one of the most obvious things we could see is that Julian has a big head. That, Scott told us, is no coincidence. Most actors and singers have big heads - and that's not a reference to ego size. She explained that a larger noggin means performers have more room in their mouths to generate accurate sounds. "Amy Winehouse had a massive face," she pointed out. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 14: Attention and Consciousness
Link ID: 15877 - Posted: 10.04.2011

by Michael Marshall IT SMELLS, it buzzes, it even dances like a honeybee. In a field in Germany, RoboBee is making its first attempts at speaking to the insects in their own language. Bees are famous for communicating using the waggle dance - walking forward while rapidly vibrating their rear. In the 1940s, biologist Karl von Frisch realised that the length and angle of the dance correlated with the distance and direction of the food source the bee had just visited. Since then, most apiologists have held that dancers tell their fellows where to find foodMovie Camera (New Scientist, 19 September 2009, p 40)Movie Camera. Now Tim Landgraf of the Free University of Berlin in Germany and colleagues have programmed their foam RoboBee, to mimic the dance. RoboBee is stuck to the end of a rod attached to a computer, which determines its "dance" moves. The rod is also connected to a belt which makes it vibrate. Like a real bee, it can spin, buzz its wings, carry scents and droplets of sugar water, and give off heat. To program RoboBee, Landgraf took high-speed video of 108 real waggle dances, and put the footage through software that analysed the dances in detail (PLoS One, DOI: 10.1371/journal.pone.0021354). The outcome is "the most detailed description so far of the waggle dance", says Christoph Grüter of the University of Sussex in Brighton, UK, who was not involved in the study. What do real bees think of RoboBee's skills? In a field outside Berlin, Landgraf trained groups of honeybees to use a feeder, which he then closed. The bees stopped foraging and stayed in their hives. There they met RoboBee, which had been programmed with Landgraf's best guess at a waggle dance pointing to another feeder, which the bees had never visited. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 15698 - Posted: 08.20.2011

by David Robson Nearly 100 years of linguistics research has been based on the assumption that words are just collections of sounds - an agreed acoustic representation that has little to do with their actual meaning. There should be nothing in nonsense words such as "Humpty Dumpty" that would give away the character's egg-like figure, any more than someone with no knowledge of English could be expected to infer that the word "rose" represents a sweet-smelling flower. Yet a spate of recent studies challenge this idea. They suggest that we seem instinctively to link certain sounds with particular sensory perceptions. Some words really do evoke Humpty's "handsome" rotundity. Others might bring to mind a spiky appearance, a bitter taste, or a sense of swift movement. And when you know where to look, these patterns crop up surprisingly often, allowing a monoglot English speaker to understand more Swahili or Japanese than you might imagine (see "Which sounds bigger?" at the bottom of this article). These cross-sensory connections may even open a window onto the first words ever uttered by our ancestors, giving us a glimpse of the earliest language and how it emerged. More than 2000 years before Carroll suggested words might have some inherent meaning, Plato recorded a dialogue between two of Socrates's friends, Cratylus and Hermogenes. Hermogenes argued that language is arbitrary and the words people use are purely a matter of convention. Cratylus, like Humpty Dumpty, believed words inherently reflect their meaning - although he seems to have found his insights into language disillusioning: Aristotle says Cratylus eventually became so disenchanted that he gave up speaking entirely. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 15577 - Posted: 07.19.2011

by Virginia Morell Parrots are talkative birds, with impressive, humanlike linguistic abilities. Also like us, male and female parrots are lifelong vocal learners. Because of these similarities, researchers have long wondered whether parrot chicks learn their first calls or if these sounds are innate. For the first time, scientists have succeeded in studying the calls of parrot chicks in the wild. They find that the birds do learn their first calls—and from their parents, much as human infants do. The findings suggest that parrots may be better than songbirds as models for studying how humans acquire speech. Like other parrot species, green-rumped parrotlets (Forpus passerinus), a parakeet-sized species that lives in South America, make what scientists term a signature contact call, a sound that functions much like a name. The birds use it to find and recognize mates and identify their chicks. Other studies have shown that wild parrots often imitate one another's contact calls—rather like someone calling out the name of a friend. "One study of another species of captive parrotlets suggested that individual birds are assigned names by their family members," says ornithologist Karl Berg of Cornell University. But he wanted to know whether the birds do this in the wild, too, and why. Captive studies cannot, by themselves, explain what function such behaviors serve in nature or how they evolved. But studying parrots in the wild is "extremely difficult" because they generally nest in hollows high in trees, says behavioral ecologist Timothy Wright of New Mexico State University in Las Cruces, an expert on wild parrots. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 15550 - Posted: 07.14.2011

By Bruce Bower Panzee doesn’t talk, but she knows a word when she hears one — even if it’s emitted by a computer with a synthetic speech impediment. That’s not too shabby for a chimpanzee. Raised to recognize 128 spoken words by pointing to corresponding symbols, Panzee perceives acoustically distorted words about as well as people do, say psychology graduate student Lisa Heimbauer of Georgia State University in Atlanta and her colleagues. Panzee thus challenges the argument that only people can recognize highly distorted words, thanks to brains tuned to speech sounds and steeped in chatter, the scientists contend in a paper published online June 30 in Current Biology. “Auditory processing abilities that already existed in a common ancestor of chimpanzees and humans may have been sufficient to perceive speech,” Heimbauer says. Panzee’s immersion in talk began in infancy and fueled her word-detection skills, much as occurs in people, Heimbauer suggests. Originally, the researchers thought that Panzee would need training to grasp the word task, since she had never heard artificially distorted words. But after hearing only one such word, the chimp identified the next four synthetically distorted words before making a mistake. “What were supposed to be training sessions became test sessions,” Heimbauer says. © Society for Science & the Public 2000 - 2011

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 15542 - Posted: 07.09.2011