Links for Keyword: Language

Links 1 - 20 of 536

By Michael Balter Have you ever wondered why you say “The boy is playing Frisbee with his dog” instead of “The boy dog his is Frisbee playing with”? You may be trying to give your brain a break, according to a new study. An analysis of 37 widely varying tongues finds that, despite the apparent great differences among them, they share what might be a universal feature of human language: All of them have evolved to make communication as efficient as possible. Earth is a veritable Tower of Babel: Up to 7000 languages are still spoken across the globe, belonging to roughly 150 language families. And they vary widely in the way they put sentences together. For example, the three major building blocks of a sentence, subject (S), verb (V), and object (O), can be arranged in different orders. English and French are SVO languages, whereas German and Japanese are SOV languages; a much smaller number, such as Arabic and Hebrew, use the VSO order. (No well-documented languages start sentences or clauses with the object, although some linguists have jokingly suggested that Klingon might do so.) Yet despite these different ways of structuring sentences, previous studies of a small number of languages have shown that they tend to limit the distance between words that depend on each other for their meaning. Such “dependency” is key if sentences are to make sense. For example, in the sentence “Jane threw out the trash,” the word “Jane” is dependent on “threw”—it modifies the verb by telling us who was doing the throwing, just as we need “trash” to know what was thrown, and “out” to know where the trash went. Although “threw” and “trash” are three words away from each other, we can still understand the sentence easily. © 2015 American Association for the Advancement of Science.
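
To make the idea of dependency distance concrete, here is a minimal sketch (my own illustration, not code from the study) that sums how far apart linked words sit in a sentence; the head/dependent pairs are hand-assigned for the article's "Jane threw out the trash" example.

```python
# Minimal sketch of "dependency distance": for each head-dependent pair,
# count how many word positions separate the two, then sum. The parse
# below is hand-assigned for illustration, not taken from the study.

def total_dependency_distance(words, dependencies):
    """Sum of positional distances between linked word pairs."""
    position = {word: i for i, word in enumerate(words)}
    return sum(abs(position[head] - position[dep]) for head, dep in dependencies)

words = ["Jane", "threw", "out", "the", "trash"]
deps = [("threw", "Jane"),   # who did the throwing
        ("threw", "out"),    # where the trash went
        ("threw", "trash"),  # what was thrown
        ("trash", "the")]    # determiner for "trash"

print(total_dependency_distance(words, deps))  # 1 + 1 + 3 + 1 = 6

# A scrambled ordering pushes linked words apart, so the total grows:
scrambled = ["Jane", "the", "trash", "out", "threw"]
print(total_dependency_distance(scrambled, deps))  # 4 + 1 + 2 + 1 = 8
```

Orderings that keep this total low keep related words close together, which is the kind of efficiency the study argues languages have evolved toward.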

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 21263 - Posted: 08.04.2015

By Ariana Eunjung Cha Think you have your hands full making sure your baby is fed and clean and gets enough sleep? Here's another thing for the list: developing your child's social skills by the way you talk. People used to think that social skills were something kids were born with, not taught. But a growing body of research shows that the environment a child grows up in as an infant and toddler can have a major impact on how they interact with others as they get older. And it turns out that a key factor may be the type of language they hear around them, even at an age when all they can do is babble. Psychologists at the University of York observed 40 mothers and their babies at 10, 12, 16 and 20 months and logged the kind of language mothers used during play. They were especially interested in "mind-related comments," which include inferences about what someone is thinking when a behavior or action happens. Elizabeth Kirk, a lecturer at the university who is the lead author of the study, published in the British Journal of Developmental Psychology on Monday, gave this as an example: If an infant has difficulty opening a door on a toy, the parent might comment that the child appears "frustrated." Then researchers revisited the children when they were 5 or 6 years of age and assessed their socio-cognitive ability. The test involved reading a story and having the children answer comprehension questions that show whether they understood the social concept—persuasion, joke, misunderstanding, white lies, lies, and so forth—that was represented.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 21239 - Posted: 07.30.2015

by Sarah Zielinski It may not be polite to eavesdrop, but sometimes, listening in on others’ conversations can provide valuable information. And in this way, humans are like most other species in the animal world, where eavesdropping is a common way of gathering information about potential dangers. Because alarm calls can vary from species to species, scientists have assumed that eavesdropping on these calls of “danger!” requires some kind of learning. Evidence of that learning has been scant, though. The only study to look at this topic tested five golden-mantled ground squirrels and found that the animals may have learned to recognize previously unknown alarm calls. But the experiment couldn’t rule out other explanations for the squirrels’ behavior, such as that the animals had simply become more wary in general. So Robert Magrath and colleagues at Australian National University in Canberra turned to small Australian birds called superb fairy-wrens. In the wild, these birds will flee to safety when they hear unfamiliar sounds that sound like their own alarm calls, but not when they hear alarm calls that sound different from their own. There’s an exception, though: They’ll take to cover in response to the alarm calls of other species that are common where they live. That suggests the birds learn to recognize those calls. In the lab, the team played the alarm call from a thornbill or a synthetic alarm call for 10 fairy-wrens. The birds didn’t respond to the noise. Then the birds went through two days of training in which the alarm call was played as a mock predator glided overhead. Another group of birds heard the calls but there was no pretend predator. © Society for Science & the Public 2000 - 2015

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 21180 - Posted: 07.18.2015

By Lauran Neergaard New research suggests it may be possible to predict which preschoolers will struggle to read — and it has to do with how the brain deciphers speech when it's noisy. Scientists are looking for ways to tell, as young as possible, when children are at risk for later learning difficulties so they can get early interventions. There are some simple pre-reading assessments for preschoolers. But Northwestern University researchers went further and analyzed brain waves of children as young as three. How well youngsters' brains recognize specific sounds — consonants — amid background noise can help identify who is more likely to have trouble with reading development, the team reported Tuesday in the journal PLOS Biology. If the approach pans out, it may provide "a biological looking glass," said study senior author Nina Kraus, director of Northwestern's Auditory Neuroscience Laboratory. "If you know you have a three-year-old at risk, you can as soon as possible begin to enrich their life in sound so that you don't lose those crucial early developmental years." Connecting sound to meaning is a key foundation for reading. For example, preschoolers who can match sounds to letters earlier go on to read more easily. Auditory processing is part of that pre-reading development: If your brain is slower to distinguish a "D" from a "B" sound, for example, then recognizing words and piecing together sentences could be affected, too. What does noise have to do with it? It stresses the system, as the brain has to tune out competing sounds to selectively focus, in just fractions of milliseconds. And consonants are more vulnerable to noise than vowels, which tend to be louder and longer, Kraus explained. ©2015 CBC/Radio-Canada

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 21173 - Posted: 07.15.2015

Henry Nicholls Andy Russell had entered the lecture hall late and stood at the back, listening to the close of a talk by Marta Manser, an evolutionary biologist at the University of Zurich who works on animal communication. Manser was explaining some basic concepts in linguistics to her audience, how humans use meaningless sounds or “phonemes” to generate a vast dictionary of meaningful words. In English, for instance, just 40 different phonemes can be resampled into a rich vocabulary of some 200,000 words. But, explained Manser, this linguistic trick of reorganising the meaningless to create new meaning had not been demonstrated in any non-human animal. This was back in 2012. Russell’s “Holy shit, man” excitement was because he was pretty sure he had evidence for phoneme structuring in the chestnut-crowned babbler, a bird he’s been studying in the semi-arid deserts of south-east Australia for almost a decade. After the talk, Russell (a behavioural ecologist at the University of Exeter) travelled to Zurich to present his evidence to Manser’s colleague Simon Townsend, whose research explores the links between animal communication systems and human language. The fruits of their collaboration are published today in PLoS Biology. One of Russell’s students, Jodie Crane, had been recording the calls of the chestnut-crowned babbler for her PhD. The PLoS Biology paper focuses on two of these calls, which appear to be made up of two identical elements, just arranged in a different way. © 2015 Guardian News and Media Limited

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 21110 - Posted: 06.30.2015

Emma Bowman In a small, sparse makeshift lab, Melissa Malzkuhn practices her range of motion in a black, full-body unitard dotted with light-reflecting nodes. She's strapped on a motion capture, or mocap, suit. Infrared cameras that line the room will capture her movement and translate it into a 3-D character, or avatar, on a computer. But she's not making a Disney animated film. Three-dimensional motion capture has developed quickly in the last few years, most notably as a Hollywood production tool for computer animation in films like Planet of the Apes and Avatar. Behind the scenes though, leaders in the deaf community are taking on the technology to create and improve bilingual learning tools in American Sign Language. Malzkuhn has suited up to record a simple nursery rhyme. Being deaf herself, she spoke with NPR through an interpreter. "I know in English there's just a wealth of nursery rhymes available, but we really don't see as much in ASL," she says. "So we're gonna be doing some original work here in developing nursery rhymes." That's because sound-based rhymes don't cross over well into the visual language of ASL. Malzkuhn heads the Motion Light Lab, or ML2. It's the newest hub of the National Science Foundation Science of Learning Center, Visual Language and Visual Learning (VL2) at Gallaudet University, the premier school for deaf and hard of hearing students. © 2015 NPR

Related chapters from BP7e: Chapter 1: Biological Psychology: Scope and Outlook; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 1: An Introduction to Brain and Behavior; Chapter 15: Language and Our Divided Brain
Link ID: 21107 - Posted: 06.29.2015

By Sarah C. P. Williams Parrots are masters of mimicry, able to repeat hundreds of unique sounds, including human phrases, with uncanny accuracy. Now, scientists say they have pinpointed the neurons that turn these birds into copycats. The discovery could not only illuminate the origins of bird-speak, but might shed light on how new areas of the brain arise during evolution. Parrots, songbirds, and hummingbirds—which can all chirp different dialects, pick up new songs, and mimic sound—all have “song nuclei” in their brains: groups of interconnected neurons that synchronize singing and learning. But the exact boundaries of these regions are fuzzy; some researchers define them as larger or smaller than others do, depending on what criteria they use to outline them. And differences between the song nuclei of parrots—which can better imitate complex sounds—and those of other birds are hard to pinpoint. Neurobiologist Erich Jarvis of Duke University in Durham, North Carolina, was studying the activation of PVALB—a gene that had been previously found in songbirds—within the brains of parrots when he noticed something strange. Stained sections of deceased parrot brains revealed that the gene was turned on at different levels within two distinct areas of what he thought were the song nuclei of the birds’ brains. Sometimes, the gene was activated in a spherical central core of the nuclei. But other times, it was only active in an outer shell of cells surrounding that core. When he and collaborators looked more closely, they found that the inner core and the outer shell—like the chocolate and surrounding candy shell of an M&M—varied in many more ways as well.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 21091 - Posted: 06.25.2015

by Meghan Rosen When we brought Baby S home from the hospital six months ago, his big sister, B, was instantly smitten. She leaned her curly head over his car seat, tickled his toes and cooed like a pro — in a voice squeakier than Mickey Mouse’s. B’s voice — already a happy toddler squeal — sounded as if she'd sucked in some helium. My husband and I wondered about her higher pitch. Are humans hardwired to chitchat squeakily to babies, or did B pick up vocal cues from us? (I don’t sound like that, do I?) If I’m like other mothers, I probably do. American English-speaking moms dial up their pitch drastically when talking to their children. But dads’ voices tend to stay steady, researchers reported May 19 in Pittsburgh at the 169th Meeting of the Acoustical Society of America. “Dads talk to kids like they talk to adults,” says study coauthor Mark VanDam, a speech scientist at Washington State University. But that doesn’t mean fathers are doing anything wrong, he says. Rather, they may be doing something right: offering their kids a kind of conversational bridge to the outside world. Scientists have studied infant- or child-directed speech (often called “motherese” or “parentese”) for decades. In American English, this type of babytalk typically uses high pitch, short utterances, repetition, loud volume and slowed-down speech. Mothers who speak German, Japanese, French, and other languages also tweak their pitch and pace when talking to children. But no one had really studied dads, VanDam says. © Society for Science & the Public 2000 - 2015.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 21050 - Posted: 06.15.2015

By Jason G. Goldman In 1970 child welfare authorities in Los Angeles discovered that a 14-year-old girl referred to as “Genie” had been living in nearly total social isolation from birth. An unfortunate participant in an unintended experiment, Genie proved interesting to psychologists and linguists, who wondered whether she could still acquire language despite her lack of exposure to it. Genie did help researchers better define the critical period for learning speech—she quickly acquired a vocabulary but did not gain proficiency with grammar—but thankfully, that kind of case study comes along rarely. So scientists have turned to surrogates for isolation experiments. The approach is used extensively with parrots, songbirds and hummingbirds, which, like us, learn how to verbally communicate over time; those abilities are not innate. Studying most vocal-learning mammals—for example, elephants, whales, sea lions—is not practical, so Tel Aviv University zoologists Yosef Prat, Mor Taub and Yossi Yovel turned to the Egyptian fruit bat, a vocal-learning species that babbles before mastering communication, as a child does. The results of their study, the first to raise bats in a vocal vacuum, were published this spring in the journal Science Advances. Five bat pups were reared by their respective mothers in isolation, so the pups heard no adult conversations. After weaning, the juveniles were grouped together and exposed to adult bat chatter through a speaker. A second group of five bats was raised in a colony, hearing their species' vocal interactions from birth. Whereas the group-raised bats eventually swapped early babbling for adult communication, the isolated bats stuck with their immature vocalizations well into adolescence. © 2015 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20969 - Posted: 05.23.2015

by Bas den Hond Watch your language. Words mean different things to different people – so the brainwaves they provoke could be a way to identify you. Blair Armstrong of the Basque Center on Cognition, Brain, and Language in Spain and his team recorded the brain signals of 45 volunteers as they read a list of 75 acronyms – such as FBI or DVD – then used computer programs to spot differences between individuals. The participants' responses varied enough that the programs could identify the volunteers with about 94 per cent accuracy when the experiment was repeated. The results hint that such brainwaves could be a way for security systems to verify individuals' identity. While the 94 per cent accuracy seen in this experiment would not be secure enough to guard, for example, a room or computer full of secrets, Armstrong says it's a promising start. Techniques for identifying people based on the electrical signals in their brain have been developed before. A desirable advantage of such techniques is that they could be used to verify someone's identity continuously, whereas passwords or fingerprints only provide a tool for one-off identification. Continuous verification – by face or ear recognition, or perhaps by monitoring brain activity – could in theory allow someone to interact with many computer systems simultaneously, or even with a variety of intelligent objects, without having to repeatedly enter passwords for each device. © Copyright Reed Business Information Ltd
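
As a rough sketch of the kind of matching such a "brainprint" system could rely on (the scheme, data, and names below are assumptions for illustration, not the method or features Armstrong's team actually used), one averaged response vector can be enrolled per person, and a new recording assigned to whichever enrolled vector it correlates with best.

```python
# Illustrative sketch only: enroll one averaged response vector per person,
# then identify a new recording by its highest correlation with the enrolled
# vectors. The toy data and nearest-match rule are assumptions for this
# example, not the study's actual pipeline.
import numpy as np

def identify(probe, enrolled):
    """Return the enrolled identity whose vector best correlates with probe."""
    best_person, best_r = None, -np.inf
    for person, vector in enrolled.items():
        r = np.corrcoef(probe, vector)[0, 1]
        if r > best_r:
            best_person, best_r = person, r
    return best_person

rng = np.random.default_rng(0)
# Toy "brainprints": one 200-sample averaged response per volunteer.
enrolled = {f"volunteer_{i}": rng.normal(size=200) for i in range(5)}
# A repeat recording from volunteer_3, with added measurement noise.
probe = enrolled["volunteer_3"] + rng.normal(scale=0.3, size=200)
print(identify(probe, enrolled))  # volunteer_3
```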

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20959 - Posted: 05.20.2015

By Virginia Morell Like humans, dolphins, and a few other animals, North Atlantic right whales (Eubalaena glacialis) have distinctive voices. The usually docile cetaceans utter about half a dozen different calls, but the way in which each one does so is unique. To find out just how unique, researchers from Syracuse University in New York analyzed the “upcalls” of 13 whales whose vocalizations had been collected from suction cup sensors attached to their backs. An upcall is a contact vocalization that lasts about 1 to 2 seconds and rises in frequency, sounding somewhat like a deep-throated cow’s moo. Researchers think the whales use the calls to announce themselves and to “touch base” with others of their kind, they explained in a poster presented today at the Meeting of the Acoustical Society of America in Pittsburgh, Pennsylvania. After analyzing the duration and harmonic frequency of these upcalls, as well as the rate at which the frequencies changed, the scientists found that they could distinguish the voices of each of the 13 whales. They think their discovery will provide a new tool for tracking and monitoring the critically endangered whales, which number about 450 and range primarily from Florida to Newfoundland. © 2015 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 20941 - Posted: 05.18.2015

by Jessica Hamzelou GOO, bah, waahhhh! Crying is an obvious sign something is up with your little darling, but beyond that, their feelings are tricky to interpret – except at playtime. Trying to decipher the meaning behind the various cries, squeaks and babbles a baby utters will have consumed many a parent. Some researchers reckon babies are simply practising to learn to speak, while others think these noises have some underlying meaning. "Babies probably aren't aware of wanting to tell us something," says Jitka Lindová, an evolutionary psychologist at Charles University in Prague, Czech Republic. Instead, she says, infants are conveying their emotions. But can adults pick up on what those emotions are? Lindová and her colleagues put 333 adults to the test. First they made 20-second recordings of five- to 10-month-old babies while they were experiencing a range of emotions. For example, noises that meant a baby was experiencing pain were recorded while they received their standard vaccinations. The team also collected recordings when infants were hungry, separated from a parent, reunited, just fed, and while they were playing. The volunteers had to listen to a selection of the recordings, then guess which situation each related to. The adults could almost always tell whether a baby was distressed in some way. This makes sense – a baby's survival may depend on an adult being able to tell whether a baby is unwell, in pain or in danger. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 20878 - Posted: 05.04.2015

by Jennifer Viegas Males of a West African monkey species communicate using at least six main sounds: boom-boom, krak, krak-oo, hok, hok-oo and wak-oo. Key to the communication by the male Campbell's monkey is the suffix "oo," according to a new study, which is published in the latest issue of the Proceedings of the Royal Society B. By adding that sound to the end of their calls, the male monkeys have created a surprisingly rich "vocabulary" that males and females of their own kind, as well as a related species of monkey, understand. The study confirms previously suspected translations of the calls. For example, "krak" means leopard, while "krak-oo" refers to other non-leopard threats, such as falling branches. "Boom-boom-krak-oo" can roughly translate to "Watch out for that falling tree branch." "Several aspects of communication in Campbell's monkeys allow us to draw parallels with human language," lead author Camille Coye, a researcher at the University of St. Andrews, told Discovery News. For the study, she and her team broadcast actual and artificially modified male Campbell's monkey calls to 42 male and female members of a related species: Diana monkeys. The latter's vocal responses showed that they understood the calls and replied in predicted ways. They freaked out after hearing "krak," for example, and remained on alert as they do after seeing a leopard. © 2015 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20858 - Posted: 04.29.2015

By Sid Perkins Imagine having a different accent from someone else simply because your house was farther up the same hill. For at least one species of songbird, that appears to be the case. Researchers have found that the mating songs of male mountain chickadees (Poecile gambeli) differ in their duration, loudness, and the frequency ranges of individual chirps, depending in part on the elevation of their habitat in the Sierra Nevada mountains of the western United States. The songs also differed from those at similar elevations on a nearby peak. Young males of this species learn their breeding songs by listening to adult males during their first year of life, the researchers note. And because these birds don’t migrate as the seasons change, and young birds don’t settle far from where they grew up, it’s likely that the differences persist in each local group—the ornithological equivalent of having Southern drawls and Boston accents. Females may use the differences in dialect to distinguish local males from outsiders that may not be as well adapted to the neighborhood they’re trying to invade, the team reports today in Royal Society Open Science. © 2015 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20857 - Posted: 04.29.2015

By Virginia Morell Baby common marmosets, small primates found in the forests of northeastern Brazil, must learn to take turns when calling, just as human infants learn not to interrupt. Even though the marmosets (Callithrix jacchus) don’t have language, they do exchange calls. And the discovery that a young marmoset learns to wait for another marmoset to finish its call before uttering its own sound may help us better understand the origins of human language, say scientists online today in the Proceedings of the Royal Society B. No primate, other than humans, is a vocal learner, with the ability to hear a sound and imitate it—a talent considered essential to speech. But the marmoset researchers say that primates still exchange calls in a manner reminiscent of having a conversation because they wait for another to finish calling before vocalizing—and that this ability is often overlooked in discussions about the evolution of language. If this skill is learned, it would be even more similar to that of humans, because human babies learn to do this while babbling with their mothers. In a lab, the researchers recorded the calls of a marmoset youngster from age 4 months to 12 months and those of its mother or father while they were separated by a dark curtain. In adult exchanges, a marmoset makes a high-pitched contact call, and its fellow responds within 10 seconds. The study showed that the youngster’s responses varied depending on who was calling to them. They were less likely to interrupt their mothers, but not their dads—and both mothers and fathers would give the kids the “silent treatment” if they were interrupted. Thus, the youngster learns the first rule of polite conversation: Don’t interrupt! © 2015 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20828 - Posted: 04.22.2015

Jordan Gaines Lewis Hodor hodor hodor. Hodor hodor? Hodor. Hodor-hodor. Hodor! Oh, um, excuse me. Did you catch what I said? Fans of the hit HBO show Game of Thrones, the fifth season of which premieres this Sunday, know what I’m referencing, anyway. Hodor is the brawny, simple-minded stableboy of the Stark family in Winterfell. His defining characteristic, of course, is that he only speaks a single word: “Hodor.” But those who read the A Song of Ice and Fire book series by George R R Martin may know something that the TV fans don’t: his name isn’t actually Hodor. According to his great-grandmother Old Nan, his real name is Walder. “No one knew where ‘Hodor’ had come from,” she says, “but when he started saying it, they started calling him by it. It was the only word he had.” Whether he intended it or not, Martin created a character who is a textbook example of someone with a neurological condition called expressive aphasia. In 1861, French physician Paul Broca was introduced to a man named Louis-Victor Leborgne. While his comprehension and mental functioning remained relatively normal, Leborgne progressively lost the ability to produce meaningful speech over a period of 20 years. Like Hodor, the man was nicknamed Tan because he only spoke a single word: “Tan.”

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20773 - Posted: 04.10.2015

Tom Bawden Scientists have deciphered the secrets of gibbon “speech” – discovering that the apes are sophisticated communicators employing a range of more than 450 different calls to talk to their companions. The research is so significant that it could provide clues on the evolution of human speech and also suggests that other animal species could speak a more precise language than has been previously thought, according to lead author Dr Esther Clarke of Durham University. Her study found that gibbons produce different categories of “hoo” calls – relatively quiet sounds that are distinct from their more melodic “song” calls. These categories of call allow the animals to distinguish when their fellow gibbons are foraging for food, alerting them to distant noises or warning others about the presence of predators. In addition, Dr Clarke found that each category of “hoo” call can be broken down further, allowing gibbons to be even more specific in their communication. A warning about lurking raptor birds, for example, sounds different to one about pythons or clouded leopards – being pitched at a particularly low frequency to ensure it is too deep for the birds of prey to hear. The warning call denoting the presence of tigers and leopards is the same because they belong to the same class of big cats, the research found. © independent.co.uk

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20768 - Posted: 04.08.2015

Alice Park We start to talk before we can read, so hearing words, and getting familiar with their sounds, is obviously a critical part of learning a language. But in order to read, and especially in order to read quickly, our brains have to “see” words as well. At least that’s what Maximilian Riesenhuber, a neuroscientist at Georgetown University Medical Center, and his colleagues found in an intriguing brain-mapping study published in the Journal of Neuroscience. The scientists recruited a small group of college students to learn a set of 150 nonsense words, and they imaged their brains before and after the training. Before they learned the words, their brains registered them as a jumble of symbols. But after they were trained to give them a meaning, the words looked more like familiar words they used every day, like car, cat or apple. The difference in the way the brain treated the words involved “seeing” them rather than sounding them out. The closest analogy would be for adults learning a foreign language based on a completely different alphabet system. Students would have to first learn the new alphabet, assigning sounds to each symbol, and in order to read, they would have to sound out each letter to put words together. In a person’s native language, such reading occurs in an entirely different way.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20719 - Posted: 03.25.2015

By Nicholas Weiler Where did the thief go? You might get a more accurate answer if you ask the question in German. How did she get away? Now you might want to switch to English. Speakers of the two languages put different emphasis on actions and their consequences, influencing the way they think about the world, according to a new study. The work also finds that bilinguals may get the best of both worldviews, as their thinking can be more flexible. Cognitive scientists have debated whether your native language shapes how you think since the 1940s. The idea has seen a revival in recent decades, as a growing number of studies suggested that language can prompt speakers to pay attention to certain features of the world. Russian speakers are faster to distinguish shades of blue than English speakers, for example. And Japanese speakers tend to group objects by material rather than shape, whereas Koreans focus on how tightly objects fit together. Still, skeptics argue that such results are laboratory artifacts, or at best reflect cultural differences between speakers that are unrelated to language. In the new study, researchers turned to people who speak multiple languages. By studying bilinguals, “we’re taking that classic debate and turning it on its head,” says psycholinguist Panos Athanasopoulos of Lancaster University in the United Kingdom. Rather than ask whether speakers of different languages have different minds, he says, “we ask, ‘Can two different minds exist within one person?’ ” Athanasopoulos and colleagues were interested in a particular difference in how English and German speakers treat events. © 2015 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 14: Attention and Consciousness
Link ID: 20700 - Posted: 03.19.2015

By Matthew J.X. Malady One hour and seven minutes into the decidedly hit-or-miss 1996 comedy Black Sheep, the wiseass sidekick character played by David Spade finds himself at an unusually pronounced loss for words. While riding in a car driven by Chris Farley’s character, he glances at a fold-up map and realizes he somehow has become unfamiliar with the name for paved driving surfaces. “Robes? Rouges? Rudes?” Nothing seems right. Even when informed by Farley that the word he’s looking for is roads, Spade’s character continues to struggle: “Rowds. Row-ads.” By this point, he’s become transfixed. “That’s a total weird word,” he says, “isn’t it?” Now, it’s perhaps necessary to mention that, in the context of the film, Spade’s character is high off nitrous oxide that has leaked from the car’s engine boosters. But never mind that. Row-ad-type word wig outs similar to the one portrayed in that movie are things that actually happen, in real life, to people with full and total control over their mental capacities. These wordnesias sneak up on us at odd times when we’re writing or reading text. I was in a full-on wordnesiac state. On one of my spelling attempts, I think I even threw a K into the mix. It was bad. Here’s how they work: Every now and again, for no good or apparent reason, you peer at a standard, uncomplicated word in a section of text and, well, go all row-ads on it. If you’re typing, that means inexplicably blanking on how to spell something easy like cake or design. The reading version of wordnesia occurs when a common, correctly spelled word either seems as though it can’t possibly be spelled correctly, or like it’s some bizarre combination of letters you’ve never before seen—a grouping that, in some cases, you can’t even imagine being the proper way to compose the relevant term. © 2014 The Slate Group LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20688 - Posted: 03.14.2015