Links for Keyword: Language



Links 141 - 160 of 690

Bruce Bower Youngsters befuddled by printed squiggles on the pages of a storybook nonetheless understand that a written word, unlike a drawing, stands for a specific spoken word, say psychologist Rebecca Treiman of Washington University in St. Louis and her colleagues. Children as young as 3 can be tested for a budding understanding of writing’s symbolic meaning, the researchers conclude January 6 in Child Development. “Our results show that young children have surprisingly advanced knowledge about the fundamental properties of writing,” Treiman says. “This knowledge isn’t explicitly taught to children but probably gained through early exposure to print from sources such as books and computers.” Researchers and theorists have previously proposed that children who cannot yet read don’t realize that a written word corresponds to a particular spoken word. Studies have found, for instance, that nonliterate 3- to 5-year-olds often assign different meanings to the same word, such as girl, depending on whether that word appears under a picture of a girl or a cup. Treiman’s investigation “is the first to show that kids as young as 3 have the insight that print stands for something beyond what’s scripted on the page,” says psychologist Kathy Hirsh-Pasek of Temple University in Philadelphia. Preschoolers who are regularly read to have an advantage in learning that written words have specific meanings, suspects psychologist Roberta Golinkoff of the University of Delaware in Newark. © Society for Science & the Public 2000 - 2015.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21761 - Posted: 01.08.2016

Angus Chen English bursts with consonants. We have words that string consonants one after another, like angst, diphthong and catchphrase. But other languages keep more vowels and open sounds. And that variability might be because they evolved in different habitats. Consonant-heavy syllables don't carry very well in places like windy mountain ranges or dense rainforests, researchers say. "If you have a lot of tree cover, for example, [sound] will reflect off the surface of leaves and trunks. That will break up the coherence of the transmitted sound," says Ian Maddieson, a linguist at the University of New Mexico. That can be a real problem for complicated consonant-rich sounds like "spl" in "splice" because of the series of high-frequency noises. In this case, there's a hiss, a sudden stop and then a pop. Where a simple, steady vowel sound like "e" or "a" can cut through thick foliage or the cacophony of wildlife, these consonant-heavy sounds tend to get scrambled. Hot climates might wreck a word's coherence as well, since sunny days create pockets of warm air that can punch into a sound wave. "You disrupt the way it was originally produced, and it becomes much harder to recognize what sound it was," Maddieson says. "In a more open, temperate landscape, prairies in the Midwest of the United States [or in Georgia] for example, you wouldn't have that. So the sound would be transmitted with fewer modifications." © 2015 NPR

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21616 - Posted: 11.07.2015

Paul Ibbotson and Michael Tomasello The natural world is full of wondrous adaptations such as camouflage, migration and echolocation. In one sense, the quintessentially human ability to use language is no more remarkable than these other talents. However, unlike these other adaptations, language seems to have evolved just once, in one out of 8.7 million species on earth today. The hunt is on to explain the foundations of this ability and what makes us different from other animals. The intellectual most closely associated with trying to pin down that capacity is Noam Chomsky. He proposed a universal grammatical blueprint that was unique to humans. This blueprint operated like a computer program. Instead of running Windows or Excel, this program performed “operations” on language – any language. Regardless of which of the 6000+ human languages this code was exposed to, it would guide the learner to the correct adult grammar. It was a bold claim: despite the surface variations we hear between Swahili, Japanese and Latin, they all run on the same piece of underlying software. As ever, remarkable claims require remarkable evidence, and in the 50 years since some of these ideas were laid out, history has not been kind. First, it turned out that it is really difficult to state what is “in” universal grammar in a way that does justice to the sheer diversity of human languages. Second, it looks as if kids don’t learn language in the way predicted by a universal grammar; rather, they start with small pockets of reliable patterns in the language they hear, such as Where’s the X?, I wanna X, More X, It’s a X, I’m X-ing it, Put X here, Mommy’s X-ing it, Let’s X it, Throw X, X gone, I X-ed it, Sit on the X, Open X, X here, There’s a X, X broken … and gradually build their grammar on these patterns, from the “bottom up”. © 2015 Guardian News and Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21607 - Posted: 11.06.2015

By Hanae Armitage About 70 million people worldwide stutter when they speak, and it turns out humans aren’t the only ones susceptible to verbal hiccups. Scientists at this year’s Society for Neuroscience Conference in Chicago, Illinois, show that mice, too, can stumble in their vocalizations. In humans, stuttering has long been linked to a mutation in the “housekeeping” gene Gnptab, which maintains basic levels of cellular function. To cement this curious genetic link, researchers decided to induce the Gnptab “stutter mutation” in mice. They suspected the change would trigger a mouse version of stammering. But deciphering stuttered squeaks is no easy task, so researchers set up a computerized model to register stutters through a statistical analysis of vocalizations. After applying the model to human speech, researchers boiled the verbal impediment down to two basic characteristics—fewer vocalizations in a given period of time and longer gaps in between each vocalization. For example, in 1 minute, stuttering humans made just 90 vocalizations compared with 125 for non-stutterers. Using these parameters to evaluate mouse vocalizations, researchers were able to identify stuttering mice over a 3.5-minute period. As expected, the mice carrying the mutated gene had far fewer vocalizations, with longer gaps between “speech” compared with their unmodified littermates—Gnptab mutant mice had about 80 vocalizations compared with 190 in the nonmutant mice. The findings not only supply evidence for Gnptab’s role in stuttering, but they also show that its function remains relatively consistent across multiple species. Scientists say the genetic parallel could help reveal the neural mechanisms behind stuttering, be it squeaking or speaking. © 2015 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21527 - Posted: 10.20.2015

By Christopher Intagliata "Babies come prepared to learn any of the world's languages." Alison Bruderer, a cognitive scientist at the University of British Columbia. "Which means no matter where they're growing up in the world, their brains are prepared to pick up the language they're listening to around them." And listen they do. But another key factor in discerning a language’s particular sounds may be for babies to move their tongues as they listen. Bruderer and her colleagues tested that notion by sitting 24 six-month-olds in front of a video screen and displaying a checkerboard pattern, while they played one of two tracks: a single, repeated "D" sound in Hindi, or two slightly different, alternating "D" sounds. The idea here is that babies have a short attention span, so novel things hold their gaze. And indeed, the babies did stare at the screen longer while the alternating "D"s played than for the single “D”—indicating they could detect the novelty. Until, that is, the researchers blocked the babies' tongue movements by having them suck on a teething device. Then the effect disappeared, with the babies unable to differentiate the single "D" sound from the alternating "D" sounds. And when the babies used a different teether that did not block tongue movement, they once again appeared to comprehend the difference between the Ds. The study is in the Proceedings of the National Academy of Sciences. © 2015 Scientific American

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 4: Development of the Brain
Link ID: 21517 - Posted: 10.16.2015

by Laura Sanders Like every other person who carries around a smartphone, I take a lot of pictures, mostly of my kids. I thought I was bad with a few thousand snaps filling my phone’s memory. But then I talked to MIT researcher Deb Roy. For three years, Roy and a small group of researchers recorded every waking moment of Roy’s son’s life at home, amassing over 200,000 hours of video and audio recordings. Roy’s intention wasn’t to prove he was the proudest parent of all time. Instead, he wanted to study how babies learn to say words. Roy, a communication and machine learning expert, and his wife Rupal Patel, also a speech researcher, recognized that having a child would be a golden research opportunity. The idea to amass this gigantic dataset “was kicking around and something we thought about for years,” Roy says. So after a pregnancy announcement and lots of talking and planning and “fascinating conversations” with the university administration in charge of approving human experiments, the researchers decided to go for it. To the delight of his parents, a baby boy arrived in 2005. When Roy and Patel brought their newborn home, the happy family was greeted by 11 cameras and 14 microphones, tucked up into the ceiling. From that point on, cameras rolled whenever the baby was awake. © Society for Science & the Public 2000 - 2015

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 4: Development of the Brain
Link ID: 21429 - Posted: 09.22.2015

By Emily Chung, Whadd'ya at? Ow ya goin'? If you were at a picnic with a bunch of Newfoundlanders or Australians, those are the greetings you might fling around. Similarly, scientists who eavesdrop on sperm whales – Moby Dick's species – have found they also have distinct "dialects." And a new study suggests that, like human dialects, they arise through cultural learning. "Cultural transmission seems key to the partitioning of sperm whales into… clans," the researchers wrote in a paper published today in the journal Nature Communications. Sperm whales live around the world, mainly in deeper waters far offshore. The solitary males live in colder areas, and roam in Canadian waters in areas where the ocean depth is more than 1000 metres, says Mauricio Cantor, the Dalhousie University PhD student who led the new study with Hal Whitehead, a Dalhousie biology professor. The females live in warmer, more southern waters, in loose family groups of around seven to 12 whales – sisters, aunts, grandmothers, cousins, and the occasional unrelated friend and their calves. Sometimes, they meet up with other families for gatherings of up to 200 whales, similar to human picnics or festivals. These can last from a few hours to a few days. The whales that gather in these groups, called clans, have distinct "dialects" of click patterns, known as codas, which differ from the clicks they use for echolocation when they're hunting for food. They use codas to talk to each other when they surface between dives. © 2015 CBC/Radio-Canada.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21388 - Posted: 09.09.2015

By Michael Balter If you find yourself along the Atlantic coastal border between Spain and France, here are some phrases that might come in handy: Urte askotarako! (“Pleased to meet you!”), Eskerrik asko! (“Thank you!”), and Non daude komunak? (“Where is the toilet?”). Welcome to Basque Country, where many people speak a musical language that has no known relationship to any other tongue. Many researchers have assumed that Basque must represent a “relic language” spoken by the hunter-gatherers who occupied Western Europe before farmers moved in about 7500 years ago. But a new study contradicts that idea and concludes that the Basques are descended from a group of those early farmers that kept to itself as later waves of migration swept through Europe. The great majority of Europeans speak languages belonging to the Indo-European family, which includes such diverse tongues as German, Greek, Spanish, and French; a smaller number speak Uralic languages like Finnish, Hungarian, and Estonian. But Basque stands truly alone: it is what linguists call a “language isolate.” This uniqueness is a source of pride among the nearly 700,000 Basque speakers, some of whom have called for the creation of an independent nation separate from Spain and France. For scientists, however, Basque is a major unsolved mystery. In the 19th century, some anthropologists claimed that Basques had differently shaped skulls than other Europeans. Yet although that racial idea had been discredited by the 20th century, researchers have been able to show that the Basques have a striking number of genetic differences that set them apart from other Europeans. Variations in their immune cells and proteins include a higher-than-normal frequency of Rh-negative blood types, for example. Those findings led to the hypothesis that the Basques descended from early hunter-gatherers who had somehow avoided being genetically overwhelmed when farming spread into Europe from the Near East. But some recent studies have questioned just how genetically distinct the Basques really are. © 2015 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21384 - Posted: 09.08.2015

By Amy Ellis Nutt There may finally be an explanation for why men are often less verbally adept than women at expressing themselves. It's the testosterone. Scientists have long known that language development is different between boys and girls. But in scanning the brains of 18 individuals before and after undergoing hormone treatment for female-to-male sex reassignment, Austrian and Dutch researchers found evidence of specific brain structure differences. In particular, they found two areas in the left hemispheres of the transgender men that lost gray matter volume during high-testosterone treatment over a period of four weeks: Broca's area, which is involved in the production of language, and Wernicke's area, which processes language. According to the study, which was presented this week at the European College of Neuropsychopharmacology Congress, this may help explain why verbal abilities are often stronger in women than in men. "In more general terms, these findings may suggest that a genuine difference between the brains of women and men is substantially attributable to the effects of circulating hormones," said one of the researchers at the conference, Rupert Lanzenberger from Vienna. "Moreover, the hormonal influence on human brain structure goes beyond the early developmental phase and is still present in adulthood." Previous research has shown that higher testosterone is linked to smaller vocabulary in children and also that verbal fluency skills seemed to decrease after female-to-male sex reassignment testosterone treatment.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 8: Hormones and Sex
Link ID: 21381 - Posted: 09.03.2015

Bill McQuay The natural world is abuzz with the sound of animals communicating — crickets, birds, even grunting fish. But scientists learning to decode these sounds say the secret signals of African elephants — their deepest rumblings — are among the most intriguing calls any animal makes. Katy Payne, the same biologist who recognized song in the calls of humpback whales in the 1960s, went on to help create the Elephant Listening Project in the Central African Republic in the 1980s. At the time, Payne's team was living in shacks in a dense jungle inhabited by hundreds of rare forest elephants. That's where one of us — Bill McQuay — first encountered the roar of an elephant in 2002, while reporting a story for an NPR-National Geographic collaboration called Radio Expeditions. Here's how Bill remembers that day in Africa: I was walking through this rainforest to an observation platform built up in a tree — out of the reach of the elephants. I climbed up onto the platform, a somewhat treacherous exercise with all my recording gear. Then I set up my recording equipment, put on the headphones, and started listening. That first elephant roar sounded close. But I was so focused on the settings on my recorder that I didn't bother to look around. The second roar sounded a lot closer. I thought, this is so cool! What I didn't realize was, there was this huge bull elephant standing right underneath me — pointing his trunk up at me, just a few feet away. Apparently he was making a "dominance display." © 2015 NPR

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21319 - Posted: 08.20.2015

Alexander Christie-Miller You could say they sent the first tweets. An ancient whistling language that sounds a little like birdsong has been found to use both sides of the brain – challenging the idea that the left side is all important for communicating. The whistling language is still used by around 10,000 people in the mountains of north-east Turkey, and can carry messages as far as 5 kilometres. Researchers have now shown that this language involves the brain’s right hemisphere, which was already known to be important for understanding music. Until recently, it was thought that the task of interpreting language fell largely to the brain’s left hemisphere. Onur Güntürkün of Ruhr University Bochum in Germany wondered whether the musical melodies and frequencies of whistled Turkish might require people to use both sides of their brain to communicate. His team tested 31 fluent whistlers by playing slightly different spoken or whistled syllables into their left and right ears at the same time, and asking them to say what they heard. The left hemisphere depends slightly more on sounds received by the right ear, and vice versa for the right hemisphere. By comparing the number of times the whistlers reported the syllables that had been played into either their right or left ear, they could tell how often each side of the brain was dominant. As expected, when the syllables were spoken, the right ear and left hemisphere were dominant 75 per cent of the time. But when syllables were whistled, the split between right and left dominance was about even. © Copyright Reed Business Information Ltd.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21309 - Posted: 08.18.2015

By Perri Klass, A little more than a year ago, the American Academy of Pediatrics issued a policy statement saying that all pediatric primary care should include literacy promotion, starting at birth. That means pediatricians taking care of infants and toddlers should routinely be advising parents about how important it is to read to even very young children. The policy statement, which I wrote with Dr. Pamela C. High, included a review of the extensive research on the links between growing up with books and reading aloud, and later language development and school success. But while we know that reading to a young child is associated with good outcomes, there is only limited understanding of what the mechanism might be. Two new studies examine the unexpectedly complex interactions that happen when you put a small child on your lap and open a picture book. This month, the journal Pediatrics published a study that used functional magnetic resonance imaging to study brain activity in 3- to 5-year-old children as they listened to age-appropriate stories. The researchers found differences in brain activation according to how much the children had been read to at home. Children whose parents reported more reading at home and more books in the home showed significantly greater activation of brain areas in a region of the left hemisphere called the parietal-temporal-occipital association cortex. This brain area is “a watershed region, all about multisensory integration, integrating sound and then visual stimulation,” said the lead author, Dr. John S. Hutton, a clinical research fellow at Cincinnati Children’s Hospital Medical Center. This region of the brain is known to be very active when older children read to themselves, but Dr. Hutton notes that it also lights up when younger children are hearing stories. What was especially novel was that children who were exposed to more books and home reading showed significantly more activity in the areas of the brain that process visual association, even though the child was in the scanner just listening to a story and could not see any pictures. © 2015 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 4: Development of the Brain
Link ID: 21308 - Posted: 08.18.2015

Geoff Brumfiel Learning to make sounds by listening to others is a skill that helps make us human. But research now suggests a species of monkey may have evolved similar abilities. Marmosets have the capacity to learn calls from their parents, according to research published Thursday in the journal Science. The results mean that studying marmosets might provide insights into developmental disorders found in humans. It also suggests that vocal learning may be more widespread than many researchers thought. Many animals can link sounds with meaning. Dogs respond to simple calls; chimpanzees can even communicate with people using sign language. But the ability to hear a sound and mimic it is possessed by only a small number of species: primarily song birds and humans. "We didn't think that mammals and primates in particular — besides us — had any type of vocal learning," says Asif Ghazanfar, a neuroscientist at Princeton University who led the new study. Enter the small, adorable common marmoset. These fuzzy South American primates look more like squirrels than a monkey. "They're cute, and they smell. They wash themselves in their own urine," Ghazanfar says. "I'm not sure why they do that." But once you get over the stink, these little guys are interesting. Marmoset mommies always give birth to twins and they need help rearing them. So, unlike many mammal species, fathers lend a hand, along with siblings and other community members. Ghazanfar thinks all that child care is what gives marmosets another special trait: They're super talkative. "They're chattering nonstop," he says. "That is also very different from our close relatives the chimpanzees." © 2015 NPR

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 8: Hormones and Sex
Link ID: 21300 - Posted: 08.15.2015

by Richard Farrell Bonobos have a capacity to do something human infants have been shown to do: use a single sound whose meaning varies based on context, a form of "flexible" communication previously thought specific to humans. The finding was made by researchers from the University of Birmingham and the University of Neuchatel, in a paper just published in the journal PeerJ. The newly identified bonobo call is a short, high-pitched "peep," made with a closed mouth. The scientists studied the call's acoustic structure and observed that it did not change between what they termed "neutral" and "positive" circumstances (for example, between activities such as feeding or resting), suggesting that other bonobos receiving the call would need to weigh contextual information to discern its meaning. Human babies do something similarly flexible, using sounds called protophones -- different from highly specific sounds such as crying or laughter -- that are made independently of how they are feeling emotionally. The appearance of this capability in the first year of life is "a critical step in the development of vocal language and may have been a critical step in the evolution of human language," an earlier study on infant vocalization noted. The finding challenges the idea that calls from primates such as bonobos -- which, along with chimpanzees, are our closest relatives -- are strictly matched with specific contexts and emotions, whether those sounds are territorial barks or shrieks of alarm. © 2015 Discovery Communications, LLC.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21265 - Posted: 08.05.2015

By Michael Balter Have you ever wondered why you say “The boy is playing Frisbee with his dog” instead of “The boy dog his is Frisbee playing with”? You may be trying to give your brain a break, according to a new study. An analysis of 37 widely varying tongues finds that, despite the apparent great differences among them, they share what might be a universal feature of human language: All of them have evolved to make communication as efficient as possible. Earth is a veritable Tower of Babel: Up to 7000 languages are still spoken across the globe, belonging to roughly 150 language families. And they vary widely in the way they put sentences together. For example, the three major building blocks of a sentence, subject (S), verb (V), and object (O), can come in three different orders. English and French are SVO languages, whereas German and Japanese are SOV languages; a much smaller number, such as Arabic and Hebrew, use the VSO order. (No well-documented languages start sentences or clauses with the object, although some linguists have jokingly suggested that Klingon might do so.) Yet despite these different ways of structuring sentences, previous studies of a limited number of languages have shown that they tend to limit the distance between words that depend on each other for their meaning. Such “dependency” is key if sentences are to make sense. For example, in the sentence “Jane threw out the trash,” the word “Jane” is dependent on “threw”—it modifies the verb by telling us who was doing the throwing, just as we need “trash” to know what was thrown, and “out” to know where the trash went. Although “threw” and “trash” are three words away from each other, we can still understand the sentence easily. © 2015 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21263 - Posted: 08.04.2015

By Ariana Eunjung Cha Think you have your hands full making sure your baby is fed and clean and gets enough sleep? Here's another thing for the list: developing your child's social skills by the way you talk. People used to think that social skills were something kids were born with, not taught. But a growing body of research shows that the environment a child grows up in as an infant and toddler can have a major impact on how they interact with others as they get older. And it turns out that a key factor may be the type of language they hear around them, even at an age when all they can do is babble. Psychologists at the University of York observed 40 mothers and their babies at 10, 12, 16 and 20 months and logged the kind of language mothers used during play. They were especially interested in "mind-related comments," which include inferences about what someone is thinking when a behavior or action happens. Elizabeth Kirk, a lecturer at the university who is the lead author of the study, published in the British Journal of Developmental Psychology on Monday, gave this as an example: If an infant has difficulty opening a door on a toy, the parent might comment that the child appears "frustrated." Then researchers revisited the children when they were 5 or 6 years of age and assessed their socio-cognitive ability. The test involved reading a story and having the children answer comprehension questions that show whether they understood the social concept -- persuasion, joke, misunderstanding, white lies, lies, and so forth -- that was represented.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 4: Development of the Brain
Link ID: 21239 - Posted: 07.30.2015

by Sarah Zielinski It may not be polite to eavesdrop, but sometimes, listening in on others’ conversations can provide valuable information. And in this way, humans are like most other species in the animal world, where eavesdropping is a common way of gathering information about potential dangers. Because alarm calls can vary from species to species, scientists have assumed that eavesdropping on these calls of “danger!” requires some kind of learning. Evidence of that learning has been scant, though. The only study to look at this topic tested five golden-mantled ground squirrels and found that the animals may have learned to recognize previously unknown alarm calls. But the experiment couldn’t rule out other explanations for the squirrels’ behavior, such as that the animals had simply become more wary in general. So Robert Magrath and colleagues at Australian National University in Canberra turned to small Australian birds called superb fairy-wrens. In the wild, these birds will flee to safety when they hear unfamiliar sounds that sound like their own alarm calls, but not when they hear alarm calls that sound different from their own. There’s an exception, though: They’ll take to cover in response to the alarm calls of other species that are common where they live. That suggests the birds learn to recognize those calls. In the lab, the team played the alarm call from a thornbill or a synthetic alarm call for 10 fairy-wrens. The birds didn’t respond to the noise. Then the birds went through two days of training in which the alarm call was played as a mock predator glided overhead. Another group of birds heard the calls but there was no pretend predator. © Society for Science & the Public 2000 - 2015

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 21180 - Posted: 07.18.2015

By Lauran Neergaard, New research suggests it may be possible to predict which preschoolers will struggle to read — and it has to do with how the brain deciphers speech when it's noisy. Scientists are looking for ways to tell, as young as possible, when children are at risk for later learning difficulties so they can get early interventions. There are some simple pre-reading assessments for preschoolers. But Northwestern University researchers went further and analyzed brain waves of children as young as three. How well youngsters' brains recognize specific sounds — consonants — amid background noise can help identify who is more likely to have trouble with reading development, the team reported Tuesday in the journal PLOS Biology. If the approach pans out, it may provide "a biological looking glass," said study senior author Nina Kraus, director of Northwestern's Auditory Neuroscience Laboratory. "If you know you have a three-year-old at risk, you can as soon as possible begin to enrich their life in sound so that you don't lose those crucial early developmental years." Connecting sound to meaning is a key foundation for reading. For example, preschoolers who can match sounds to letters earlier go on to read more easily. Auditory processing is part of that pre-reading development: If your brain is slower to distinguish a "D" from a "B" sound, for example, then recognizing words and piecing together sentences could be affected, too. What does noise have to do with it? It stresses the system, as the brain has to tune out competing sounds to selectively focus, in just fractions of milliseconds. And consonants are more vulnerable to noise than vowels, which tend to be louder and longer, Kraus explained. ©2015 CBC/Radio-Canada

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 4: Development of the Brain
Link ID: 21173 - Posted: 07.15.2015

Henry Nicholls Andy Russell had entered the lecture hall late and stood at the back, listening to the close of a talk by Marta Manser, an evolutionary biologist at the University of Zurich who works on animal communication. Manser was explaining some basic concepts in linguistics to her audience: how humans use meaningless sounds or “phonemes” to generate a vast dictionary of meaningful words. In English, for instance, just 40 different phonemes can be resampled into a rich vocabulary of some 200,000 words. But, explained Manser, this linguistic trick of reorganising the meaningless to create new meaning had not been demonstrated in any non-human animal. This was back in 2012. Russell’s “Holy shit, man” excitement came about because he was pretty sure he had evidence for phoneme structuring in the chestnut-crowned babbler, a bird he’s been studying in the semi-arid deserts of south-east Australia for almost a decade. After the talk, Russell (a behavioural ecologist at the University of Exeter) travelled to Zurich to present his evidence to Manser’s colleague Simon Townsend, whose research explores the links between animal communication systems and human language. The fruits of their collaboration are published today in PLoS Biology. One of Russell’s students, Jodie Crane, had been recording the calls of the chestnut-crowned babbler for her PhD. The PLoS Biology paper focuses on two of these calls, which appear to be made up of two identical elements, just arranged in a different way. © 2015 Guardian News and Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21110 - Posted: 06.30.2015

Emma Bowman In a small, sparse makeshift lab, Melissa Malzkuhn practices her range of motion in a black, full-body unitard dotted with light-reflecting nodes. She's strapped on a motion capture, or mocap, suit. Infrared cameras that line the room will capture her movement and translate it into a 3-D character, or avatar, on a computer. But she's not making a Disney animated film. Three-dimensional motion capture has developed quickly in the last few years, most notably as a Hollywood production tool for computer animation in films like Planet of the Apes and Avatar. Behind the scenes though, leaders in the deaf community are taking on the technology to create and improve bilingual learning tools in American Sign Language. Malzkuhn has suited up to record a simple nursery rhyme. Being deaf herself, she spoke with NPR through an interpreter. "I know in English there's just a wealth of nursery rhymes available, but we really don't see as much in ASL," she says. "So we're gonna be doing some original work here in developing nursery rhymes." That's because sound-based rhymes don't cross over well into the visual language of ASL. Malzkuhn heads the Motion Light Lab, or ML2. It's the newest hub of the National Science Foundation Science of Learning Center, Visual Language and Visual Learning (VL2) at Gallaudet University, the premier school for deaf and hard of hearing students. © 2015 NPR

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 20: ; Chapter 15: Language and Lateralization
Link ID: 21107 - Posted: 06.29.2015