Links for Keyword: Language



Links 1 - 20 of 508

By Marissa Fessenden Songbirds stutter, babble when young, become mute if parts of their brains are damaged, learn how to sing from their elders and can even be "bilingual"—in other words, songbirds' vocalizations share a lot of traits with human speech. However, that similarity goes beyond behavior, researchers have found. Even though humans and birds are separated by millions of years of evolution, the genes that give us our ability to learn speech have much in common with those that lend birds their warble. A four-year effort involving more than 100 researchers around the world put the power of nine supercomputers into analyzing the genomes of 48 species of birds. The results, published this week in a package of eight articles in Science and 20 papers in other journals, provide the most complete picture of the bird family tree thus far. The project has also uncovered genetic signatures in song-learning bird brains that have surprising similarities to the genetics of speech in humans, a finding that could help scientists study human speech. The analysis suggests that most modern birds arose in an impressive speciation event, a "big bang" of avian diversification, in the 10 million years immediately following the extinction of dinosaurs. This period is more recent than posited in previous genetic analyses, but it lines up with the fossil record. By delving deeper into the rich data set, research groups identified when birds lost their teeth, investigated the relatively slow evolution of crocodiles and outlined the similarities between birds' and humans' vocal learning ability, among other findings. © 2014 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 20423 - Posted: 12.16.2014

by Colin Barras It's not just great minds that think alike. Dozens of the genes involved in the vocal learning that underpins human speech are also active in some songbirds. And knowing this suggests that birds could become a standard model for investigating the genetics of speech production – and speech disorders. Complex language is a uniquely human trait, but vocal learning – the ability to pick up new sounds by imitating others – is not. Some mammals, including whales, dolphins and elephants, share our ability to learn new vocalisations. So do three groups of birds: the songbirds, parrots and hummingbirds. The similarities between vocal learning in humans and birds are not just superficial. We know, for instance, that songbirds have specialised vocal learning brain circuits that are similar to those that mediate human speech. What's more, a decade ago we learned that FOXP2, a gene known to be involved in human language, is also active in "area X" of the songbird brain – one of the brain regions involved in those specialised vocal learning circuits. Andreas Pfenning at the Massachusetts Institute of Technology and his colleagues have now built on these discoveries. They compared maps of genetic activity – transcriptomes – in brain tissue taken from the zebra finch, budgerigar and Anna's hummingbird, representing the three groups of vocal-learning birds. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20414 - Posted: 12.13.2014

By JOHN McWHORTER “TELL me, why should we care?” he asks. It’s a question I can expect whenever I do a lecture about the looming extinction of most of the world’s 6,000 languages, a great many of which are spoken by small groups of indigenous people. For some reason the question is almost always posed by a man seated in a row somewhere near the back. Asked to elaborate, he says that if indigenous people want to give up their ancestral language to join the modern world, why should we consider it a tragedy? Languages have always died as time has passed. What’s so special about a language? The answer I’m supposed to give is that each language, in the way it applies words to things and in the way its grammar works, is a unique window on the world. In Russian there’s no word just for blue; you have to specify whether you mean dark or light blue. In Chinese, you don’t say next week and last week but the week below and the week above. If a language dies, a fascinating way of thinking dies along with it. I used to say something like that, but lately I have changed my answer. Certainly, experiments do show that a language can have a fascinating effect on how its speakers think. Russian speakers are on average 124 milliseconds faster than English speakers at identifying when dark blue shades into light blue. A French person is a tad more likely than an Anglophone to imagine a table as having a high voice if it were a cartoon character, because the word is marked as feminine in his language. This is cool stuff. But the question is whether such infinitesimal differences, perceptible only in a laboratory, qualify as worldviews — cultural standpoints or ways of thinking that we consider important. I think the answer is no. Furthermore, extrapolating cognitive implications from language differences is a delicate business. In Mandarin Chinese, for example, you can express If you had seen my sister, you’d have known she was pregnant with the same sentence you would use to express the more basic If you see my sister, you know she’s pregnant. One psychologist argued some decades ago that this meant that Chinese makes a person less sensitive to such distinctions, which, let’s face it, is discomfitingly close to saying Chinese people aren’t as quick on the uptake as the rest of us. The truth is more mundane: Hypotheticality and counterfactuality are established more by context in Chinese than in English. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20401 - Posted: 12.08.2014

By Carolyn Gregoire When reading about Harry Potter's adventures fighting Lord Voldemort or flying around the Quidditch field on his broomstick, we can become so absorbed in the story that the characters and events start to feel real. And according to neuroscientists, there's a good reason for this. Researchers in the Machine Learning Department at Carnegie Mellon University scanned the brains of Harry Potter readers, and found that reading about Harry's adventures activates the same brain regions used to perceive people's intentions and actions in the real world. The researchers performed fMRI scans on a group of eight study participants while they read chapter nine of Harry Potter and the Sorcerer's Stone, which describes Harry's first flying lesson. Then, they analyzed the scans, one cubic millimeter at a time, for four-word segments of the chapter in order to build the first integrated computational model of reading. The experiment was timed so that the readers saw four words during each two-second fMRI scan, and for each word the researchers identified 195 detailed features that the brain would process. Then, an algorithm was applied to analyze the activation of each cubic millimeter of the brain for each two-second scan, associating various word features with different regions of the brain. Using the model, the researchers were able to predict which of two passages the subjects were reading with a 74 percent accuracy rate. (A rough sketch of this kind of decoder follows this entry.) ©2014 TheHuffingtonPost.com, Inc

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20386 - Posted: 12.03.2014
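
The Carnegie Mellon item above describes the decoding procedure only in outline. As a rough illustration of how a feature-based reading decoder of this general kind can work (toy random data and invented variable names, not the published model or its actual 195 features), the sketch below maps per-scan word features to voxel activation with ridge regression and then guesses which of two candidate passages better explains held-out scans:

    import numpy as np
    from sklearn.linear_model import Ridge

    # Toy stand-ins: 200 two-second scans, 4 words shown per scan,
    # 195 numeric features per word, 5,000 voxels per scan.
    rng = np.random.default_rng(0)
    n_scans, n_feat, n_voxels = 200, 195, 5000
    word_features = rng.normal(size=(n_scans * 4, n_feat))
    fmri = rng.normal(size=(n_scans, n_voxels))

    # Collapse the four words shown during each scan into one feature vector.
    scan_features = word_features.reshape(n_scans, 4, n_feat).mean(axis=1)

    # Learn a linear mapping from word features to voxel activation.
    train, test = slice(0, 150), slice(150, 200)
    model = Ridge(alpha=1.0).fit(scan_features[train], fmri[train])

    # To decode, predict the activation each candidate passage would evoke and
    # keep the passage whose prediction best matches the observed scans.
    def match(candidate_features, observed):
        predicted = model.predict(candidate_features)
        return np.mean([np.corrcoef(p, o)[0, 1] for p, o in zip(predicted, observed)])

    passage_a = scan_features[test]                  # features of the passage actually read
    passage_b = rng.normal(size=passage_a.shape)     # features of some other candidate passage
    observed = fmri[test]
    print("passage A" if match(passage_a, observed) > match(passage_b, observed) else "passage B")

With random data the choice is a coin flip; it is the real structure linking word features to brain activation that lets the published model reach 74 percent accuracy.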

By Gabe Bergado It's not news that reading has countless benefits: Poetry stimulates parts of the brain linked to memory and sparks self-reflection; kids who read the Harry Potter books tend to be better people. But what about people who only read newspapers? Or people who scan Twitter all day? Are those readers' brains different from those of literary junkies who peruse the pages of 19th century fictional classics? Short answer: Yes — reading enhances connectivity in the brain. But readers of fiction? They're a special breed. The study: A 2013 Emory University study looked at the brains of fiction readers. Researchers compared the brains of people after they read to the brains of people who didn't read. The brains of the readers — they read Robert Harris' Pompeii over a nine-day period at night — showed more activity in certain areas than those who didn't read. Specifically, researchers found heightened connectivity in the left temporal cortex, part of the brain typically associated with understanding language. The researchers also found increased connectivity in the central sulcus of the brain, the primary sensory region, which helps the brain visualize movement. When you visualize yourself scoring a touchdown while playing football, you can actually somewhat feel yourself in the action. A similar process happens when you envision yourself as a character in a book: You can take on the emotions they are feeling. It may sound like hooey, but it's true: Fiction readers make great friends as they tend to be more aware of others' emotions. Copyright © Mic Network Inc.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 11: Emotions, Aggression, and Stress
Link ID: 20385 - Posted: 12.03.2014

By Jason G. Goldman A sharp cry pierces the air. Soon a worried mother deer approaches the source of the sound, expecting to find her fawn. But the sound is coming from a speaker system, and the call isn't that of a baby deer at all. It's an infant fur seal's. Because deer and seals do not live in the same habitats, mother deer should not know how baby seal screams sound, reasoned biologists Susan Lingle of the University of Winnipeg and Tobias Riede of Midwestern University, who were running the acoustic experiment. So why did a mother deer react with concern? Over two summers, the researchers treated herds of mule deer and white-tailed deer on a Canadian farm to modified recordings of the cries of a wide variety of infant mammals—elands, marmots, bats, fur seals, sea lions, domestic cats, dogs and humans. By observing how mother deer responded, Lingle and Riede discovered that as long as the fundamental frequency was similar to that of their own infants' calls, those mothers approached the speaker as if they were looking for their offspring. Such a reaction suggests deep commonalities among the cries of most young mammals. (The mother deer did not show concern for white noise, birdcalls or coyote barks.) Lingle and Riede published their findings in October in the American Naturalist. Researchers had previously proposed that sounds made by different animals during similar experiences—when they were in pain, for example—would share acoustic traits. “As humans, we often ‘feel’ for the cry of young animals,” Lingle says. That empathy may arise because emotions are expressed in vocally similar ways among mammals. © 2014 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20368 - Posted: 11.29.2014

by Aviva Rutkin THERE is only one real rule to conversing with a baby: talking is better than not talking. But that one rule can make a lifetime of difference. That's the message that the US state of Georgia hopes to send with Talk With Me Baby, a public health programme devoted to the art of baby talk. Starting in January, nurses will be trained in the best way to speak to babies to help them learn language, based on what the latest neuroscience says. Then they, along with teachers and nutritionists, will model this good behaviour for the parents they meet. Georgia hopes to expose every child born in 2015 in the Atlanta area to this speaking style; by 2018, the hope is to reach all 130,000 or so newborns across the state. Talk With Me Baby is the latest and largest attempt to provide "language nutrition" to infants in the US – a rich quantity and variety of words supplied at a critical time in the brain's development. Similar initiatives have popped up in Providence, Rhode Island, where children have been wearing high-tech vests that track every word they hear, and Hollywood, where the Clinton Foundation has encouraged television shows like Parenthood and Orange is the New Black to feature scenes demonstrating good baby talk. "The idea is that language is as important to the brain as food is to physical growth," says Arianne Weldon, director of Get Georgia Reading, one of several partner organisations involved in Talk With Me Baby. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 20367 - Posted: 11.29.2014

Mo Costandi A team of neuroscientists in America say they have rediscovered an important neural pathway that was first described in the late nineteenth century but then mysteriously disappeared from the scientific literature until very recently. In a study published today in Proceedings of the National Academy of Sciences, they confirm that the prominent white matter tract is present in the human brain, and argue that it plays an important and unique role in the processing of visual information. The vertical occipital fasciculus (VOF) is a large flat bundle of nerve fibres that forms long-range connections between sub-regions of the visual system at the back of the brain. It was originally discovered by the German neurologist Carl Wernicke, who had by then published his classic studies of stroke patients with language deficits, and was studying neuroanatomy in Theodor Meynert’s laboratory at the University of Vienna. Wernicke saw the VOF in slices of monkey brain, and included it in his 1881 brain atlas, naming it the senkrechte occipitalbündel, or ‘vertical occipital bundle’. Meynert - himself a pioneering neuroanatomist and psychiatrist, whose other students included Sigmund Freud and Sergei Korsakov - refused to accept Wernicke’s discovery, however. He had already described the brain’s white matter tracts, and had arrived at the general principle that they are oriented horizontally, running mostly from front to back within each hemisphere. But the pathway Wernicke had described ran vertically. Another of Meynert’s students, Heinrich Obersteiner, identified the VOF in the human brain, and mentioned it in his 1888 textbook, calling it the senkrechte occipitalbündel in one illustration, and the fasciculus occipitalis perpendicularis in another. So, too, did Heinrich Sachs, a student of Wernicke’s, who labeled it the stratum profundum convexitatis in his 1892 white matter atlas. © 2014 Guardian News and Media Limited

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20333 - Posted: 11.20.2014

By David Shultz WASHINGTON, D.C.—Reciting the days of the week is a trivial task for most of us, but then, most of us don’t have cooling probes in our brains. Scientists have discovered that by applying a small electrical cooling device to the brain during surgery they could slow down and distort speech patterns in patients. When the probe was activated in some regions of the brain associated with language and talking—like the premotor cortex—the patients’ speech became garbled and distorted, the team reported here yesterday at the Society for Neuroscience’s annual meeting. As scientists moved the probe to other speech regions, such as the pars opercularis, the distortion lessened, but speech patterns slowed. (These zones and their effects are displayed graphically above.) “What emerged was this orderly map,” says team leader Michael Long, a neuroscientist at the New York University School of Medicine in New York City. The results suggest that one region of the brain organizes the rhythm and flow of language while another is responsible for the actual articulation of the words. The team was even able to map which word sounds were most likely to be elongated when the cooling probe was applied. “People preferentially stretched out their vowels,” Long says. “Instead of Tttuesssday, you get Tuuuesdaaay.” The technique is similar to the electrical probe stimulation that researchers have been using to identify the function of various brain regions, but the shocks often trigger epileptic seizures in sensitive patients. Long contends that the cooling probe is completely safe, and that in the future it may help neurosurgeons decide where to cut and where not to cut during surgery. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20328 - Posted: 11.20.2014

by Helen Thomson As you read this, your neurons are firing – that brain activity can now be decoded to reveal the silent words in your head. TALKING to yourself used to be a strictly private pastime. That's no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world. "If you're reading text in a newspaper or a book, you hear a voice in your own head," says Brian Pasley at the University of California, Berkeley. "We're trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak." When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words. In a previous study, Pasley and his colleagues recorded brain activity in people who already had electrodes implanted in their brain to treat epilepsy, while they listened to speech. The team found that certain neurons in the brain's temporal lobe were only active in response to certain aspects of sound, such as a specific frequency. One set of neurons might only react to sound waves that had a frequency of 1000 hertz, for example, while another set only responded to those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode the words heard based on neural activity alone (PLoS Biology, doi.org/fzv269). (A toy sketch of this kind of decoding follows this entry.) © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20267 - Posted: 11.01.2014
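
The idea described in the item above is that different neural populations track different sound frequencies, so the sound a person heard can be estimated from their combined activity. Below is a minimal sketch of that kind of linear spectrogram decoding, using simulated data and made-up variable names rather than the Berkeley recordings or Pasley's actual algorithm:

    import numpy as np
    from sklearn.linear_model import Ridge

    # Simulated data: each electrode responds preferentially to certain
    # frequency bands of the sound (e.g. ~1000 Hz vs ~2000 Hz populations).
    rng = np.random.default_rng(1)
    n_times, n_bands, n_electrodes = 1000, 32, 64
    spectrogram = np.abs(rng.normal(size=(n_times, n_bands)))   # sound energy per band over time
    tuning = rng.normal(size=(n_bands, n_electrodes))           # each electrode's band preferences
    neural = spectrogram @ tuning + 0.1 * rng.normal(size=(n_times, n_electrodes))

    # Fit a linear decoder from neural activity back to the spectrogram on
    # training sound, then reconstruct the spectrogram of held-out sound.
    train, test = slice(0, 800), slice(800, 1000)
    decoder = Ridge(alpha=1.0).fit(neural[train], spectrogram[train])
    reconstructed = decoder.predict(neural[test])

    # How well does each frequency band of the reconstruction track the truth?
    band_corr = [np.corrcoef(reconstructed[:, b], spectrogram[test, b])[0, 1]
                 for b in range(n_bands)]
    print(f"mean band correlation: {np.mean(band_corr):.2f}")

Reconstructing a spectrogram is not the same as recovering words, but a good spectrogram estimate can then be compared against candidate words or resynthesized as audio, which is broadly the goal of the speech prosthesis the article describes.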

By Virginia Morell Human fetuses are clever students, able to distinguish male from female voices and the voices of their mothers from those of strangers between 32 and 39 weeks after conception. Now, researchers have demonstrated that the embryos of the superb fairy-wren (Malurus cyaneus, pictured), an Australian songbird, also learn to discriminate among the calls they hear. The scientists played 1-minute recordings to 43 fairy-wren eggs collected from nests in the wild. The eggs were between days 9 and 13 of a 13- to 14-day incubation period. The sounds included white noise, a contact call of a winter wren, or a female fairy-wren’s incubation call. Those embryos that listened to the fairy-wrens’ incubation calls and the contact calls of the winter wrens lowered their heart rates, a sign that they were learning to discriminate between the calls of a different species and those of their own kind, the researchers report online today in the Proceedings of the Royal Society B. (None showed this response to the white noise.) Thus, even before hatching, these small birds’ brains are engaged in tasks requiring attention, learning, and possibly memory—the first time embryonic learning has been seen outside humans, the scientists say. The behavior is key because fairy-wren embryos must learn a password from their mothers’ incubation calls; otherwise, they’re less successful at soliciting food from their parents after hatching. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 20254 - Posted: 10.29.2014

By Virginia Morell Two years ago, scientists showed that dolphins imitate the sounds of whales. Now, it seems, whales have returned the favor. Researchers analyzed the vocal repertoires of 10 captive orcas (Orcinus orca), three of which lived with bottlenose dolphins (Tursiops truncatus) and the rest with their own kind. Of the 1551 vocalizations the latter seven orcas made, more than 95% were the typical pulsed calls of killer whales. In contrast, the three orcas that had only dolphins as pals busily whistled and emitted dolphinlike click trains and terminal buzzes, the scientists report in the October issue of The Journal of the Acoustical Society of America. The findings make orcas one of the few species of animals that, like humans, are capable of vocal learning—a talent considered a key underpinning of language. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20173 - Posted: 10.08.2014

By Meeri Kim From ultrasonic bat chirps to eerie whale songs, the animal kingdom is a noisy place. While some sounds might have meaning — typically something like "I'm a male, aren't I great?" — no other creatures have a true language except for us. Or do they? A new study on animal calls has found that the patterns of barks, whistles, and clicks from seven different species appear to be more complex than previously thought. The researchers used mathematical tests to see how well the sequences of sounds fit models of varying complexity. In fact, five species, including the killer whale and free-tailed bat, had communication behaviors that were definitively more language-like than random. The study was published online Wednesday in the Proceedings of the Royal Society B. "We're still a very, very long way from understanding this transition from animal communication to human language, and it's a huge mystery at the moment," said study author and zoologist Arik Kershenbaum, who did the work at the National Institute for Mathematical and Biological Synthesis. "These types of mathematical analyses can give us some clues." While the most complicated mathematical models come closer to our own speech patterns, the simple models — called Markov processes — are more random and have historically been thought to fit animal calls. "A Markov process is where you have a sequence of numbers or letters or notes, and the probability of any particular note depends only on the few notes that have come before," said Kershenbaum. So the next note could depend on the last two or 10 notes before it, but there is a defined window of history that can be used to predict what happens next. "What makes human language special is that there's no finite limit as to what comes next," he said. (A toy sketch of the Markov idea follows this entry.)

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 19987 - Posted: 08.22.2014
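
To make the Markov idea from the item above concrete, the toy sketch below (an invented note sequence, not the study's data or its actual statistical tests) fits k-th order transition counts to a sequence of call "notes" and compares different history lengths by smoothed held-out log-likelihood. The study's analysis goes further, asking whether any finite window captures the structure, but the basic ingredient is the same: counting what follows each context.

    from collections import Counter, defaultdict
    import math

    def fit_markov(sequence, order):
        # Count how often each note follows each context of length `order`.
        counts = defaultdict(Counter)
        for i in range(order, len(sequence)):
            counts[tuple(sequence[i - order:i])][sequence[i]] += 1
        return counts

    def log_likelihood(sequence, counts, order, alphabet):
        # Add-one-smoothed log-likelihood of a sequence under the fitted model.
        total = 0.0
        for i in range(order, len(sequence)):
            c = counts.get(tuple(sequence[i - order:i]), Counter())
            total += math.log((c[sequence[i]] + 1) / (sum(c.values()) + len(alphabet)))
        return total

    # Toy "call" sequence: letters stand in for whistle/click/buzz types.
    calls = list("abcabcabdabcabcabdabcabc")
    alphabet = set(calls)
    train, test = calls[:18], calls[18:]

    # Score held-out data under models with progressively longer memories.
    for order in (1, 2, 3):
        model = fit_markov(train, order)
        print(order, round(log_likelihood(test, model, order, alphabet), 2))

If lengthening the window keeps improving the held-out score, more history matters; a sequence that never stops benefiting from extra history is the kind of non-Markov, more language-like pattern the study reports for some species.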

By Jane C. Hu Last week, people around the world mourned the death of beloved actor and comedian Robin Williams. According to the Gorilla Foundation in Woodside, California, we were not the only primates mourning. A press release from the foundation announced that Koko the gorilla—the main subject of its research on ape language ability, capable in sign language and a celebrity in her own right—“was quiet and looked very thoughtful” when she heard about Williams’ death, and later became “somber” as the news sank in. Williams, described in the press release as one of Koko’s “closest friends,” spent an afternoon with the gorilla in 2001. The foundation released a video showing the two laughing and tickling one another. At one point, Koko lifts up Williams’ shirt to touch his bare chest. In another scene, Koko steals Williams’ glasses and wears them around her trailer. These clips resonated with people. In the days after Williams’ death, the video amassed more than 3 million views. Many viewers were charmed and touched to learn that a gorilla forged a bond with a celebrity in just an afternoon and, 13 years later, not only remembered him and understood the finality of his death, but grieved. The foundation hailed the relationship as a triumph over “interspecies boundaries,” and the story was covered in outlets from BuzzFeed to the New York Post to Slate. The story is a prime example of selective interpretation, a critique that has plagued ape language research since its first experiments. Was Koko really mourning Robin Williams? How much are we projecting ourselves onto her and what are we reading into her behaviors? Animals perceive the emotions of the humans around them, and the anecdotes in the release could easily be evidence that Koko was responding to the sadness she sensed in her human caregivers. © 2014 The Slate Group LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 19986 - Posted: 08.22.2014

By Victoria Gill Science reporter, BBC News Very mobile ears help many animals direct their attention to the rustle of a possible predator. But a study in horses suggests they also pay close attention to the direction another's ears are pointing in order to work out what they are thinking. Researchers from the University of Sussex say these swivelling ears have become a useful communication tool. Their findings are published in the journal Current Biology. The research team studies animal behaviour to build up a picture of how communication and social skills evolved. "We're interested in how [they] communicate," said lead researcher Jennifer Wathan. "And being sensitive to what another individual is thinking is a fundamental skill from which other [more complex] skills develop." Ms Wathan and her colleague Prof Karen McComb set up a behavioural experiment where 72 individual horses had to use visual cues from another horse in order to choose where to feed. They led each horse to a point where it had to select one of two buckets. On a wall behind this decision-making spot was a life-sized photograph of a horse's head facing either to the left or the right. In some of the trials, the photographed horse's ears or eyes were covered. If the ears and eyes of the horse in the picture were visible, the horses being tested would choose the bucket towards which its gaze - and its ears - were directed. If the horse in the picture had either its eyes or its ears covered, the horse being tested would just choose a feed bucket at random. Like many mammals that are hunted by predators, horses can rotate their ears through almost 180 degrees - but Ms Wathan said that in our "human-centric" view of the world, we had overlooked the importance of these very mobile ears in animal communication. BBC © 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 19914 - Posted: 08.05.2014

Nishad Karim African penguins communicate feelings such as hunger, anger and loneliness through six distinctive vocal calls, according to scientists who have observed the birds' behaviour in captivity. The calls of the "jackass" penguin were identified by researchers at the University of Turin, Italy. Four are exclusive to adults and two are exclusive to juveniles and chicks. The study, led by Dr Livio Favaro, found that adult penguins produce distinctive short calls to express their isolation from groups or their mates, known as "contact" calls, or to show aggression during fights or confrontations, known as "agonistic" calls. They also observed an "ecstatic display song", sung by single birds during the mating season, and the "mutual display song", a custom duet sung by nesting partners to each other. Juveniles and chicks produce calls relating to hunger. "There are two begging calls; the first one is where chicks utter 'begging peeps', short cheeps when they want food from adults, and the second one we've called 'begging moan', which is uttered by juveniles when they're out of the nest, but still need food from adults," said Favaro. The team made simultaneous video and audio recordings of 48 captive African penguins at the Zoom Torino zoo over 104 non-consecutive days. They then compared the audio recordings with the video footage of the birds' behaviour. Additional techniques, including visual inspection of spectrograms, produced statistical and quantifiable results. The research is published in the journal PLOS One. © 2014 Guardian News and Media Limited

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 19905 - Posted: 07.31.2014

By PAUL VITELLO The conventional wisdom among animal scientists in the 1950s was that birds were genetically programmed to sing, that monkeys made noise to vent their emotions, and that animal communication, in general, was less like human conversation than like a bodily function. Then Peter Marler, a British-born animal behaviorist, showed that certain songbirds not only learned their songs, but also learned to sing in a dialect peculiar to the region in which they were born. And that a vervet monkey made one noise to warn its troop of an approaching leopard, another to report the sighting of an eagle, and a third to alert the group to a python on the forest floor. These and other discoveries by Dr. Marler, who died July 5 in Winters, Calif., at 86, heralded a sea change in the study of animal intelligence. At a time when animal behavior was seen as a set of instinctive, almost robotic responses to environmental stimuli, he was one of the first scientists to embrace the possibility that some animals, like humans, were capable of learning and transmitting their knowledge to other members of their species. His hypothesis attracted a legion of new researchers in ethology, as animal behavior research is also known, and continues to influence thinking about cognition. Dr. Marler, who made his most enduring contributions in the field of birdsong, wrote more than a hundred papers during a long career that began at Cambridge University, where he received his Ph.D. in zoology in 1954 (the second of his two Ph.D.s.), and that took him around the world conducting field research while teaching at a succession of American universities. Dr. Marler taught at the University of California, Berkeley, from 1957 to 1966; at Rockefeller University in New York from 1966 to 1989; and at the University of California, Davis, where he led animal behavior research, from 1989 to 1994. He was an emeritus professor there at his death. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 19885 - Posted: 07.28.2014

By Meeri Kim Babies start with simple vowel sounds — oohs and aahs. Mere months later, the cooing turns into babbling — "bababa" — showing off a newfound grasp of consonants. A new study has found that a key part of the brain involved in forming speech is firing away in babies as they listen to voices around them. This may represent a sort of mental rehearsal leading up to the true milestone that occurs after only a year of life: baby's first words. Any parent knows how fast babies learn how to comprehend and use language. The skill develops so rapidly and seemingly without much effort, but how do they do it? Researchers at the University of Washington are a step closer to unraveling the mystery of how babies learn how to speak. They had a group of 7- and 11-month-old infants listen to a series of syllables while sitting in a brain scanner. Not only did the auditory areas of their brains light up as expected but so did a region crucial to forming higher-level speech, called Broca's area. [Photo caption: A year-old baby sits in a magnetoencephalography (MEG) scanner, a noninvasive approach to measuring brain activity, and listens to speech sounds like "da" and "ta" played over headphones while researchers record her brain responses. (Institute for Learning and Brain Sciences, University of Washington)] These findings may suggest that even before babies utter their first words, they may be mentally exercising the pivotal parts of their brains in preparation. Study author and neuroscientist Patricia Kuhl says that her results reinforce the belief that talking and reading to babies from birth is beneficial for their language development, along with exaggerated speech and mouth movements ("Hiii cuuutie! How are youuuuu?"). © 1996-2014 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19858 - Posted: 07.21.2014

By Helen Briggs Health editor, BBC News website The same genes drive maths and reading ability, research suggests. Around half of the genes that influence a child's aptitude for reading also play a role in how easily they learn maths, say scientists. The study of 12-year-old British twins from 3,000 families, reported in Nature Communications, adds to the debate about the role of genes in education. An education expert said the work had little relevance for public policy as specific genes had not been identified. Past research suggests both nature and nurture have a similar impact on how children perform in exams. One study found genes explained almost 60% of the variation in GCSE exam results. However, little is known about which genes are involved and how they interact. The new research suggests a substantial overlap between the genetic variations that influence mathematics and reading, say scientists from UCL, the University of Oxford and King's College London. But non-genetic factors - such as parents, schools and teachers - are also important, said Prof Robert Plomin of King's College London, who worked on the study. "The study does not point to specific genes linked to literacy or numeracy, but rather suggests that genetic influence on complex traits, like learning abilities, and common disorders, like learning disabilities, is caused by many genes of very small effect size," he said. BBC © 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19808 - Posted: 07.09.2014

Learning a second language benefits the brain in ways that can pay off later in life, suggests a deepening field of research that specializes in the relationship between bilingualism and cognition. In one large Scottish study, researchers discovered archival data on 835 native speakers of English who were born in Edinburgh in 1936. The participants had been given an intelligence test at age 11 as part of standard British educational policy and many were retested in their early 70s. Those who spoke two or more languages had significantly better cognitive abilities on certain tasks compared with what would be expected from their IQ test scores at age 11, Dr. Thomas Bak of the Centre for Cognitive Aging and Cognitive Epidemiology at the University of Edinburgh reported in the journal Annals of Neurology. "Our results suggest a protective effect of bilingualism against age-related cognitive decline," independently of IQ, Bak and his co-authors concluded. It was a watershed study in 1962 by Elizabeth Peal and Wallace Lambert at McGill University in Montreal that turned conventional thinking on bilingualism on its head and set the rationale for French immersion in Canada. Psychologists at York University in Toronto have also been studying the effect of bilingualism on the brain across the lifespan, including dementia. They've learned how people who speak a second language outperform those with just one on tasks that tap executive function such as attention, selection and inhibition. Those are the high-level cognitive processes we use to multitask as we drive on the highway and juggle remembering the exit and monitoring our speed without getting distracted by billboards. © CBC 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19781 - Posted: 07.02.2014