Links for Keyword: Language

Links 1 - 20 of 492

By PAUL VITELLO The conventional wisdom among animal scientists in the 1950s was that birds were genetically programmed to sing, that monkeys made noise to vent their emotions, and that animal communication, in general, was less like human conversation than like a bodily function. Then Peter Marler, a British-born animal behaviorist, showed that certain songbirds not only learned their songs, but also learned to sing in a dialect peculiar to the region in which they were born. And that a vervet monkey made one noise to warn its troop of an approaching leopard, another to report the sighting of an eagle, and a third to alert the group to a python on the forest floor. These and other discoveries by Dr. Marler, who died July 5 in Winters, Calif., at 86, heralded a sea change in the study of animal intelligence. At a time when animal behavior was seen as a set of instinctive, almost robotic responses to environmental stimuli, he was one of the first scientists to embrace the possibility that some animals, like humans, were capable of learning and transmitting their knowledge to other members of their species. His hypothesis attracted a legion of new researchers in ethology, as animal behavior research is also known, and continues to influence thinking about cognition. Dr. Marler, who made his most enduring contributions in the field of birdsong, wrote more than a hundred papers during a long career that began at Cambridge University, where he received his Ph.D. in zoology in 1954 (the second of his two Ph.D.s.), and that took him around the world conducting field research while teaching at a succession of American universities. Dr. Marler taught at the University of California, Berkeley, from 1957 to 1966; at Rockefeller University in New York from 1966 to 1989; and at the University of California, Davis, where he led animal behavior research, from 1989 to 1994. He was an emeritus professor there at his death. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 19885 - Posted: 07.28.2014

By Meeri Kim Babies start with simple vowel sounds — oohs and aahs. Mere months later, the cooing turns into babbling — “bababa” — showing off a newfound grasp of consonants. A new study has found that a key part of the brain involved in forming speech is firing away in babies as they listen to voices around them. This may represent a sort of mental rehearsal leading up to the true milestone that occurs after only a year of life: baby’s first words. Any parent knows how fast babies learn how to comprehend and use language. The skill develops so rapidly and seemingly without much effort, but how do they do it? Researchers at the University of Washington are a step closer to unraveling the mystery of how babies learn how to speak. They had a group of 7- and 11-month-old infants listen to a series of syllables while sitting in a brain scanner. Not only did the auditory areas of their brains light up as expected, but so did a region crucial to forming higher-level speech, called Broca’s area. [Image caption: A year-old baby sits in a brain scanner that uses magnetoencephalography -- a noninvasive approach to measuring brain activity. The baby listens to speech sounds like "da" and "ta" played over headphones while researchers record her brain responses. (Institute for Learning and Brain Sciences, University of Washington)] These findings may suggest that even before babies utter their first words, they may be mentally exercising the pivotal parts of their brains in preparation. Study author and neuroscientist Patricia Kuhl says that her results reinforce the belief that talking and reading to babies from birth is beneficial for their language development, along with exaggerated speech and mouth movements (“Hiii cuuutie! How are youuuuu?”). © 1996-2014 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19858 - Posted: 07.21.2014

By Helen Briggs, Health editor, BBC News website The same genes drive maths and reading ability, research suggests. Around half of the genes that influence a child's aptitude for reading also play a role in how easily they learn maths, say scientists. The study of 12-year-old British twins from 3,000 families, reported in Nature Communications, adds to the debate about the role of genes in education. An education expert said the work had little relevance for public policy as specific genes had not been identified. Past research suggests both nature and nurture have a similar impact on how children perform in exams. One study found genes explained almost 60% of the variation in GCSE exam results. However, little is known about which genes are involved and how they interact. The new research suggests a substantial overlap between the genetic variations that influence mathematics and reading, say scientists from UCL, the University of Oxford and King's College London. But non-genetic factors - such as parents, schools and teachers - are also important, said Prof Robert Plomin of King's College London, who worked on the study. "The study does not point to specific genes linked to literacy or numeracy, but rather suggests that genetic influence on complex traits, like learning abilities, and common disorders, like learning disabilities, is caused by many genes of very small-effect size," he said. BBC © 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19808 - Posted: 07.09.2014

Learning a second language benefits the brain in ways that can pay off later in life, suggests a deepening field of research that specializes in the relationship between bilingualism and cognition. In one large Scottish test, researchers discovered archival data on 835 native speakers of English who were born in Edinburgh in 1936. The participants had been given an intelligence test at age 11 as part of standard British educational policy and many were retested in their early 70s. Those who spoke two or more languages had significantly better cognitive abilities on certain tasks compared with what would be expected from their IQ test scores at age 11, Dr. Thomas Bak of the Centre for Cognitive Aging and Cognitive Epidemiology at the University of Edinburgh reported in the journal Annals of Neurology. "Our results suggest a protective effect of bilingualism against age-related cognitive decline," independently of IQ, Bak and his co-authors concluded. It was a watershed study in 1962 by Elizabeth Peal and Wallace Lambert at McGill University in Montreal that turned conventional thinking on bilingualism on its head and set the rationale for French immersion in Canada. Psychologists at York University in Toronto have also been studying the effect of bilingualism on the brain across the lifespan, including dementia. They’ve learned how people who speak a second language outperform those with just one on tasks that tap executive function such as attention, selection and inhibition. Those are the high-level cognitive processes we use to multitask as we drive on the highway and juggle remembering the exit and monitoring our speed without getting distracted by billboards. © CBC 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19781 - Posted: 07.02.2014

Carl Zimmer A novelist scrawling away in a notebook in seclusion may not seem to have much in common with an NBA player doing a reverse layup on a basketball court before a screaming crowd. But if you could peer inside their heads, you might see some striking similarities in how their brains were churning. That’s one of the implications of new research on the neuroscience of creative writing. For the first time, neuroscientists have used fMRI scanners to track the brain activity of both experienced and novice writers as they sat down — or, in this case, lay down — to turn out a piece of fiction. The researchers, led by Martin Lotze of the University of Greifswald in Germany, observed a broad network of regions in the brain working together as people produced their stories. But there were notable differences between the two groups of subjects. The inner workings of the professionally trained writers in the bunch, the scientists argue, showed some similarities to people who are skilled at other complex actions, like music or sports. The research is drawing strong reactions. Some experts praise it as an important advance in understanding writing and creativity, while others criticize the research as too crude to reveal anything meaningful about the mysteries of literature or inspiration. Dr. Lotze has long been intrigued by artistic expression. In previous studies, he has observed the brains of piano players and opera singers, using fMRI scanners to pinpoint regions that become unusually active in the brain. Needless to say, that can be challenging when a subject is singing an aria. Scanners are a lot like 19th-century cameras: They can take very sharp pictures, if their subject remains still. To get accurate data, Dr. Lotze has developed software that can take into account fluctuations caused by breathing or head movements. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 19756 - Posted: 06.21.2014

By Indre Viskontas He might be fictional. But the gigantic Hodor, a character in the blockbuster Game of Thrones series, nonetheless sheds light on something very much in the realm of fact: how our ability to speak emerges from a complex ball of neurons, and how certain brain-damaged patients can lose very specific aspects of that ability. According to George R.R. Martin, who wrote the epic books that inspired the HBO show, the 7-foot-tall Hodor could only say one word—"Hodor"—and everyone therefore tended to assume that was his name. Here's one passage about Hodor from the first novel in Martin's series: Theon Greyjoy had once commented that Hodor did not know much, but no one could doubt that he knew his name. Old Nan had cackled like a hen when Bran told her that, and confessed that Hodor's real name was Walder. No one knew where "Hodor" had come from, she said, but when he started saying it, they started calling him by it. It was the only word he had. Yet it's clear that Hodor can understand much more than he can say; he's able to follow instructions, anticipate who needs help, and behave in socially appropriate ways (mostly). Moreover, he says this one word in many different ways, implying very different meanings. So what might be going on in Hodor's brain? Hodor's combination of impoverished speech production with relatively normal comprehension is a classic, albeit particularly severe, presentation of expressive aphasia, a neurological condition usually caused by a localized stroke in the front of the brain, on the left side. Some patients, however, have damage to that part of the brain from other causes, such as a tumor, or a blow to the head. ©2014 Mother Jones

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 19753 - Posted: 06.21.2014

by Bethany Brookshire Human vocal cords can produce an astonishing array of sounds: shrill and fearful, low and sultry, light and breathy, loud and firm. The slabs of muscle in our throat make the commanding sound of a powerful bass and a baby’s delightful, gurgling laugh. There are voices that must be taken seriously, voices that play and voices that seduce. And then there’s vocal fry. Bringing to mind celebrity voices like Kim Kardashian or Zooey Deschanel, vocal fry is a result of pushing the end of words and sentences into the lowest vocal register. When forcing the voice low, the vocal folds in the throat vibrate irregularly, allowing air to slip through. The result is a low, sizzling rattle underneath the tone. Recent studies have documented the growing popularity of vocal fry among young women in the United States. But popular sizzle in women’s speech might be frying their job prospects, a new study reports. The findings suggest that people with this vocal affectation might want to hold the fry on the job market — and that people on the hiring side of the table might want to examine their biases. Vocal fry has been recognized since the 1970s, but now it’s thought of as a fad. Study coauthor Casey Klofstad, a political scientist at the University of Miami in Coral Gables, Fla., says that the media attention surrounding vocal fry generated a lot of speculation. “Is it a good thing? Is it bad? It gave us a clear question we could test,” he says. Specifically, they wanted to study whether vocal fry had positive or negative effects on how people who used the technique were perceived. Led by Rindy Anderson from Duke University, the researchers recorded seven young men and seven young women speaking the phrase “Thank you for considering me for this opportunity.” Each person spoke the phrase twice, once with vocal fry and once without. Then the authors played the recordings to 800 participants ages 18 to 65, asking them to make judgments about the candidates based on voice alone. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 19718 - Posted: 06.10.2014

Learning a second language can have a positive effect on the brain, even if it is taken up in adulthood, a University of Edinburgh study suggests. Researchers found that reading, verbal fluency and intelligence were improved in a study of 262 people first tested at age 11 and again in their seventies. A previous study suggested that being bilingual could delay the onset of dementia by several years. The study is published in Annals of Neurology. The big question in this study was whether learning a new language improved cognitive functions or whether individuals with better cognitive abilities were more likely to become bilingual. Dr Thomas Bak, from the Centre for Cognitive Ageing and Cognitive Epidemiology at the University of Edinburgh, said he believed he had found the answer. Using data from intelligence tests on 262 Edinburgh-born individuals at the age of 11, the study looked at how their cognitive abilities had changed when they were tested again in their seventies. The research was conducted between 2008 and 2010. All participants said they were able to communicate in at least one language other than English. Of that group, 195 learned the second language before the age of 18, and 65 learned it after that time. The findings indicate that those who spoke two or more languages had significantly better cognitive abilities compared to what would have been expected from their baseline test. The strongest effects were seen in general intelligence and reading. The effects were present in those who learned their second language early, as well as later in life. BBC © 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19679 - Posted: 06.02.2014

Brian Owens If you think you know what you just said, think again. People can be tricked into believing they have just said something they did not, researchers report this week. The dominant model of how speech works is that it is planned in advance — speakers begin with a conscious idea of exactly what they are going to say. But some researchers think that speech is not entirely planned, and that people know what they are saying in part through hearing themselves speak. So cognitive scientist Andreas Lind and his colleagues at Lund University in Sweden wanted to see what would happen if someone said one word, but heard themselves saying another. “If we use auditory feedback to compare what we say with a well-specified intention, then any mismatch should be quickly detected,” he says. “But if the feedback is instead a powerful factor in a dynamic, interpretative process, then the manipulation could go undetected.” In Lind’s experiment, participants took a Stroop test — in which a person is shown, for example, the word ‘red’ printed in blue and is asked to name the colour of the type (in this case, blue). During the test, participants heard their responses through headphones. The responses were recorded so that Lind could occasionally play back the wrong word, giving participants auditory feedback of their own voice saying something different from what they had just said. Lind chose the words ‘grey’ and ‘green’ (grå and grön in Swedish) to switch, as they sound similar but have different meanings. © 2014 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 14: Attention and Consciousness
Link ID: 19562 - Posted: 05.03.2014

Does reading faster mean reading better? That’s what speed-reading apps claim, promising to boost not just the number of words you read per minute, but also how well you understand a text. There’s just one problem: The same thing that speeds up reading actually gets in the way of comprehension, according to a new study. When you read at your natural pace, your eyes move back and forth across a sentence, rather than plowing straight through to the end. Apps like Spritz or the aptly named Speed Read are built around the idea that these eye movements, called saccades, are a redundant waste of time. It’s more efficient, their designers claim, to present words one at a time in a fixed spot on a screen, discouraging saccades and helping you get through a text more quickly. This method, called rapid serial visual presentation (RSVP), has been controversial since the 1980s, when tests showed it impaired comprehension, though researchers weren’t quite sure why. With a new crop of speed-reading products on the market, psychologists decided to dig a bit more and uncovered a simple explanation for RSVP’s flaw: Every so often, we need to scan backward and reread for a better grasp of the material. Researchers demonstrated that need by presenting 40 college students with ambiguous, unpunctuated sentences ("While the man drank the water that was clear and cold overflowed from the toilet”) while following their subjects’ gaze with an eye-tracking camera. Half the time, the team crossed out words participants had already read, preventing them from rereading (“xxxxx xxx xxx drank the water …”). Following up with basic yes-no questions about each sentence’s content, they found that comprehension dropped by about 25% in trials that blocked rereading versus those that didn’t, the researchers report online this month in Psychological Science. Crucially, the drop was about the same when subjects could, but simply hadn’t, reread parts of a sentence. Nor did the results differ much when using ambiguous sentences or their less confusing counterparts (“While the man slept the water …”). Turns out rereading isn’t a waste of time—it’s essential for understanding. © 2014 American Association for the Advancement of Science.
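
To make the RSVP idea concrete, here is a minimal, illustrative sketch of how such an app paces words. It is not code from Spritz, Speed Read, or the study above; the function name, the 20-character display width and the default speed are arbitrary choices for the example.

    # Minimal RSVP (rapid serial visual presentation) sketch: each word is
    # flashed at a fixed spot on the screen, so the reader never needs to make
    # a saccade -- and also cannot scan back to reread. Illustrative only.
    import sys
    import time

    def rsvp(text, wpm=250):
        """Flash the words of `text` one at a time, paced at `wpm` words per minute."""
        delay = 60.0 / wpm  # seconds each word stays on screen
        for word in text.split():
            # '\r' returns the cursor to the start of the line, so every word
            # appears in the same place; padding blanks out longer previous words.
            sys.stdout.write("\r" + word.ljust(20))
            sys.stdout.flush()
            time.sleep(delay)
        sys.stdout.write("\n")

    if __name__ == "__main__":
        rsvp("While the man drank the water that was clear and cold "
             "overflowed from the toilet", wpm=250)

Because each word vanishes as soon as the next appears, there is no way to glance back and reread an earlier phrase, which is exactly the behavior the eye-tracking experiment found readers rely on for full comprehension.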

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 15: Language and Our Divided Brain
Link ID: 19534 - Posted: 04.26.2014

I am a sociologist by training. I come from the academic world, reading scholarly articles on topics of social import, but they're almost always boring, dry and quickly forgotten. Yet I can't count how many times I've gone to a movie or a theater production, or read a novel, and been jarred into seeing something differently, learned something new, felt deep emotions and retained the insights gained. I know from both my research and casual conversations with people in daily life that my experiences are echoed by many. The arts can tap into issues that are otherwise out of reach and reach people in meaningful ways. This realization brought me to arts-based research (ABR). Arts-based research is an emergent paradigm whereby researchers across the disciplines adapt the tenets of the creative arts in their social research projects. Arts-based research, a term first coined by Eliot Eisner at Stanford University in the early 90s, is based on the assumption that art can teach us in ways that other forms cannot. Scholars can take interview or survey research, for instance, and represent it through art. I've written two novels based on sociological interview research. Sometimes researchers use the arts during data collection, involving research participants in the art-making process, such as drawing their response to a prompt rather than speaking. The turn by many scholars to arts-based research is most simply explained by my opening example of comparing the experience of consuming jargon-filled and inaccessible academic articles to that of experiencing artistic works. While most people know on some level that the arts can reach and move us in unique ways, there is actually science behind this. ©2014 TheHuffingtonPost.com, Inc

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 7: Vision: From Eye to Brain
Link ID: 19528 - Posted: 04.24.2014

By Daisy Yuhas Greetings from Boston, where the 21st annual meeting of the Cognitive Neuroscience Society is underway. Saturday and Sunday were packed with symposia, lectures and more than 400 posters. Here are just a few of the highlights. The bilingual brain has been a hot topic at the meeting this year, particularly as researchers grapple with the benefits and challenges of language learning. In news that will make many college language majors happy, a group of researchers led by Harriet Wood Bowden of the University of Tennessee-Knoxville has demonstrated that years of language study alter a person’s brain processing to be more like a native speaker’s brain. They found that native English-speaking students with about seven semesters of study in Spanish show very similar brain activation to native speakers when processing spoken Spanish grammar. The study used electroencephalography, or EEG, in which electrodes are placed along the scalp to pick up and measure the electrical activity of neurons in the brain below. By contrast, students who have more recently begun studying Spanish show markedly different processing of these elements of the language. The study focused on the recognition of noun-adjective agreement, particularly in gender and number. Accents, however, can remain harder to master. Columbia University researchers worked with native Spanish speakers to study the difficulties encountered in hearing and reproducing English vowel sounds that are not used in Spanish. The research focused on the distinction between the extended o sound in “dock” and the soft u sound in “duck,” which is not part of spoken Spanish. The scientists used electroencephalograms to measure the brain responses to these vowel sounds in native-English and native-Spanish speakers. © 2014 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 19467 - Posted: 04.10.2014

by Bob Holmes People instinctively organise a new language according to a logical hierarchy, not simply by learning which words go together, as computer translation programs do. The finding may add further support to the notion that humans possess a "universal grammar", or innate capacity for language. The existence of a universal grammar has been in hot dispute among linguists ever since Noam Chomsky first proposed the idea half a century ago. If the theory is correct, this innate structure should leave some trace in the way people learn languages. To test the idea, Jennifer Culbertson, a linguist at George Mason University in Fairfax, Virginia, and her colleague David Adger of Queen Mary University of London, constructed an artificial "nanolanguage". They presented English-speaking volunteers with two-word phrases, such as "shoes blue" and "shoes two", which were supposed to belong to a new language somewhat like English. They then asked the volunteers to choose whether "shoes two blue" or "shoes blue two" would be the correct three-word phrase. In making this choice, the volunteers – who hadn't been exposed to any three-word phrases – would reveal their innate bias in language-learning. Would they rely on familiarity ("two" usually precedes "blue" in English), or would they follow a semantic hierarchy and put "blue" next to "shoe" (because it modifies the noun more tightly than "two", which merely counts how many)? © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19433 - Posted: 04.01.2014

Imagine you’re calling a stranger—a possible employer, or someone you’ve admired from a distance—on the telephone for the first time. You want to make a good impression, and you’ve rehearsed your opening lines. What you probably don’t realize is that the person you’re calling is going to size you up the moment you utter “hello.” Psychologists have discovered that the simple, two-syllable sound carries enough information for listeners to draw conclusions about the speaker’s personality, such as how trustworthy he or she is. The discovery may help improve computer-generated and voice-activated technologies, experts say. “They’ve confirmed that people do make snap judgments when they hear someone’s voice,” says Drew Rendall, a psychologist at the University of Lethbridge in Canada. “And the judgments are made on very slim evidence.” Psychologists have shown that we can determine a great deal about someone’s personality by listening to them. But these researchers looked at what others hear in someone’s voice when listening to a lengthy speech, says Phil McAleer, a psychologist at the University of Glasgow in the United Kingdom and the lead author of the new study. No one had looked at how short a sentence we need to hear before making an assessment, although other studies had shown that we make quick judgments about people’s personalities from a first glance at their faces. “You can pick up clues about how dominant and trustworthy someone is within the first few minutes of meeting a stranger, based on visual cues,” McAleer says. To find out if there is similar information in a person’s voice, he and his colleagues decided to test “one of the quickest and shortest of sociable words, ‘Hello.’ ” © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 15: Emotions, Aggression, and Stress; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 11: Emotions, Aggression, and Stress; Chapter 15: Language and Our Divided Brain
Link ID: 19358 - Posted: 03.13.2014

by Aviva Rutkin "He moistened his lips uneasily." It sounds like a cheap romance novel, but this line is actually lifted from quite a different type of prose: a neuroscience study. Along with other sentences, including "Have you got enough blankets?" and "And what eyes they were", it was used to build the first map of how the brain processes the building blocks of speech – distinct units of sound known as phonemes. The map reveals that the brain devotes distinct areas to processing different types of phonemes. It might one day help efforts to read off what someone is hearing from a brain scan. "If you could see the brain of someone who is listening to speech, there is a rapid activation of different areas, each responding specifically to a particular feature the speaker is producing," says Nima Mesgarani, an electrical engineer at Columbia University in New York City. To build the map, Mesgarani's team turned to a group of volunteers who already had electrodes implanted in their brains as part of an unrelated treatment for epilepsy. The invasive electrodes sit directly on the surface of the brain, providing a unique and detailed view of neural activity. The researchers got the volunteers to listen to hundreds of snippets of speech taken from a database designed to provide an efficient way to cycle through a wide variety of phonemes, while monitoring the signals from the electrodes. As well as those already mentioned, sentences ran the gamut from "It had gone like clockwork" to "Junior, what on Earth's the matter with you?" to "Nobody likes snakes". © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 19196 - Posted: 02.01.2014

by Laura Sanders Growing up, I loved it when my parents read aloud the stories of the Berenstain Bears living in their treehouse. So while I was pregnant with my daughter, I imagined lots of cuddly quiet time with her in a comfy chair, reading about the latest adventures of Brother and Sister. Of course, reality soon let me know just how ridiculous that idea was. My newborn couldn’t see more than a foot away, cried robustly and frequently for mysterious reasons, and didn’t really understand words yet. Baby V was simply not interested in the latest dispatch from Bear Country. When I started reading child development expert Elaine Reese’s new book Tell Me a Story, I realized that I was not the only one with idyllic story time dreams. Babies and toddlers are squirmy, active people with short attention spans. “Why, then, do we cling to this soft-focus view of storytelling when we know it is unrealistic?” she writes. These days, as Baby V closes in on the 1-year mark, she has turned into a most definite book lover. But it’s not the stories that enchant her. It’s holding the book, turning its pages back to front to back again, flipping it over and generally showing it who’s in charge. Every so often I can entice Baby V to sit on my lap with a book, but we never read through a full story. Instead, we linger on the page with all the junk food that the Hungry Caterpillar chomps through, sticking our fingers in the little holes in the pages. And we make Froggy pop in and out of the bucket. And we study the little goats as they climb up and up and up on the hay bales. © Society for Science & the Public 2000 - 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19155 - Posted: 01.21.2014

by Laura Sanders Baby V sailed through her first Christmas with the heart of a little explorer. She travelled to frigid upstate New York where she mashed snow in her cold little hands, tasted her great grandma’s twice baked potato and was licked clean by at least four dogs. And she opened lots of presents. It’s totally true what people say about little kids and gifts: The wrapping paper proved to be the biggest hit. But in the Christmas aftermath, one of Baby V’s new toys really caught her attention. She cannot resist her singing, talking book. The book has only three pages, but Baby V is smitten. Any time the book pipes up, which it seems to do randomly, she snaps to attention, staring at it, grabbing it and trying to figure it out. With a cutesy high-pitched voice, the book tells Baby V to “Turn the pa-AYE-ge!” and “This is fun!” Sometimes, the book bursts into little songs, all the while maintaining the cheeriest, squeakiest, sugarplum-drenched tone, even when it’s saying something kind of sad: “Three little kittens have lost their mittens and they began to cry!” The book maker (uh, author?) clearly knows how to tap into infants’ deep love for happy, squeaky noises, as does the creator of Elmo. Scientists are also noticing this trend, and are starting to figure out exactly why these sounds are so alluring to little ones. © Society for Science & the Public 2000 - 2014.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 15: Language and Our Divided Brain; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19111 - Posted: 01.08.2014

By Janelle Weaver Children with a large vocabulary experience more success at school and in the workplace. How much parents talk to their children plays a major role, but new research shows that it is not just the quantity but also the quality of parental input that matters. Helpful gestures and meaningful glances may allow kids to grasp concepts more easily than they otherwise would. In a study published in June in the Proceedings of the National Academy of Sciences USA, Erica Cartmill of the University of Chicago and her collaborators videotaped parents in their homes as they read books and played games with their 14- or 18-month-old children. The researchers created hundreds of 40-second muted video clips of these interactions. Another set of study participants watched the videos and used clues from the scenes to guess which nouns the parents were saying at various points in the sequences. The researchers used the accuracy of these guesses to rate how well a parent used nonverbal cues, such as gesturing toward and looking at objects, to clarify a word's meaning. Cartmill and her team found that the quality of parents' nonverbal signaling predicted the size of their children's vocabulary three years later. Surprisingly, socioeconomic status did not play a role in the quality of the parents' nonverbal signaling. This result suggests that the well-known differences in children's vocabulary size across income levels are likely the result of how much parents talk to their children, which is known to differ by income, rather than how much nonverbal help they offer during those interactions. © 2013 Scientific American

Related chapters from BP7e: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 15: Language and Our Divided Brain
Link ID: 19028 - Posted: 12.12.2013

Elizabeth Pennisi [Image caption: Speak easy. The language gene FOXP2 may work through a protein partner that stimulates the formation of excitatory connections (green) in nerve cells (magenta).] Few genes have made the headlines as much as FOXP2. The first gene associated with language disorders, it was later implicated in the evolution of human speech. Girls make more of the FOXP2 protein, which may help explain their precociousness in learning to talk. Now, neuroscientists have figured out how one of its molecular partners helps Foxp2 exert its effects. The findings may eventually lead to new therapies for inherited speech disorders, says Richard Huganir, the neurobiologist at Johns Hopkins University School of Medicine in Baltimore, Maryland, who led the work. Foxp2 controls the activity of a gene called Srpx2, he notes, which helps some of the brain's nerve cells beef up their connections to other nerve cells. By establishing what SRPX2 does, researchers can look for defective copies of it in people suffering from problems talking or learning to talk. Until 2001, scientists were not sure how genes influenced language. Then Simon Fisher, a neurogeneticist now at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, and his colleagues fingered FOXP2 as the culprit in a family with several members who had trouble with pronunciation, putting words together, and understanding speech. These people cannot move their tongue and lips precisely enough to talk clearly, so even family members often can’t figure out what they are saying. It “opened a molecular window on the neural basis of speech and language,” Fisher says. © 2013 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18865 - Posted: 11.02.2013

Daniel Cossins It may not always seem like it, but humans usually take turns speaking. Research published today in Current Biology shows that marmosets, too, wait for each other to stop calling before they respond during extended vocal exchanges. The discovery could help to explain how humans came to be such polite conversationalists. Taking turns is a cornerstone of human verbal communication, and is common across all languages. But with no evidence that non-human primates 'converse' similarly, it was not clear how such behaviour evolved. The widely accepted explanation, known as the gestural hypothesis, suggests that humans might somehow have taken the neural machinery underlying cooperative manual gestures such as pointing to something to attract another person's attention to it, and applied that to vocalization. Not convinced, a team led by Daniel Takahashi, a neurobiologist at Princeton University in New Jersey, wanted to see whether another primate species is capable of cooperative calling. The researchers turned to common marmosets (Callithrix jacchus) because, like humans, they are prosocial — that is, generally friendly towards each other — and they communicate using vocalizations. The team recorded exchanges between pairs of marmosets that could hear but not see each other, and found that the monkeys never called at the same time. Instead, they always waited for roughly 5 seconds after a caller had finished before responding. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Our Divided Brain
Link ID: 18808 - Posted: 10.19.2013