Links for Keyword: Language



Links 1 - 20 of 501

Mo Costandi A team of neuroscientists in America say they have rediscovered an important neural pathway that was first described in the late nineteenth century but then mysteriously disappeared from the scientific literature until very recently. In a study published today in Proceedings of the National Academy of Sciences, they confirm that the prominent white matter tract is present in the human brain, and argue that it plays an important and unique role in the processing of visual information. The vertical occipital fasciculus (VOF) is a large flat bundle of nerve fibres that forms long-range connections between sub-regions of the visual system at the back of the brain. It was originally discovered by the German neurologist Carl Wernicke, who had by then published his classic studies of stroke patients with language deficits, and was studying neuroanatomy in Theodor Meynert’s laboratory at the University of Vienna. Wernicke saw the VOF in slices of monkey brain, and included it in his 1881 brain atlas, naming it the senkrechte occipitalbündel, or ‘vertical occipital bundle’. Meynert - himself a pioneering neuroanatomist and psychiatrist, whose other students included Sigmund Freud and Sergei Korsakov - refused to accept Wernicke’s discovery, however. He had already described the brain’s white matter tracts, and had arrived at the general principle that they are oriented horizontally, running mostly from front to back within each hemisphere. But the pathway Wernicke had described ran vertically. Another of Meynert’s students, Heinrich Obersteiner, identified the VOF in the human brain, and mentioned it in his 1888 textbook, calling it the senkrechte occipitalbündel in one illustration, and the fasciculus occipitalis perpendicularis in another. So, too, did Heinrich Sachs, a student of Wernicke’s, who labeled it the stratum profundum convexitatis in his 1892 white matter atlas. © 2014 Guardian News and Media Limited

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20333 - Posted: 11.20.2014

By David Shultz WASHINGTON, D.C.—Reciting the days of the week is a trivial task for most of us, but then, most of us don’t have cooling probes in our brains. Scientists have discovered that by applying a small electrical cooling device to the brain during surgery they could slow down and distort speech patterns in patients. When the probe was activated in some regions of the brain associated with language and talking—like the premotor cortex—the patients’ speech became garbled and distorted, the team reported here yesterday at the Society for Neuroscience’s annual meeting. As scientists moved the probe to other speech regions, such as the pars opercularis, the distortion lessened, but speech patterns slowed. “What emerged was this orderly map,” says team leader Michael Long, a neuroscientist at the New York University School of Medicine in New York City. The results suggest that one region of the brain organizes the rhythm and flow of language while another is responsible for the actual articulation of the words. The team was even able to map which word sounds were most likely to be elongated when the cooling probe was applied. “People preferentially stretched out their vowels,” Long says. “Instead of Tttuesssday, you get Tuuuesdaaay.” The technique is similar to the electrical probe stimulation that researchers have been using to identify the function of various brain regions, but the shocks often trigger epileptic seizures in sensitive patients. Long contends that the cooling probe is completely safe, and that in the future it may help neurosurgeons decide where to cut and where not to cut during surgery. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20328 - Posted: 11.20.2014

by Helen Thomson As you read this, your neurons are firing – that brain activity can now be decoded to reveal the silent words in your head. TALKING to yourself used to be a strictly private pastime. That's no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world. "If you're reading text in a newspaper or a book, you hear a voice in your own head," says Brian Pasley at the University of California, Berkeley. "We're trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak." When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words. In a previous study, Pasley and his colleagues recorded brain activity in people who already had electrodes implanted in their brain to treat epilepsy, while they listened to speech. The team found that certain neurons in the brain's temporal lobe were only active in response to certain aspects of sound, such as a specific frequency. One set of neurons might only react to sound waves that had a frequency of 1000 hertz, for example, while another set might respond only to those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode the words heard based on neural activity alone (PLoS Biology, doi.org/fzv269). © Copyright Reed Business Information Ltd.
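The passage above gives the gist of how such a decoder can work: if individual neurons respond preferentially to particular sound frequencies, a simple model fitted to those responses can be run in reverse to estimate the sound that produced them. The sketch below is only a toy illustration of that idea in Python, not Pasley's actual method; every array size, the tuning scheme and all variable names are invented for the example.

# Toy sketch (not the published model): reconstruct a stimulus "spectrogram"
# from simulated frequency-tuned neural responses with least-squares regression.
# All sizes, tuning assumptions and names below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_freq, n_neurons = 500, 16, 64          # hypothetical dimensions
spectrogram = rng.random((n_time, n_freq))       # "true" sound energy per frequency band

# Each simulated neuron is driven mostly by one frequency band, mirroring the
# finding that some neurons fire mainly for, say, 1000 Hz sounds.
tuning = 0.1 * rng.random((n_freq, n_neurons))
tuning[rng.integers(n_freq, size=n_neurons), np.arange(n_neurons)] = 1.0
responses = spectrogram @ tuning + 0.05 * rng.standard_normal((n_time, n_neurons))

# Fit a linear decoder on the first half of the data...
train = slice(0, n_time // 2)
W, *_ = np.linalg.lstsq(responses[train], spectrogram[train], rcond=None)

# ...then reconstruct the held-out half from neural activity alone.
test = slice(n_time // 2, n_time)
reconstruction = responses[test] @ W
r = np.corrcoef(reconstruction.ravel(), spectrogram[test].ravel())[0, 1]
print(f"reconstruction correlation on held-out data: {r:.2f}")

In the published work the mapping was estimated from recorded cortical activity rather than simulated responses, and the reconstructed sound representation was then compared with the speech the patients had heard.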

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20267 - Posted: 11.01.2014

By Virginia Morell Human fetuses are clever students, able to distinguish male from female voices and the voices of their mothers from those of strangers between 32 and 39 weeks after conception. Now, researchers have demonstrated that the embryos of the superb fairy-wren (Malurus cyaneus, pictured), an Australian songbird, also learn to discriminate among the calls they hear. The scientists played 1-minute recordings to 43 fairy-wren eggs collected from nests in the wild. The eggs were between days 9 and 13 of a 13- to 14-day incubation period. The sounds included white noise, a contact call of a winter wren, or a female fairy-wren’s incubation call. Those embryos that listened to the fairy-wrens’ incubation calls and the contact calls of the winter wrens lowered their heart rates, a sign that they were learning to discriminate between the calls of a different species and those of their own kind, the researchers report online today in the Proceedings of the Royal Society B. (None showed this response to the white noise.) Thus, even before hatching, these small birds’ brains are engaged in tasks requiring attention, learning, and possibly memory—the first time embryonic learning has been seen outside humans, the scientists say. The behavior is key because fairy-wren embryos must learn a password from their mothers’ incubation calls; otherwise, they’re less successful at soliciting food from their parents after hatching. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 20254 - Posted: 10.29.2014

By Virginia Morell Two years ago, scientists showed that dolphins imitate the sounds of whales. Now, it seems, whales have returned the favor. Researchers analyzed the vocal repertoires of 10 captive orcas (Orcinus orca), three of which lived with bottlenose dolphins (Tursiops truncatus) and the rest with their own kind. Of the 1551 vocalizations the latter seven orcas made, more than 95% were the typical pulsed calls of killer whales. In contrast, the three orcas that had only dolphins as pals busily whistled and emitted dolphinlike click trains and terminal buzzes, the scientists report in the October issue of The Journal of the Acoustical Society of America. The findings make orcas one of the few species of animals that, like humans, are capable of vocal learning—a talent considered a key underpinning of language. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 20173 - Posted: 10.08.2014

By Meeri Kim From ultrasonic bat chirps to eerie whale songs, the animal kingdom is a noisy place. While some sounds might have meaning — typically something like “I'm a male, aren't I great?” — no other creatures have a true language except for us. Or do they? A new study on animal calls has found that the patterns of barks, whistles, and clicks from seven different species appear to be more complex than previously thought. The researchers used mathematical tests to see how well the sequences of sounds fit models ranging in complexity. In fact, five species including the killer whale and free-tailed bat had communication behaviors that were definitively more language-like than random. The study was published online Wednesday in the Proceedings of the Royal Society B. “We're still a very, very long way from understanding this transition from animal communication to human language, and it's a huge mystery at the moment,” said study author and zoologist Arik Kershenbaum, who did the work at the National Institute for Mathematical and Biological Synthesis. “These types of mathematical analyses can give us some clues.” While the most complicated mathematical models come closer to our own speech patterns, the simple models — called Markov processes — are more random and have historically been thought to fit animal calls. “A Markov process is where you have a sequence of numbers or letters or notes, and the probability of any particular note depends only on the few notes that have come before,” said Kershenbaum. So the next note could depend on the last two or 10 notes before it, but there is a defined window of history that can be used to predict what happens next. “What makes human language special is that there's no finite limit as to what comes next,” he said.
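As a concrete illustration of the Markov property Kershenbaum describes, the short Python sketch below estimates the probability of the next call given only the last two calls; the call sequence and the window size are made up for the example and are not taken from the study.

# Minimal illustration of a fixed-order Markov model of call sequences.
# The sequence of call types and the window size are invented for the example.
from collections import Counter, defaultdict

sequence = list("ABABCABABCABBBC")   # hypothetical string of call types
order = 2                            # history window: the last two calls

counts = defaultdict(Counter)
for i in range(order, len(sequence)):
    history = tuple(sequence[i - order:i])
    counts[history][sequence[i]] += 1

# Conditional probabilities P(next call | previous `order` calls)
for history, nxt in sorted(counts.items()):
    total = sum(nxt.values())
    print(history, "->", {call: round(n / total, 2) for call, n in nxt.items()})

The point of the comparison in the study is that for such a model a fixed window of history is all that matters, whereas human language (and, by the paper's analysis, some animal call sequences) is better described by models without that fixed limit.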

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 19987 - Posted: 08.22.2014

By Jane C. Hu Last week, people around the world mourned the death of beloved actor and comedian Robin Williams. According to the Gorilla Foundation in Woodside, California, we were not the only primates mourning. A press release from the foundation announced that Koko the gorilla—the main subject of its research on ape language ability, conversant in sign language and a celebrity in her own right—“was quiet and looked very thoughtful” when she heard about Williams’ death, and later became “somber” as the news sank in. Williams, described in the press release as one of Koko’s “closest friends,” spent an afternoon with the gorilla in 2001. The foundation released a video showing the two laughing and tickling one another. At one point, Koko lifts up Williams’ shirt to touch his bare chest. In another scene, Koko steals Williams’ glasses and wears them around her trailer. These clips resonated with people. In the days after Williams’ death, the video amassed more than 3 million views. Many viewers were charmed and touched to learn that a gorilla forged a bond with a celebrity in just an afternoon and, 13 years later, not only remembered him and understood the finality of his death, but grieved. The foundation hailed the relationship as a triumph over “interspecies boundaries,” and the story was covered in outlets from BuzzFeed to the New York Post to Slate. The story is a prime example of selective interpretation, a critique that has plagued ape language research since its first experiments. Was Koko really mourning Robin Williams? How much are we projecting ourselves onto her and what are we reading into her behaviors? Animals perceive the emotions of the humans around them, and the anecdotes in the release could easily be evidence that Koko was responding to the sadness she sensed in her human caregivers. © 2014 The Slate Group LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 19986 - Posted: 08.22.2014

By Victoria Gill Science reporter, BBC News Very mobile ears help many animals direct their attention to the rustle of a possible predator. But a study in horses suggests they also pay close attention to the direction another's ears are pointing in order to work out what they are thinking. Researchers from the University of Sussex say these swivelling ears have become a useful communication tool. Their findings are published in the journal Current Biology. The research team studies animal behaviour to build up a picture of how communication and social skills evolved. "We're interested in how [they] communicate," said lead researcher Jennifer Wathan. "And being sensitive to what another individual is thinking is a fundamental skill from which other [more complex] skills develop." Ms Wathan and her colleague Prof Karen McComb set up a behavioural experiment where 72 individual horses had to use visual cues from another horse in order to choose where to feed. They led each horse to a point where it had to select one of two buckets. On a wall behind this decision-making spot was a life-sized photograph of a horse's head facing either to the left or right. In some of the trials, the horse's ears or eyes were covered. If the ears and eyes of the horse in the picture were visible, the horses being tested would choose the bucket towards which its gaze - and its ears - were directed. If the horse in the picture had either its eyes or its ears covered, the horse being tested would just choose a feed bucket at random. Like many mammals that are hunted by predators, horses can rotate their ears through almost 180 degrees - but Ms Wathan said that in our "human-centric" view of the world, we had overlooked the importance of these very mobile ears in animal communication. BBC © 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 19914 - Posted: 08.05.2014

Nishad Karim African penguins communicate feelings such as hunger, anger and loneliness through six distinctive vocal calls, according to scientists who have observed the birds' behaviour in captivity. The calls of the "jackass" penguin were identified by researchers at the University of Turin, Italy. Four are exclusive to adults and two are exclusive to juveniles and chicks. The study, led by Dr Livio Favaro, found that adult penguins produce distinctive short calls to express their isolation from groups or their mates, known as "contact" calls, or to show aggression during fights or confrontations, known as "agonistic" calls. They also observed an "ecstatic display song", sung by single birds during the mating season, and the "mutual display song", a custom duet sung by nesting partners to each other. Juveniles and chicks produce calls relating to hunger. "There are two begging calls; the first one is where chicks utter 'begging peeps', short cheeps when they want food from adults, and the second one we've called 'begging moan', which is uttered by juveniles when they're out of the nest, but still need food from adults," said Favaro. The team made simultaneous video and audio recordings of 48 captive African penguins at the Zoom Torino zoo, over 104 non-consecutive days. They then compared the audio recordings with the video footage of the birds' behaviour. Additional techniques, including visual inspection of spectrograms, produced statistical and quantifiable results. The research is published in the journal PLOS One. © 2014 Guardian News and Media Limited

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 19905 - Posted: 07.31.2014

By PAUL VITELLO The conventional wisdom among animal scientists in the 1950s was that birds were genetically programmed to sing, that monkeys made noise to vent their emotions, and that animal communication, in general, was less like human conversation than like a bodily function. Then Peter Marler, a British-born animal behaviorist, showed that certain songbirds not only learned their songs, but also learned to sing in a dialect peculiar to the region in which they were born. And that a vervet monkey made one noise to warn its troop of an approaching leopard, another to report the sighting of an eagle, and a third to alert the group to a python on the forest floor. These and other discoveries by Dr. Marler, who died July 5 in Winters, Calif., at 86, heralded a sea change in the study of animal intelligence. At a time when animal behavior was seen as a set of instinctive, almost robotic responses to environmental stimuli, he was one of the first scientists to embrace the possibility that some animals, like humans, were capable of learning and transmitting their knowledge to other members of their species. His hypothesis attracted a legion of new researchers in ethology, as animal behavior research is also known, and continues to influence thinking about cognition. Dr. Marler, who made his most enduring contributions in the field of birdsong, wrote more than a hundred papers during a long career that began at Cambridge University, where he received his Ph.D. in zoology in 1954 (the second of his two Ph.D.s), and that took him around the world conducting field research while teaching at a succession of American universities. Dr. Marler taught at the University of California, Berkeley, from 1957 to 1966; at Rockefeller University in New York from 1966 to 1989; and at the University of California, Davis, where he led animal behavior research, from 1989 to 1994. He was an emeritus professor there at his death. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 19885 - Posted: 07.28.2014

By Meeri Kim Babies start with simple vowel sounds — oohs and aahs. Mere months later, the cooing turns into babbling — “bababa” — showing off a newfound grasp of consonants. A new study has found that a key part of the brain involved in forming speech is firing away in babies as they listen to voices around them. This may represent a sort of mental rehearsal leading up to the true milestone that occurs after only a year of life: baby’s first words. Any parent knows how fast babies learn how to comprehend and use language. The skill develops so rapidly and seemingly without much effort, but how do they do it? Researchers at the University of Washington are a step closer to unraveling the mystery of how babies learn how to speak. They had a group of 7- and 11-month-old infants listen to a series of syllables while sitting in a brain scanner (a magnetoencephalography machine, a noninvasive way of measuring brain activity). Not only did the auditory areas of their brains light up as expected, but so did a region crucial to forming higher-level speech, called Broca’s area. These findings may suggest that even before babies utter their first words, they may be mentally exercising the pivotal parts of their brains in preparation. Study author and neuroscientist Patricia Kuhl says that her results reinforce the belief that talking and reading to babies from birth are beneficial for their language development, along with exaggerated speech and mouth movements (“Hiii cuuutie! How are youuuuu?”). © 1996-2014 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19858 - Posted: 07.21.2014

By Helen Briggs Health editor, BBC News website The same genes drive maths and reading ability, research suggests. Around half of the genes that influence a child's aptitude for reading also play a role in how easily they learn maths, say scientists. The study of 12-year-old British twins from 3,000 families, reported in Nature Communications, adds to the debate about the role of genes in education. An education expert said the work had little relevance for public policy as specific genes had not been identified. Past research suggests both nature and nurture have a similar impact on how children perform in exams. One study found genes explained almost 60% of the variation in GCSE exam results. However, little is known about which genes are involved and how they interact. The new research suggests a substantial overlap between the genetic variations that influence mathematics and reading, say scientists from UCL, the University of Oxford and King's College London. But non-genetic factors - such as parents, schools and teachers - are also important, said Prof Robert Plomin of King's College London, who worked on the study. "The study does not point to specific genes linked to literacy or numeracy, but rather suggests that genetic influence on complex traits, like learning abilities, and common disorders, like learning disabilities, is caused by many genes of very small effect size," he said. BBC © 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19808 - Posted: 07.09.2014

Learning a second language benefits the brain in ways that can pay off later in life, suggests a growing field of research that specializes in the relationship between bilingualism and cognition. In one large Scottish study, researchers drew on archival data from 835 native speakers of English who were born in Edinburgh in 1936. The participants had been given an intelligence test at age 11 as part of standard British educational policy and many were retested in their early 70s. Those who spoke two or more languages had significantly better cognitive abilities on certain tasks compared with what would be expected from their IQ test scores at age 11, Dr. Thomas Bak of the Centre for Cognitive Aging and Cognitive Epidemiology at the University of Edinburgh reported in the journal Annals of Neurology. "Our results suggest a protective effect of bilingualism against age-related cognitive decline," independently of IQ, Bak and his co-authors concluded. It was a watershed study in 1962 by Elizabeth Peal and Wallace Lambert at McGill University in Montreal that turned conventional thinking on bilingualism on its head and set the rationale for French immersion in Canada. Psychologists at York University in Toronto have also been studying the effect of bilingualism on the brain across the lifespan, including dementia. They’ve learned how people who speak a second language outperform those with just one on tasks that tap executive function such as attention, selection and inhibition. Those are the high-level cognitive processes we use to multitask as we drive on the highway and juggle remembering the exit and monitoring our speed without getting distracted by billboards. © CBC 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19781 - Posted: 07.02.2014

Carl Zimmer A novelist scrawling away in a notebook in seclusion may not seem to have much in common with an NBA player doing a reverse layup on a basketball court before a screaming crowd. But if you could peer inside their heads, you might see some striking similarities in how their brains were churning. That’s one of the implications of new research on the neuroscience of creative writing. For the first time, neuroscientists have used fMRI scanners to track the brain activity of both experienced and novice writers as they sat down — or, in this case, lay down — to turn out a piece of fiction. The researchers, led by Martin Lotze of the University of Greifswald in Germany, observed a broad network of regions in the brain working together as people produced their stories. But there were notable differences between the two groups of subjects. The inner workings of the professionally trained writers in the bunch, the scientists argue, showed some similarities to people who are skilled at other complex actions, like music or sports. The research is drawing strong reactions. Some experts praise it as an important advance in understanding writing and creativity, while others criticize the research as too crude to reveal anything meaningful about the mysteries of literature or inspiration. Dr. Lotze has long been intrigued by artistic expression. In previous studies, he has observed the brains of piano players and opera singers, using fMRI scanners to pinpoint regions that become unusually active in the brain. Needless to say, that can be challenging when a subject is singing an aria. Scanners are a lot like 19th-century cameras: They can take very sharp pictures, if their subject remains still. To get accurate data, Dr. Lotze has developed software that can take into account fluctuations caused by breathing or head movements. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 19756 - Posted: 06.21.2014

By Indre Viskontas He might be fictional. But the gigantic Hodor, a character in the blockbuster Game of Thrones series, nonetheless sheds light on something very much in the realm of fact: how our ability to speak emerges from a complex ball of neurons, and how certain brain-damaged patients can lose very specific aspects of that ability. According to George R.R. Martin, who wrote the epic books that inspired the HBO show, the 7-foot-tall Hodor could only say one word—"Hodor"—and everyone therefore tended to assume that was his name. Here's one passage about Hodor from the first novel in Martin's series: Theon Greyjoy had once commented that Hodor did not know much, but no one could doubt that he knew his name. Old Nan had cackled like a hen when Bran told her that, and confessed that Hodor's real name was Walder. No one knew where "Hodor" had come from, she said, but when he started saying it, they started calling him by it. It was the only word he had. Yet it's clear that Hodor can understand much more than he can say; he's able to follow instructions, anticipate who needs help, and behave in socially appropriate ways (mostly). Moreover, he says this one word in many different ways, implying very different meanings. So what might be going on in Hodor's brain? Hodor's combination of impoverished speech production with relatively normal comprehension is a classic, albeit particularly severe, presentation of expressive aphasia, a neurological condition usually caused by a localized stroke in the front of the brain, on the left side. Some patients, however, have damage to that part of the brain from other causes, such as a tumor, or a blow to the head. ©2014 Mother Jones

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 19753 - Posted: 06.21.2014

by Bethany Brookshire Human vocal cords can produce an astonishing array of sounds: shrill and fearful, low and sultry, light and breathy, loud and firm. The slabs of muscle in our throat make the commanding sound of a powerful bass and a baby’s delightful, gurgling laugh. There are voices that must be taken seriously, voices that play and voices that seduce. And then there’s vocal fry. Bringing to mind celebrity voices like Kim Kardashian or Zooey Deschanel, vocal fry is a result of pushing the end of words and sentences into the lowest vocal register. When forcing the voice low, the vocal folds in the throat vibrate irregularly, allowing air to slip through. The result is a low, sizzling rattle underneath the tone. Recent studies have documented the growing popularity of vocal fry among young women in the United States. But popular sizzle in women’s speech might be frying their job prospects, a new study reports. The findings suggest that people with this vocal affectation might want to hold the fry on the job market — and that people on the hiring side of the table might want to examine their biases. Vocal fry has been recognized since the 1970s, but now it’s thought of as a fad. Study coauthor Casey Klofstad, a political scientist at the University of Miami in Coral Gables, Fla., says that the media attention surrounding vocal fry generated a lot of speculation. “Is it a good thing? Is it bad? It gave us a clear question we could test,” he says. Specifically, they wanted to study whether vocal fry had positive or negative effects on how people who used the technique were perceived. Led by Rindy Anderson from Duke University, the researchers recorded seven young men and seven young women speaking the phrase “Thank you for considering me for this opportunity.” Each person spoke the phrase twice, once with vocal fry and once without. Then the authors played the recordings to 800 participants ages 18 to 65, asking them to make judgments about the candidates based on voice alone. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 19718 - Posted: 06.10.2014

Learning a second language can have a positive effect on the brain, even if it is taken up in adulthood, a University of Edinburgh study suggests. Researchers found that reading, verbal fluency and intelligence were improved in a study of 262 people first tested at age 11 and then again in their seventies. A previous study suggested that being bilingual could delay the onset of dementia by several years. The study is published in Annals of Neurology. The big question in this study was whether learning a new language improved cognitive functions or whether individuals with better cognitive abilities were more likely to become bilingual. Dr Thomas Bak, from the Centre for Cognitive Ageing and Cognitive Epidemiology at the University of Edinburgh, said he believed he had found the answer. Using data from intelligence tests on 262 Edinburgh-born individuals at the age of 11, the study looked at how their cognitive abilities had changed when they were tested again in their seventies. The research was conducted between 2008 and 2010. All participants said they were able to communicate in at least one language other than English. Of that group, 195 learned the second language before the age of 18, and 65 learned it after that time. The findings indicate that those who spoke two or more languages had significantly better cognitive abilities compared to what would have been expected from their baseline test. The strongest effects were seen in general intelligence and reading. The effects were present in those who learned their second language early, as well as later in life. BBC © 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19679 - Posted: 06.02.2014

Brian Owens If you think you know what you just said, think again. People can be tricked into believing they have just said something they did not, researchers report this week. The dominant model of how speech works is that it is planned in advance — speakers begin with a conscious idea of exactly what they are going to say. But some researchers think that speech is not entirely planned, and that people know what they are saying in part through hearing themselves speak. So cognitive scientist Andreas Lind and his colleagues at Lund University in Sweden wanted to see what would happen if someone said one word, but heard themselves saying another. “If we use auditory feedback to compare what we say with a well-specified intention, then any mismatch should be quickly detected,” he says. “But if the feedback is instead a powerful factor in a dynamic, interpretative process, then the manipulation could go undetected.” In Lind’s experiment, participants took a Stroop test — in which a person is shown, for example, the word ‘red’ printed in blue and is asked to name the colour of the type (in this case, blue). During the test, participants heard their responses through headphones. The responses were recorded so that Lind could occasionally play back the wrong word, giving participants auditory feedback of their own voice saying something different from what they had just said. Lind chose the words ‘grey’ and ‘green’ (grå and grön in Swedish) to switch, as they sound similar but have different meanings. © 2014 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 14: Attention and Consciousness
Link ID: 19562 - Posted: 05.03.2014

Does reading faster mean reading better? That’s what speed-reading apps claim, promising to boost not just the number of words you read per minute, but also how well you understand a text. There’s just one problem: The same thing that speeds up reading actually gets in the way of comprehension, according to a new study. When you read at your natural pace, your eyes move back and forth across a sentence, rather than plowing straight through to the end. Apps like Spritz or the aptly named Speed Read are built around the idea that these eye movements, called saccades, are a redundant waste of time. It’s more efficient, their designers claim, to present words one at a time in a fixed spot on a screen, discouraging saccades and helping you get through a text more quickly. This method, called rapid serial visual presentation (RSVP), has been controversial since the 1980s, when tests showed it impaired comprehension, though researchers weren’t quite sure why. With a new crop of speed-reading products on the market, psychologists decided to dig a bit more and uncovered a simple explanation for RSVP’s flaw: Every so often, we need to scan backward and reread for a better grasp of the material. Researchers demonstrated that need by presenting 40 college students with ambiguous, unpunctuated sentences ("While the man drank the water that was clear and cold overflowed from the toilet”) while following their subjects’ gaze with an eye-tracking camera. Half the time, the team crossed out words participants had already read, preventing them from rereading (“xxxxx xxx xxx drank the water …”). Following up with basic yes-no questions about each sentence’s content, they found that comprehension dropped by about 25% in trials that blocked rereading versus those that didn’t, the researchers report online this month in Psychological Science. Crucially, the drop was about the same when subjects could, but simply hadn’t, reread parts of a sentence. Nor did the results differ much when using ambiguous sentences or their less confusing counterparts (“While the man slept the water …”). Turns out rereading isn’t a waste of time—it’s essential for understanding. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 15: Language and Our Divided Brain
Link ID: 19534 - Posted: 04.26.2014

I am a sociologist by training. I come from the academic world, reading scholarly articles on topics of social import, but they're almost always boring, dry and quickly forgotten. Yet I can't count how many times I've gone to a movie or a theater production, or read a novel, and been jarred into seeing something differently, learned something new, felt deep emotions and retained the insights gained. I know from both my research and casual conversations with people in daily life that my experiences are echoed by many. The arts can tap into issues that are otherwise out of reach and engage people in meaningful ways. This realization brought me to arts-based research (ABR). Arts-based research is an emergent paradigm whereby researchers across the disciplines adapt the tenets of the creative arts in their social research projects. Arts-based research, a term coined by Eliot Eisner at Stanford University in the early 90s, is based on the assumption that art can teach us in ways that other forms cannot. Scholars can take interview or survey research, for instance, and represent it through art. I've written two novels based on sociological interview research. Sometimes researchers use the arts during data collection, involving research participants in the art-making process, such as drawing their response to a prompt rather than speaking. The turn by many scholars to arts-based research is most simply explained by my opening example of comparing the experience of consuming jargon-filled and inaccessible academic articles to that of experiencing artistic works. While most people know on some level that the arts can reach and move us in unique ways, there is actually science behind this. ©2014 TheHuffingtonPost.com, Inc

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 7: Vision: From Eye to Brain
Link ID: 19528 - Posted: 04.24.2014