Links for Keyword: Language

Links 1 - 20 of 482

I am a sociologist by training. I come from the academic world, reading scholarly articles on topics of social import, but they're almost always boring, dry and quickly forgotten. Yet I can't count how many times I've gone to a movie or a theater production, or read a novel, and been jarred into seeing something differently, learned something new, felt deep emotions and retained the insights gained. I know from both my research and casual conversations with people in daily life that my experiences are echoed by many. The arts can tap into issues that are otherwise out of reach and reach people in meaningful ways. This realization brought me to arts-based research (ABR). Arts-based research is an emergent paradigm whereby researchers across the disciplines adapt the tenets of the creative arts in their social research projects. Arts-based research, a term first coined by Eliot Eisner at Stanford University in the early 90s, is based on the assumption that art can teach us in ways that other forms cannot. Scholars can take interview or survey research, for instance, and represent it through art. I've written two novels based on sociological interview research. Sometimes researchers use the arts during data collection, involving research participants in the art-making process, such as drawing their response to a prompt rather than speaking. The turn by many scholars to arts-based research is most simply explained by my opening example of comparing the experience of consuming jargon-filled and inaccessible academic articles to that of experiencing artistic works. While most people know on some level that the arts can reach and move us in unique ways, there is actually science behind this. ©2014 TheHuffingtonPost.com, Inc

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 7: Vision: From Eye to Brain
Link ID: 19528 - Posted: 04.24.2014

By Daisy Yuhas Greetings from Boston, where the 21st annual meeting of the Cognitive Neuroscience Society is underway. Saturday and Sunday were packed with symposia, lectures and more than 400 posters. Here are just a few of the highlights. The bilingual brain has been a hot topic at the meeting this year, particularly as researchers grapple with the benefits and challenges of language learning. In news that will make many college language majors happy, a group of researchers led by Harriet Wood Bowden of the University of Tennessee-Knoxville has demonstrated that years of language study alter a person’s brain processing to be more like a native speaker’s brain. They found that native English-speaking students with about seven semesters of study in Spanish show very similar brain activation to native speakers when processing spoken Spanish grammar. The study used electroencephalography, or EEG, in which electrodes are placed along the scalp to pick up and measure the electrical activity of neurons in the brain below. By contrast, students who have more recently begun studying Spanish show markedly different processing of these elements of the language. The study focused on the recognition of noun-adjective agreement, particularly in gender and number. Accents, however, can remain harder to master. Columbia University researchers worked with native Spanish speakers to study the difficulties encountered in hearing and reproducing English vowel sounds that are not used in Spanish. The research focused on the distinction between the extended o sound in “dock” and the soft u sound in “duck,” which is not part of spoken Spanish. The scientists used electroencephalograms to measure the brain responses to these vowel sounds in native English and native Spanish speakers. © 2014 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 19467 - Posted: 04.10.2014

by Bob Holmes People instinctively organise a new language according to a logical hierarchy, not simply by learning which words go together, as computer translation programs do. The finding may add further support to the notion that humans possess a "universal grammar", or innate capacity for language. The existence of a universal grammar has been in hot dispute among linguists ever since Noam Chomsky first proposed the idea half a century ago. If the theory is correct, this innate structure should leave some trace in the way people learn languages. To test the idea, Jennifer Culbertson, a linguist at George Mason University in Fairfax, Virginia, and her colleague David Adger of Queen Mary University of London, constructed an artificial "nanolanguage". They presented English-speaking volunteers with two-word phrases, such as "shoes blue" and "shoes two", which were supposed to belong to a new language somewhat like English. They then asked the volunteers to choose whether "shoes two blue" or "shoes blue two" would be the correct three-word phrase. In making this choice, the volunteers – who hadn't been exposed to any three-word phrases – would reveal their innate bias in language-learning. Would they rely on familiarity ("two" usually precedes "blue" in English), or would they follow a semantic hierarchy and put "blue" next to "shoe" (because it modifies the noun more tightly than "two", which merely counts how many)? © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19433 - Posted: 04.01.2014

Imagine you’re calling a stranger—a possible employer, or someone you’ve admired from a distance—on the telephone for the first time. You want to make a good impression, and you’ve rehearsed your opening lines. What you probably don’t realize is that the person you’re calling is going to size you up the moment you utter “hello.” Psychologists have discovered that the simple, two-syllable sound carries enough information for listeners to draw conclusions about the speaker’s personality, such as how trustworthy he or she is. The discovery may help improve computer-generated and voice-activated technologies, experts say. “They’ve confirmed that people do make snap judgments when they hear someone’s voice,” says Drew Rendall, a psychologist at the University of Lethbridge in Canada. “And the judgments are made on very slim evidence.” Psychologists have shown that we can determine a great deal about someone’s personality by listening to them. But these researchers looked at what others hear in someone’s voice when listening to a lengthy speech, says Phil McAleer, a psychologist at the University of Glasgow in the United Kingdom and the lead author of the new study. No one had looked at how short a sentence we need to hear before making an assessment, although other studies had shown that we make quick judgments about people’s personalities from a first glance at their faces. “You can pick up clues about how dominant and trustworthy someone is within the first few minutes of meeting a stranger, based on visual cues,” McAleer says. To find out if there is similar information in a person’s voice, he and his colleagues decided to test “one of the quickest and shortest of sociable words, ‘Hello.’ ” © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 15: Emotions, Aggression, and Stress; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 11: Emotions, Aggression, and Stress; Chapter 15: Language and Our Divided Brain
Link ID: 19358 - Posted: 03.13.2014

by Aviva Rutkin "He moistened his lips uneasily." It sounds like a cheap romance novel, but this line is actually lifted from quite a different type of prose: a neuroscience study. Along with other sentences, including "Have you got enough blankets?" and "And what eyes they were", it was used to build the first map of how the brain processes the building blocks of speech – distinct units of sound known as phonemes. The map reveals that the brain devotes distinct areas to processing different types of phonemes. It might one day help efforts to read off what someone is hearing from a brain scan. "If you could see the brain of someone who is listening to speech, there is a rapid activation of different areas, each responding specifically to a particular feature the speaker is producing," says Nima Mesgarani, an electrical engineer at Columbia University in New York City. Snakes on a brain To build the map, Mesgarani's team turned to a group of volunteers who already had electrodes implanted in their brains as part of an unrelated treatment for epilepsy. The invasive electrodes sit directly on the surface of the brain, providing a unique and detailed view of neural activity. The researchers got the volunteers to listen to hundreds of snippets of speech taken from a database designed to provide an efficient way to cycle through a wide variety of phonemes, while monitoring the signals from the electrodes. As well as those already mentioned, sentences ran the gamut from "It had gone like clockwork" to "Junior, what on Earth's the matter with you?" to "Nobody likes snakes". © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 19196 - Posted: 02.01.2014

by Laura Sanders Growing up, I loved it when my parents read aloud the stories of the Berenstain Bears living in their treehouse. So while I was pregnant with my daughter, I imagined lots of cuddly quiet time with her in a comfy chair, reading about the latest adventures of Brother and Sister. Of course, reality soon let me know just how ridiculous that idea was. My newborn couldn’t see more than a foot away, cried robustly and frequently for mysterious reasons, and didn’t really understand words yet. Baby V was simply not interested in the latest dispatch from Bear County. When I started reading child development expert Elaine Reese’s new book Tell Me a Story, I realized that I was not the only one with idyllic story time dreams. Babies and toddlers are squirmy, active people with short attention spans. “Why, then, do we cling to this soft-focus view of storytelling when we know it is unrealistic?” she writes. These days, as Baby V closes in on the 1-year mark, she has turned into a most definite book lover. But it’s not the stories that enchant her. It’s holding the book, turning its pages back to front to back again, flipping it over and generally showing it who’s in charge. Every so often I can entice Baby V to sit on my lap with a book, but we never read through a full story. Instead, we linger on the page with all the junk food that the Hungry Caterpillar chomps through, sticking our fingers in the little holes in the pages. And we make Froggy pop in and out of the bucket. And we study the little goats as they climb up and up and up on the hay bales. © Society for Science & the Public 2000 - 2014

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19155 - Posted: 01.21.2014

by Laura Sanders Baby V sailed through her first Christmas with the heart of a little explorer. She travelled to frigid upstate New York where she mashed snow in her cold little hands, tasted her great grandma’s twice baked potato and was licked clean by at least four dogs. And she opened lots of presents. It’s totally true what people say about little kids and gifts: The wrapping paper proved to be the biggest hit. But in the Christmas aftermath, one of Baby V’s new toys really caught her attention. She cannot resist her singing, talking book. The book has only three pages, but Baby V is smitten. Any time the book pipes up, which it seems to do randomly, she snaps to attention, staring at it, grabbing it and trying to figure it out. With a cutesy high-pitched voice, the book tells Baby V to “Turn the pa-AYE-ge!” and “This is fun!” Sometimes, the book bursts into little songs, all the while maintaining the cheeriest, squeakiest, sugarplum-drenched tone, even when it’s saying something kind of sad: “Three little kittens have lost their mittens and they began to cry!” The book maker (uh, author?) clearly knows how to tap into infants’ deep love for happy, squeaky noises, as does the creator of Elmo. Scientists are also noticing this trend, and are starting to figure out exactly why these sounds are so alluring to little ones. © Society for Science & the Public 2000 - 2014.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19111 - Posted: 01.08.2014

By Janelle Weaver Children with a large vocabulary experience more success at school and in the workplace. How much parents talk to their children plays a major role, but new research shows that it is not just the quantity but also the quality of parental input that matters. Helpful gestures and meaningful glances may allow kids to grasp concepts more easily than they otherwise would. In a study published in June in the Proceedings of the National Academy of Sciences USA, Erica Cartmill of the University of Chicago and her collaborators videotaped parents in their homes as they read books and played games with their 14- or 18-month-old children. The researchers created hundreds of 40-second muted video clips of these interactions. Another set of study participants watched the videos and used clues from the scenes to guess which nouns the parents were saying at various points in the sequences. The researchers used the accuracy of these guesses to rate how well a parent used nonverbal cues, such as gesturing toward and looking at objects, to clarify a word's meaning. Cartmill and her team found that the quality of parents' nonverbal signaling predicted the size of their children's vocabulary three years later. Surprisingly, socioeconomic status did not play a role in the quality of the parents' nonverbal signaling. This result suggests that the well-known differences in children's vocabulary size across income levels are likely the result of how much parents talk to their children, which is known to differ by income, rather than how much nonverbal help they offer during those interactions. © 2013 Scientific American

Related chapters from BP7e: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 15: Language and Our Divided Brain
Link ID: 19028 - Posted: 12.12.2013

Elizabeth Pennisi Speak easy. The language gene FOXP2 may work through a protein partner that stimulates the formation of excitatory connections (green) in nerve cells (magenta). Few genes have made the headlines as much as FOXP2. The first gene associated with language disorders, it was later implicated in the evolution of human speech. Girls make more of the FOXP2 protein, which may help explain their precociousness in learning to talk. Now, neuroscientists have figured out how one of its molecular partners helps Foxp2 exert its effects. The findings may eventually lead to new therapies for inherited speech disorders, says Richard Huganir, the neurobiologist at Johns Hopkins University School of Medicine in Baltimore, Maryland, who led the work. Foxp2 controls the activity of a gene called Srpx2, he notes, which helps some of the brain's nerve cells beef up their connections to other nerve cells. By establishing what SRPX2 does, researchers can look for defective copies of it in people suffering from problems talking or learning to talk. Until 2001, scientists were not sure how genes influenced language. Then Simon Fisher, a neurogeneticist now at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, and his colleagues fingered FOXP2 as the culprit in a family with several members who had trouble with pronunciation, putting words together, and understanding speech. These people cannot move their tongue and lips precisely enough to talk clearly, so even family members often can’t figure out what they are saying. It “opened a molecular window on the neural basis of speech and language,” Fisher says. © 2013 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18865 - Posted: 11.02.2013

Daniel Cossins It may not always seem like it, but humans usually take turns speaking. Research published today in Current Biology1 shows that marmosets, too, wait for each other to stop calling before they respond during extended vocal exchanges. The discovery could help to explain how humans came to be such polite conversationalists. Taking turns is a cornerstone of human verbal communication, and is common across all languages. But with no evidence that non-human primates 'converse' similarly, it was not clear how such behaviour evolved. The widely accepted explanation, known as the gestural hypothesis, suggests that humans might somehow have taken the neural machinery underlying cooperative manual gestures such as pointing to something to attract another person's attention to it, and applied that to vocalization. Not convinced, a team led by Daniel Takahashi, a neurobiologist at Princeton University in New Jersey, wanted to see whether another primate species is capable of cooperative calling. The researchers turned to common marmosets (Callithrix jacchus) because, like humans, they are prosocial — that is, generally friendly towards each other — and they communicate using vocalizations. After you The team recorded exchanges between pairs of marmosets that could hear but not see each other, and found that the monkeys never called at the same time. Instead, they always waited for roughly 5 seconds after a caller had finished before responding. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18808 - Posted: 10.19.2013

by Bruce Bower Babies may start to learn their mother tongues even before seeing their mothers’ faces. Newborns react differently to native and foreign vowel sounds, suggesting that language learning begins in the womb, researchers say. Infants tested seven to 75 hours after birth treated spoken variants of a vowel sound in their home language as similar, evidence that newborns regard these sounds as members of a common category, say psychologist Christine Moon of Pacific Lutheran University in Tacoma, Wash., and her colleagues. Newborns deemed different versions of a foreign vowel sound to be dissimilar and unfamiliar, the scientists report in an upcoming Acta Paediatrica. “It seems that there is some prenatal learning of speech sounds, but we do not yet know how much,” Moon says. Fetuses can hear outside sounds by about 10 weeks before birth. Until now, evidence suggested that prenatal learning was restricted to the melody, rhythm and loudness of voices (SN: 12/5/09, p. 14). Earlier investigations established that 6-month-olds group native but not foreign vowel sounds into categories. Moon and colleagues propose that, in the last couple months of gestation, babies monitor at least some vowels — the loudest and most expressive speech sounds — uttered by their mothers. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18769 - Posted: 10.09.2013

By Helen Briggs BBC News The brain has a critical window for language development between the ages of two and four, brain scans suggest. Environmental influences have their biggest impact before the age of four, as the brain's wiring develops to process new words, say UK and US scientists. The research in The Journal of Neuroscience suggests disorders causing language delay should be tackled early. It also explains why young children are good at learning two languages. The scientists, based at King's College London, and Brown University, Rhode Island, studied 108 children with normal brain development between the ages of one and six. They used brain scans to look at myelin - the insulation that develops from birth within the circuitry of the brain. To their surprise, they found the distribution of myelin is fixed from the age of four, suggesting the brain is most plastic in very early life. Any environmental influences on brain development will be strongest in infanthood, they predict. This explains why immersing children in a bilingual environment before the age of four gives them the best chance of becoming fluent in both languages, the research suggests. BBC © 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18768 - Posted: 10.09.2013

By Melissa Hogenboom Science reporter, BBC News Moving in time to a steady beat is closely linked to better language skills, a study suggests. People who performed better on rhythmic tests also showed enhanced neural responses to speech sounds. The researchers suggest that practising music could improve other skills, particularly speech. In the Journal of Neuroscience, the authors argue that rhythm is an integral part of language. "We know that moving to a steady beat is a fundamental skill not only for music performance but one that has been linked to language skills," said Nina Kraus, of the Auditory Neuroscience Laboratory at Northwestern University in Illinois. More than 100 teenagers were asked to tap their fingers along to a beat. Their accuracy was measured by how closely their responses matched the timing of a metronome. Next, in order to understand the biological basis of rhythmic ability, the team also measured the brainwaves of their participants with electrodes, a technique called electroencephalography. This was to observe the electrical activity in the brain in response to sound. Using this biological approach, the researchers found that those who had better musical training also had enhanced neural responses to speech sounds. In poorer readers this response was diminished. BBC © 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 5: The Sensorimotor System
Link ID: 18665 - Posted: 09.18.2013

By Joshua K. Hartshorne There are two striking features of language that any scientific theory of this quintessentially human behavior must account for. The first is that we do not all speak the same language. This would be a shocking observation were it not so commonplace. Communication systems of other animals tend to be universal, with any animal of the species able to communicate with any other. Likewise, many other fundamental human attributes show much less variation. Barring genetic or environmental mishap, we all have two eyes, one mouth, and four limbs. Around the world, we cry when we are sad, smile when we are happy, and laugh when something is funny, but the languages we use to describe this are different. The second striking feature of language is that when you consider the space of possible languages, most languages are clustered in a few tiny bands. That is, most languages are much, much more similar to one another than random variation would have predicted. Starting with pioneering work by Joseph Greenberg, scholars have cataloged over two thousand linguistic universals (facts true of all languages) and biases (facts true of most languages). For instance, in languages with fixed word order, the subject almost always comes before the object. If the verb describes a caused event, the entity that caused the event is the subject ("John broke the vase"), not the object (for example, "The vase broke John" meaning "John broke the vase"). In languages like English where the verb agrees with its subject or object, it typically agrees with the subject (compare "the child eats the carrots" with "the children eat the carrots") and not with its object (this would look like "the child eats the carrot" vs. "the child eat the carrots"), though in some languages, like Hungarian, the ending of the verb changes to match both the subject and object. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18664 - Posted: 09.18.2013

By Jason G. Goldman One of the key differences between humans and non-human animals, it is thought, is the ability to flexibly communicate our thoughts to others. The consensus has long been that animal communication, such as the food call of a chimpanzee or the alarm call of a lemur, is the result of an automatic reflex guided primarily by the inner physiological state of the animal. Chimpanzees, for example, can’t “lie” by producing a food call when there’s no food around and, it is thought, they can’t not emit a food call in an effort to hoard it all for themselves. By contrast, human communication via language is far more flexible and intentional. But recent research from across the animal kingdom has cast some doubt on the idea that animal communication always operates below the level of conscious control. Male chickens, for example, call more when females are around, and male Thomas langurs (a monkey native to Indonesia) continue shrieking their alarm calls until all females in their group have responded. Similarly, vervet monkeys are more likely to sound their alarm calls when there are other vervet monkeys around, and less likely when they’re alone. The same goes for meerkats. And possibly chimps, as well. Still, these sorts of “audience effects” can be explained by lower-level physiological factors. In yellow-bellied marmots, small ground squirrels native to the western US and southwestern Canada, the production of an alarm call correlates with glucocorticoid production, a physiological measure of stress. And when researchers experimentally altered the synthesis of glucocorticoids in rhesus macaques, they found a change in the probability of alarm call production. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18605 - Posted: 09.04.2013

by Jacob Aron DOES your brain work like a dictionary? A mathematical analysis of the connections between definitions of English words has uncovered hidden structures that may resemble the way words and their meanings are represented in our heads. "We want to know how the mental lexicon is represented in the brain," says Stevan Harnad of the University of Quebec in Montreal, Canada. As every word in a dictionary is defined in terms of others, the knowledge needed to understand the entire lexicon is there, as long as you first know the meanings of an initial set of starter, or "grounding", words. Harnad's team reasoned that finding this minimal set of words and pinning down its structure might shed light on how human brains put language together. The team converted each of four different English dictionaries into a mathematical structure of linked nodes known as a graph. Each node in this graph represents a word, which is linked to the other words used to define it – so "banana" might be connected to "long", "bendy", "yellow" and "fruit". These words then link to others that define them. This enabled the team to remove all the words that don't define any others, leaving what they call a kernel. The kernel formed roughly 10 per cent of the full dictionary – though the exact percentages depended on the particular dictionary. In other words, 90 per cent of the dictionary can be defined using just the other 10 per cent. © Copyright Reed Business Information Ltd.
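
The pruning procedure described above is simple enough to sketch. What follows is a toy illustration in Python, not the researchers' own code: each dictionary entry is linked to the set of words used in its definition, and entries that never appear in any other entry's definition are removed over and over until nothing more can be pruned; whatever survives is the kernel. The miniature dictionary is invented purely for the example.

    # Toy sketch of the dictionary-as-graph idea (not the study's actual code).
    # Each entry maps to the set of words used in its definition; the "kernel"
    # is what remains after repeatedly pruning entries that define nothing else.

    def kernel(definitions):
        """Iteratively drop entries that no remaining entry uses in a definition."""
        words = dict(definitions)
        while True:
            used = {w for defn in words.values() for w in defn}
            kept = {w: d for w, d in words.items() if w in used}
            if len(kept) == len(words):   # fixed point: nothing left to prune
                return kept
            words = kept

    # Invented miniature dictionary for illustration only.
    toy_dictionary = {
        "banana": {"long", "yellow", "fruit"},
        "fruit":  {"part", "plant"},
        "yellow": {"colour"},
        "colour": {"property", "light"},
        "long":   {"great", "length"},
        "plant":  {"living", "thing"},
        "thing":  {"object"},
        "object": {"thing"},   # circular definitions survive the pruning
    }

    print(sorted(kernel(toy_dictionary)))   # -> ['object', 'thing']

Run on a full dictionary rather than this toy, the same kind of pruning leaves the roughly 10 per cent kernel described above.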

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18587 - Posted: 08.31.2013

Beth Skwarecki Be careful what you say around a pregnant woman. As a fetus grows inside a mother's belly, it can hear sounds from the outside world—and can understand them well enough to retain memories of them after birth, according to new research. It may seem implausible that fetuses can listen to speech within the womb, but the sound-processing parts of their brain become active in the last trimester of pregnancy, and sound carries fairly well through the mother's abdomen. "If you put your hand over your mouth and speak, that's very similar to the situation the fetus is in," says cognitive neuroscientist Eino Partanen of the University of Helsinki. "You can hear the rhythm of speech, rhythm of music, and so on." A 1988 study suggested that newborns recognize the theme song from their mother's favorite soap opera. More recent studies have expanded on the idea of fetal learning, indicating that newborns have already familiarized themselves with the sounds of their parents' native language; one showed that American newborns seem to perceive Swedish vowel sounds as unfamiliar, sucking on a high-tech pacifier to hear more of the new sounds. Swedish infants showed the same response to English vowels. But those studies were based on babies' behaviors, which can be tricky to test. Partanen and his team decided instead to outfit babies with EEG sensors to look for neural traces of memories from the womb. "Once we learn a sound, if it's repeated to us often enough, we form a memory of it, which is activated when we hear the sound again," he explains. This memory speeds up recognition of sounds in the learner's native language and can be detected as a pattern of brain waves, even in a sleeping baby. © 2012 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18570 - Posted: 08.27.2013

By Glen Tellis, Rickson C. Mesquita, and Arjun G. Yodh Terrence Murgallis, a 20-year-old undergraduate student in the Department of Speech-Language Pathology at Misericordia University, has stuttered all his life and approached us recently about conducting brain research on stuttering. His timing was perfect because our research group, in collaboration with a team led by Dr. Arjun Yodh in the Department of Physics and Astronomy at the University of Pennsylvania, had recently deployed two novel optical methods to compare blood flow and hemoglobin concentration differences in the brains of those who stutter with those who are fluent. These noninvasive methods employ diffusing near-infrared light and have been dubbed near-infrared spectroscopy (NIRS) for concentration dynamics, and diffuse correlation spectroscopy (DCS) for flow dynamics. The near-infrared light readily penetrates the intact skull to probe cortical regions of the brain. The low-power light has no known side-effects and has been successfully utilized for a variety of clinical studies in infants, children, and adults. DCS measures fluctuations of scattered light due to moving targets in the tissue (mostly red blood cells). The technique measures relative changes in cerebral blood flow. NIRS uses the relative transmission of different colors of light to detect hemoglobin concentration changes in the interrogated tissues. Though there are numerous diagnostic tools available to study brain activity, including positron emission tomography (PET), magnetic resonance imaging (MRI), and magnetoencephalography (MEG), these methods are often invasive and/or expensive to administer. In the particular case of electroencephalography (EEG), its low spatial resolution is a significant limitation for investigations of verbal fluency. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 5: The Sensorimotor System
Link ID: 18426 - Posted: 07.30.2013

By Michelle Warwicker BBC Nature Individual wild wolves can be recognised by just their howls with 100% accuracy, a study has shown. The team from Nottingham Trent University, UK, developed a computer program to analyse the vocal signatures of eastern grey wolves. Wolves roam huge home ranges, making it difficult for conservationists to track them visually. But the technology could provide a way for experts to monitor individual wolves by sound alone. "Wolves howl a lot in the wild," said PhD student Holly Root-Gutteridge, who led the research. "Now we can be sure... exactly which wolf it is that's howling." The team's findings are published in the journal Bioacoustics. Wolves use their distinctive calls to protect territory from rivals and to call to other pack members. "They enjoy it as a group activity," said Ms Root-Gutteridge, "When you get a chorus howl going they all join in." The team's computer program is unique because it analyses both volume (or amplitude) and pitch (or frequency) of wolf howls, whereas previously scientists had only examined the animals' pitch. "Think of [pitch] as the note the wolf is singing," explained Ms Root-Gutteridge. "What we've added now is the amplitude - or volume - which is basically how loud it's singing at different times." "It's a bit like language: If you put the stress in different places you form a different sound." BBC © 2013
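
As a rough illustration of what analysing both pitch and volume can look like in practice, here is a minimal Python sketch. It is not the Nottingham Trent program: the numpy and librosa libraries, the file names, the 150-1000 Hz pitch range, the fixed-length resampling and the nearest-neighbour matching are all assumptions made for the example. The sketch extracts a pitch contour and an amplitude envelope from each recording, joins them into one signature vector, and assigns an unknown howl to the closest labelled example.

    # Illustrative sketch only -- not the study's software. Assumes numpy and
    # librosa are installed and that the .wav files named below exist.
    import numpy as np
    import librosa

    def howl_signature(path, n_points=100):
        """Fixed-length vector: [pitch contour | amplitude envelope]."""
        y, sr = librosa.load(path, sr=None)
        # Pitch (fundamental frequency) track; 150-1000 Hz is an assumed
        # range for wolf howls, chosen only for this example.
        f0, _, _ = librosa.pyin(y, fmin=150, fmax=1000, sr=sr)
        f0 = np.nan_to_num(f0, nan=0.0)          # unvoiced frames -> 0
        # Amplitude (volume) per analysis frame.
        rms = librosa.feature.rms(y=y)[0]
        # Resample both tracks to a common length so howls of different
        # durations can be compared point for point.
        grid = np.linspace(0.0, 1.0, n_points)
        pitch = np.interp(grid, np.linspace(0.0, 1.0, len(f0)), f0)
        volume = np.interp(grid, np.linspace(0.0, 1.0, len(rms)), rms)
        # Scale each half so neither dominates the distance measure.
        pitch /= pitch.max() or 1.0
        volume /= volume.max() or 1.0
        return np.concatenate([pitch, volume])

    def identify(unknown_path, labelled_howls):
        """Nearest-neighbour match of an unknown howl against labelled ones."""
        query = howl_signature(unknown_path)
        best = min(labelled_howls,
                   key=lambda item: np.linalg.norm(query - howl_signature(item[0])))
        return best[1]

    # Hypothetical usage with invented file names:
    # wolves = [("wolf_a_howl1.wav", "Wolf A"), ("wolf_b_howl1.wav", "Wolf B")]
    # print(identify("unknown_howl.wav", wolves))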

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18401 - Posted: 07.23.2013

By Rebecca Morelle Science reporter, BBC World Service Scientists have found further evidence that dolphins call each other by "name". Research has revealed that the marine mammals use a unique whistle to identify each other. A team from the University of St Andrews in Scotland found that when the animals hear their own call played back to them, they respond. The study is published in the Proceedings of the National Academy of Sciences. Dr Vincent Janik, from the university's Sea Mammal Research Unit, said: "(Dolphins) live in this three-dimensional environment, offshore without any kind of landmarks and they need to stay together as a group. "These animals live in an environment where they need a very efficient system to stay in touch." It had long been suspected that dolphins use distinctive whistles in much the same way that humans use names. Previous research found that these calls were used frequently, and dolphins in the same groups were able to learn and copy the unusual sounds. But this is the first time that the animals' response to being addressed by their "name" has been studied. To investigate, researchers recorded a group of wild bottlenose dolphins, capturing each animal's signature sound. BBC © 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18400 - Posted: 07.23.2013