Links for Keyword: Language

Links 121 - 140 of 689

By Daniel Barron After prepping for the day’s cases, “Mike Brennan,” a 63-year-old cardiology technician, sat down for his morning coffee and paper. On the front page, he discovered something troubling: he could no longer read. No matter how long he stared at a word, its meaning was lost on him. With a history of smoking and hypertension, he worried that he might have had a stroke. So, leaving his coffee, he walked himself down the hall to the emergency department, where neurologists performed a battery of tests to tease out what had happened. Mike still recognized individual letters and, with great difficulty, could sound out small words. But even some simple vocabulary presented problems; for example, he read “desk” as “dish” or “flame” as “thame.” Function words such as prepositions and pronouns gave him particular trouble. Mike couldn’t read, but there was nothing wrong with his eyes. Spoken words were no problem. He could recognize colors, faces, and objects. He could speak, move, think and even write normally. Mike had “pure alexia,” meaning he could not read but showed no other impairments. An M.R.I. scan of Mike’s brain revealed a pea-sized stroke in his left inferior occipitotemporal cortex, a region on the brain’s surface just behind the left ear. © 2016 Scientific American

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22586 - Posted: 08.23.2016

By NICHOLAS ST. FLEUR Orangutan hear, orangutan do. Researchers at the Indianapolis Zoo observed an orangutan mimic the pitch and tone of human sounds for the first time. The finding, which was published Wednesday, provides insight into the evolutionary origin of human speech, the team said. “It really redefines for us what we know about the capabilities of orangutans,” said Rob Shumaker, director of the zoo and an author on the paper. “What we have to consider now is the possibility that the origins of spoken language are not exclusively human, and that they may have come from great apes.” Rocky, an 11-year-old orangutan at the zoo, has a special ability. He can make sounds using his vocal folds, or voice box, that resemble the vowel “A,” and sound like “Ah.” The noises, or “wookies” as the researchers called them, are variations of the same vocalization. Sometimes the great ape would say high-pitched “wookies” and sometimes he would say his “Ahs” in a lower pitch. The researchers note that the sounds are specific to Rocky and ones that he used every day. No other orangutan, captive or wild, made these noises. Rocky, who had never lived in the rain forest, apparently learned the skill during his time as an entertainment orangutan before coming to the zoo. He was at one point the most seen orangutan in movies and commercials, according to the zoo. The researchers said that Rocky’s grunts show that great apes have the capacity to learn to control their muscles to deliberately alter their sounds in a “conversational” manner. The findings, which were published in the journal Scientific Reports, challenge the notion that orangutans — an endangered species that shares about 97 percent of its DNA with humans — make noises simply in response to something, sort of like how you might scream when you place your hand on a hot stove. © 2016 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22495 - Posted: 07.30.2016

An orangutan copying sounds made by researchers offers new clues to how human speech evolved, scientists say. Rocky mimicked more than 500 vowel-like noises, suggesting an ability to control his voice and make new sounds. It had been thought these great apes were unable to do this and, since human speech is a learned behaviour, it could not have originated from them. Study lead Dr Adriano Lameira said this "notion" could now be thrown "into the trash can". Dr Lameira, who conducted the research at Amsterdam University prior to joining Durham University, said Rocky's responses had been "extremely accurate". The team wanted to make sure the ape produced a new call, rather than adapting a "normal orangutan call with a personal twist" or matching sounds randomly or by coincidence, he said. The new evidence sets the "start line for scientific inquiry at a higher level", he said. "Ultimately, we should be now in a better position to think of how the different pieces of the puzzle of speech evolution fit together." The calls Rocky made were different from those collected in a large database of recordings, showing he was able to learn and produce new sounds rather than just match those already in his "vocabulary". In a previous study Dr Lameira found a female orangutan at Cologne Zoo in Germany was able to make sounds with a similar pace and rhythm to human speech. Researchers were "astounded" by Tilda's vocal skills but could not prove they had been learned, he said. However, the fact that "other orangutans seem to be exhibiting equivalent vocal skills shows that Rocky is not a bizarre or abnormal individual", Dr Lameira said. © 2016 BBC.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22482 - Posted: 07.27.2016

By Anthea Rowan The neurologist does not cushion his words. He tells us how it is: “She won’t read again.” I am standing behind my mother. I feel her stiffen. We do not talk of this revelation for days — and when we do, we do it in the garden of the rehab facility where she is recovering from a stroke. The stroke has scattered her memory, but she has not forgotten she will apparently not read again. I was shocked by what the doctor said, she confides. Me, too. Do you believe him? she asks. No — I am emphatic, for her and for me — I don’t. Mum smiles: “Me neither.” The damage wreaked by Mum’s stroke leaked across her brain, set up roadblocks so that the cerebral circuit board fizzes and pops uselessly, with messages no longer neatly passing from “A” to “B.” I tell the neuro: “I thought they’d learn to go via ‘D’ or ‘W.’ Isn’t that what’s supposed to happen — messages reroute?” “Unlikely,” he responds. “In your mother’s case.” Alexia — the loss of the ability to read — is common after strokes, especially, as in my mother’s case, when damage is wrought in the brain’s occipital lobe, which processes visual information. Pure alexia, which is Mum’s diagnosis, is much more rare: She can still write and touch-type, but bizarrely, she cannot read.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22392 - Posted: 07.04.2016

By Karin Brulliard Think about how most people talk to babies: Slowly, simply, repetitively, and with an exaggerated tone. It’s one way children learn the uses and meanings of language. Now scientists have found that some adult birds do that when singing to chicks — and it helps the baby birds better learn their song. The subjects of the new study, published last week in the journal Proceedings of the National Academy of Sciences, were zebra finches. They’re good for this because they breed well in a lab environment, and “they’re just really great singers. They sing all the time,” said McGill University biologist and co-author Jon Sakata. The males, he means — they’re the singers, and they do it for fun and when courting ladies, as well as around baby birds. Never mind that their melody is more “tinny,” according to Sakata, than pretty. Birds in general are helpful for vocal acquisition studies because they, like humans, are among the few species that actually have to learn how to make their sounds, Sakata said. Cats, for example, are born knowing how to meow. But just as people pick up speech and bats learn their calls, birds also have to figure out how to sing their special songs. Sakata and his colleagues were interested in how social interactions between adult zebra finches and chicks influence that learning process. Is face-to-face — or, as it may be, beak-to-beak — learning better? Does simply hearing an adult sing work as well as watching it do so? Do daydreaming baby birds learn as well as their more focused peers? © 1996-2016 The Washington Post

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22286 - Posted: 06.06.2016

By RUSSELL GOLDMAN There’s an elephant at a zoo outside Seoul that speaks Korean. — You mean, it understands some Korean commands, the way a dog can be trained to understand “sit” or “stay”? No, I mean it can actually say Korean words out loud. — Pics or it didn’t happen. Here, watch the video. To be fair, the elephant, a 26-year-old Asian male named Koshik, doesn’t really speak Korean, any more than a parrot can speak Korean (or English or Klingon). But parrots are supposed to, well, parrot — and elephants are not. And Koshik knows how to say at least five Korean words, which are about five more than I do. The really amazing part is how he does it. Koshik places his trunk inside his mouth and uses it to modulate the tone and pitch of the sounds his voice makes, a bit like a person putting his fingers in his mouth to whistle. In this way, Koshik is able to emulate human speech “in such detail that Korean native speakers can readily understand and transcribe the imitations,” according to the journal Current Biology. What’s in his vocabulary? Things he hears all the time from his keepers: the Korean words for hello, sit down, lie down, good and no. Lest you think this is just another circus trick that any Jumbo, Dumbo or Babar could pull off, the team of international scientists who wrote the journal article say Koshik’s skills represent “a wholly novel method of vocal production and formant control in this or any other species.” Like many innovations, Koshik’s may have been born of sad necessity. Researchers say he started to imitate his keepers’ sounds only after he was separated from other elephants at the age of 5 — and that his desire to speak like a human arose from sheer loneliness. © 2016 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22253 - Posted: 05.26.2016

By BENEDICT CAREY Listening to music may make the daily commute tolerable, but streaming a story through the headphones can make it disappear. You were home; now you’re at your desk: What happened? Storytelling happened, and now scientists have mapped the experience of listening to podcasts, specifically “The Moth Radio Hour,” using a scanner to track brain activity. In a paper published Wednesday by the journal Nature, a research team from the University of California, Berkeley, laid out a detailed map of the brain as it absorbed and responded to a story. Widely dispersed sensory, emotional and memory networks were humming, across both hemispheres of the brain; no story was “contained” in any one part of the brain, as some textbooks have suggested. The team, led by Alexander Huth, a postdoctoral researcher in neuroscience, and Jack Gallant, a professor of psychology, had seven volunteers listen to episodes of “The Moth” — first-person stories of love, loss, betrayal, flight from an abusive husband, and more — while recording brain activity with an M.R.I. machine. Using novel computational methods, the group broke down the stories into units of meaning: social elements, for example, like friends and parties, as well as locations and emotions. They found that these concepts fell into 12 categories that tended to cause activation in the same parts of people’s brains at the same points throughout the stories. They then retested that model by seeing how it predicted M.R.I. activity while the volunteers listened to another Moth story. Would related words like mother and father, or times, dates and numbers trigger the same parts of people’s brains? The answer was yes. © 2016 The New York Times Company
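The procedure sketched in this story, fitting a model that maps semantic features of stories to brain responses and then testing its predictions on a held-out story, is an instance of a voxel-wise encoding model. The snippet below is a minimal illustration of that general idea, not the authors' actual pipeline: the feature matrices and voxel responses are synthetic stand-ins, and standard NumPy/scikit-learn calls do the fitting.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: semantic features (e.g., 12 concept categories) for each
# time point of a training story and a held-out test story, plus simulated
# responses for 50 voxels. None of these numbers come from the study.
n_train, n_test, n_features, n_voxels = 300, 100, 12, 50
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))
true_w = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ true_w + 0.5 * rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ true_w + 0.5 * rng.standard_normal((n_test, n_voxels))

# Fit one regularized linear map from semantic features to every voxel's
# response (Ridge handles multi-output regression directly).
model = Ridge(alpha=1.0).fit(X_train, Y_train)

# Validate on the held-out story: correlate predicted and observed time
# courses voxel by voxel.
Y_pred = model.predict(X_test)
voxel_r = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median prediction correlation across voxels: {np.median(voxel_r):.2f}")
```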

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 22162 - Posted: 04.30.2016

By Andy Coghlan “I’ve become resigned to speaking like this,” he says. The 17-year-old boy’s mother tongue is Dutch, but for his whole life he has spoken with what sounds like a French accent. “This is who I am and it’s part of my personality,” says the boy, who lives in Belgium – where Dutch is an official language – and prefers to remain anonymous. “It has made me stand out as a person.” No matter how hard he tries, his speech sounds French. About 140 cases of foreign accent syndrome (FAS) have been described in scientific studies, but most of these people developed the condition after having a stroke. In the UK, for example, a woman in Newcastle who’d had a stroke in 2006 woke up with a Jamaican accent. Other British cases include a woman who developed a Chinese accent, and another who acquired a pronounced French-like accent overnight following a bout of cerebral vasculitis. But the teenager has had the condition from birth, sparking the interest of Jo Verhoeven of City University London and his team. Scans revealed that, compared with controls, the flow of blood to two parts of the boy’s brain was significantly reduced. One of these was the prefrontal cortex of the left hemisphere – a finding unsurprising to the team, as it is known to be associated with planning actions including speech. © Copyright Reed Business Information Ltd.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22161 - Posted: 04.30.2016

Ian Sample Science editor Scientists have created an “atlas of the brain” that reveals how the meanings of words are arranged across different regions of the organ. Like a colourful quilt laid over the cortex, the atlas displays in rainbow hues how individual words and the concepts they convey can be grouped together in clumps of cortex. “Our goal was to build a giant atlas that shows how one specific aspect of language is represented in the brain, in this case semantics, or the meanings of words,” said Jack Gallant, a neuroscientist at the University of California, Berkeley. No single brain region holds one word or concept. A single brain spot is associated with a number of related words. And each single word lights up many different brain spots. Together they make up networks that represent the meanings of each word we use: life and love; death and taxes; clouds, Florida and bra. All light up their own networks. Described as a “tour de force” by one researcher who was not involved in the study, the atlas demonstrates how modern imaging can transform our knowledge of how the brain performs some of its most important tasks. With further advances, the technology could have a profound impact on medicine and other fields. “It is possible that this approach could be used to decode information about what words a person is hearing, reading, or possibly even thinking,” said Alexander Huth, the first author on the study. One potential use would be a language decoder that could allow people silenced by motor neurone disease or locked-in syndrome to speak through a computer. © 2016 Guardian News and Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 22157 - Posted: 04.28.2016

Cassie Martin The grunts, moans and wobbles of gelada monkeys, a chatty species residing in Ethiopia’s northern highlands, observe a universal mathematical principle seen until now only in human language. The new research, published online April 18 in the Proceedings of the National Academy of Sciences, sheds light on the evolution of primate communication and complex human language, the researchers say. “Human language is like an onion,” says Simone Pika, head of the Humboldt Research Group at the Max Planck Institute for Ornithology in Seewiesen, Germany, who was not involved in the study. “When you peel back the layers, you find that it is based on these underlying mechanisms, many of which were already present in animal communication. This research neatly shows there is another ability already present.” One of those mechanisms is known as Menzerath’s law, a mathematical principle that states that the longer a construct, the shorter its components. In human language, for instance, longer sentences tend to comprise shorter words. In geladas, as the number of individual calls in a vocal sequence increases, the duration of the calls tends to decrease. The gelada study is the first to observe this law in the vocalizations of a nonhuman species. “There are aspects of communication and language that aren’t as unique as we think,” says study coauthor Morgan Gustison of the University of Michigan in Ann Arbor. © Society for Science & the Public 2000 - 2016
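A rough way to make Menzerath's law concrete is to correlate the number of calls in each vocal sequence with the mean duration of those calls; a negative slope is what the law predicts. The sketch below uses invented numbers purely for illustration, not measurements from the gelada study.

```python
import numpy as np

# Invented toy data: for each vocal sequence, the number of calls it contains
# and the mean duration (seconds) of those calls. Illustrative values only.
n_calls = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 12])
mean_call_dur = np.array([0.62, 0.58, 0.55, 0.51, 0.49, 0.47, 0.46, 0.44, 0.43, 0.41])

# Menzerath's law predicts a negative relationship: sequences with more calls
# should be built from shorter calls on average.
slope, intercept = np.polyfit(n_calls, mean_call_dur, 1)
r = np.corrcoef(n_calls, mean_call_dur)[0, 1]

print(f"slope = {slope:.3f} s per additional call (negative is consistent with the law)")
print(f"Pearson r = {r:.2f}")
```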

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22131 - Posted: 04.23.2016

By Catherine Matacic Simi Etedgi leans forward as she tells her story for the camera: The year was 1963, and she was just 15 as she left Morocco for Israel, one person among hundreds of thousands leaving for the new state. But her forward lean isn’t a casual gesture. Etedgi, now 68, is one of about 10,000 signers of Israeli Sign Language (ISL), a language that emerged only 80 years ago. Her lean has a precise meaning, signaling that she wants to get in an aside before finishing her tale. Her eyes sparkle as she explains that the signs used in the Morocco of her childhood are very different from those she uses now in Israel. In fact, younger signers of ISL use a different gesture to signal an aside—and they have different ways to express many other meanings as well. A new study presented at the Evolution of Language meeting here last month shows that the new generation has come up with richer, more grammatically complex utterances that use ever more parts of the body for different purposes. Most intriguing for linguists: These changes seem to happen in a predictable order from one generation to the next. That same order has been seen in young sign languages around the world, showing in visible fashion how linguistic complexity unfolds. This leads some linguists to think that they may have found a new model for the evolution of language. “This is a big hypothesis,” says cognitive scientist Ann Senghas of Barnard College in New York City, who has spent her life studying Nicaraguan Sign Language (NSL). “It makes a lot of predictions and tries to pull a lot of facts together into a single framework.” Although it’s too early to know what the model will reveal, linguists say it already may have implications for understanding how quickly key elements of language, from complex words to grammar, have evolved. © 2016 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22130 - Posted: 04.23.2016

By JEFFREY M. ZACKS and REBECCA TREIMAN OUR favorite Woody Allen joke is the one about taking a speed-reading course. “I read ‘War and Peace’ in 20 minutes,” he says. “It’s about Russia.” The promise of speed reading — to absorb text several times faster than normal, without any significant loss of comprehension — can indeed seem too good to be true. Nonetheless, it has long been an aspiration for many readers, as well as the entrepreneurs seeking to serve them. And as the production rate for new reading matter has increased, and people read on a growing array of devices, the lure of speed reading has only grown stronger. The first popular speed-reading course, introduced in 1959 by Evelyn Wood, was predicated on the idea that reading was slow because it was inefficient. The course focused on teaching people to make fewer back-and-forth eye movements across the page, taking in more information with each glance. Today, apps like SpeedRead With Spritz aim to minimize eye movement even further by having a digital device present you with a stream of single words one after the other at a rapid rate. Unfortunately, the scientific consensus suggests that such enterprises should be viewed with suspicion. In a recent article in Psychological Science in the Public Interest, one of us (Professor Treiman) and colleagues reviewed the empirical literature on reading and concluded that it’s extremely unlikely you can greatly improve your reading speed without missing out on a lot of meaning. Certainly, readers are capable of rapidly scanning a text to find a specific word or piece of information, or to pick up a general idea of what the text is about. But this is skimming, not reading. We can definitely skim, and it may be that speed-reading systems help people skim better. Some speed-reading systems, for example, instruct people to focus only on the beginnings of paragraphs and chapters. This is probably a good skimming strategy. Participants in a 2009 experiment read essays that had half the words covered up — either the beginning of the essay, the end of the essay, or the beginning or end of each individual paragraph. Reading half-paragraphs led to better performance on a test of memory for the passage’s meaning than did reading only the first or second half of the text, and it worked as well as skimming under time pressure. © 2016 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 22113 - Posted: 04.18.2016

By David Shultz Mice supposedly don't speak, so they can't stutter. But by tinkering with a gene that appears to be involved in human speech, researchers have created transgenic mice whose pups produce altered vocalizations in a way that is similar to stuttering in humans. The mice could make a good model for understanding stuttering; they could also shed more light on how mutations in the gene, called Gnptab, cause the speech disorder. Stuttering is one of the most common speech disorders in the world, affecting nearly one out of 100 adults in the United States. But the cause of the stammering, fragmented speech patterns remains unclear. Several years ago, researchers discovered that stutterers often have mutations in a gene called Gnptab. Like a dispatcher directing garbage trucks, Gnptab encodes a protein that helps to direct enzymes into the lysosome—a compartment in animal cells that breaks down waste and recycles old cellular machinery. Mutations to other genes in this system are known to lead to the buildup of cellular waste products and often result in debilitating diseases, such as Tay-Sachs. How mutations in Gnptab cause stuttered speech remains a mystery, however. To get to the bottom of things, neuroscientist Terra Barnes and her team at Washington University in St. Louis in Missouri produced mice with a mutation in the Gnptab gene and studied whether it affected the ultrasonic vocalizations that newly born mouse pups emit when separated from their mothers. Determining whether a mouse is stuttering is no easy task; as Barnes points out, it can even be difficult to tell whether people are stuttering if they’re speaking a foreign language. So the team designed a computer program that listens for stuttering vocalization patterns independent of language. © 2016 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22110 - Posted: 04.16.2016

By Robin Wylie Bottlenose dolphins have been observed chattering while cooperating to solve a tricky puzzle – a feat that suggests they have a type of vocalisation dedicated to cooperating on problem solving. Holli Eskelinen of Dolphins Plus research institute in Florida and her colleagues at the University of Southern Mississippi presented a group of six captive dolphins with a locked canister filled with food. The canister could only be opened by simultaneously pulling on a rope at either end. The team conducted 24 canister trials, during which all six dolphins were present. Only two of the dolphins ever managed to crack the puzzle and get to the food. The successful pair was prolific, though: in 20 of the trials, the same two adult males worked together to open the food canister in a matter of minutes. In the other four trials, one of the dolphins managed to solve the problem on its own, but this was much trickier and took longer to execute. But the real surprise came from recordings of the vocalisations the dolphins made during the experiment. The team found that when the dolphins worked together to open the canister, they made around three times more vocalisations than when a dolphin opened the canister alone or when there was either no canister present or no interaction with the canister in the pool. © Copyright Reed Business Information Ltd.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22107 - Posted: 04.16.2016

By Catherine Matacic How does sign language develop? A new study shows that it takes less than five generations for people to go from simple, unconventional pantomimes—essentially telling a story with your hands—to stable signs. Researchers asked a group of volunteers to invent their own signs for a set of 24 words in four separate categories: people, locations, objects, and actions. Examples included “photographer,” “darkroom,” and “camera.” After an initial group made up the signs—pretending to shoot a picture with an old-fashioned camera for “photographer,” for example—they taught the signs to a new generation of learners. That generation then played a game where they tried to guess what sign another player in their group was making. When they got the answer right, they taught that sign to a new generation of volunteers. After a few generations, the volunteers stopped acting out the words with inconsistent gestures and started making them in ways that were more systematic and efficient. What’s more, they added markers for the four categories—pointing to themselves if the category were “person” or making the outline of a house if the category were “location,” for example—and they stopped repeating gestures, the researchers reported last month at the Evolution of Language conference in New Orleans, Louisiana. So in the video above, the first version of “photographer” is unpredictable and long, compared with the final version, which uses the person marker and takes just half the time. The researchers say their finding supports the work of researchers in the field, who have found similar patterns of development in newly emerging sign languages. The results also suggest that learning and social interaction are crucial to this development. © 2016 American Association for the Advancement of Science

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22084 - Posted: 04.09.2016

Laura Sanders NEW YORK — Lip-readers’ minds seem to “hear” the words their eyes see being formed. And the better a person is at lipreading, the more neural activity there is in the brain’s auditory cortex, scientists reported April 4 at the annual meeting of the Cognitive Neuroscience Society. Earlier studies have found that auditory brain areas are active during lipreading. But most of those studies focused on small bits of language — simple sentences or even single words, said study coauthor Satu Saalasti of Aalto University in Finland. In contrast, Saalasti and colleagues studied lipreading in more natural situations. Twenty-nine people read the silent lips of a person who spoke Finnish for eight minutes in a video. “We can all lip-read to some extent,” Saalasti said, and the participants, who had no lipreading experience, varied widely in their comprehension of the eight-minute story. In the best lip-readers, activity in the auditory cortex was quite similar to that evoked when the story was read aloud, brain scans revealed. The results suggest that lipreading success depends on a person’s ability to “hear” the words formed by moving lips, Saalasti said. Citation: J. Alho et al. Similar brain responses to lip-read, read and listened narratives. Cognitive Neuroscience Society annual meeting, New York City, April 4, 2016. © Society for Science & the Public 2000 - 2016.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22077 - Posted: 04.07.2016

By Catherine Matacic Twenty-three years ago, a bonobo named Kanzi aced a test in understanding human language. But a new study reveals he may not be as brainy as scientists thought—at least when it comes to grammar. The original test consisted of 660 verbal commands, in English, that asked Kanzi to do things like "show me the hot water" and "pour cold water in the potty." Overall, the ape did well, responding correctly 71.5% of the time (compared with 66.6% for an infant human). But when the researchers asked him to perform an action on more than one item, his performance plummeted to just 22.2%, according to the new analysis. When he was asked to "give the lighter and the shoe to Rose," for example, he gave Rose the lighter, but no shoe. When asked to "give the water and the doggie to Rose," he gave her the toy dog, but no water. The cause? Animals like bonobos may have a harder time than humans in processing complex noun phrases like “water and doggie,” linguist Robert Truswell of the University of Edinburgh reported in New Orleans, Louisiana, this week at the Evolution of Language conference. This feature of grammar—which effectively “nests” one unit within the bigger construct of a sentence—is easily picked up by humans, allowing us to communicate—and understand—more complex ideas. But Truswell cautions that humans probably aren’t born with the ability to interpret this kind of nesting structure. Instead, we must be taught how to use it. © 2016 American Association for the Advancement of Science
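To see what "nesting" means here: in a phrase-structure analysis, "the water and the doggie" is a single coordinated noun phrase embedded inside the larger command. The toy grammar below is an invented illustration (it is not taken from the study) that uses NLTK to parse one of the test commands and print a tree in which the coordinated noun phrase shows up as one nested unit.

```python
import nltk

# A tiny, invented grammar in which a coordinated noun phrase
# ("the water and the doggie") nests inside the imperative sentence.
grammar = nltk.CFG.fromstring("""
S    -> V NP PP
NP   -> NP CONJ NP | DET N
PP   -> P NAME
V    -> 'give'
DET  -> 'the'
N    -> 'water' | 'doggie'
CONJ -> 'and'
P    -> 'to'
NAME -> 'Rose'
""")

parser = nltk.ChartParser(grammar)
tokens = "give the water and the doggie to Rose".split()
for tree in parser.parse(tokens):
    tree.pretty_print()  # the coordinated NP appears as one nested constituent
```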

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 22031 - Posted: 03.26.2016

Cathleen O'Grady When we speak, listen, read, or write, almost all of the language processing that happens in our brains goes on below the level of conscious awareness. We might be aware of grasping for a particular forgotten word, but we don’t actively think about linguistic concepts like morphemes (the building blocks of words, like the past tense morpheme “-ed”). Psycholinguists try to delve under the surface to figure out what’s actually going on in the brain, and how well this matches up with our theoretical ideas of how languages fit together. For instance, linguists talk about morphemes like “-ed”, but do our brains actually work with morphemes when we’re producing or interpreting language? That is, do theoretical linguistic concepts have any psychological reality? An upcoming paper in the journal Cognition suggests an unusual way to investigate this: by testing synaesthetes. Synaesthesia comes in many forms. Some synaesthetes associate musical tones or notes with particular colours; others attach personalities to letters or numbers. A huge number of synaesthetes have associations that are in some way linguistic, and one of the most common forms of all is grapheme-colour (GC) synaesthesia, which is the association of colours with particular letters or numbers. For instance, a GC synaesthete might have a consistent perception of the letter “A” being red. This association often extends to a whole word, so “ant” might be red, too. © 2016 Guardian News and Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 5: The Sensorimotor System
Link ID: 21937 - Posted: 02.27.2016

By Katy Waldman On May 10, 1915, renowned poet-cum-cranky-recluse Robert Frost gave a lecture to a group of schoolboys in Cambridge, Massachusetts. “Sounds in the mouths of men,” he told his audience, “I have found to be the basis of all effective expression.” Frost spent his career courting “the imagining ear”—that faculty of the reader that assigns to each sentence a melodic shape, one captured from life and tailored to a specific emotion. In letters and interviews, he’d use the example of “two people who are talking on the other side of a closed door, whose voices can be heard but whose words cannot be distinguished. Even though the words do not carry, the sound of them does, and the listener can catch the meaning of the conversation. This is because every meaning has a particular sound-posture.” Frost’s preoccupation with the music of speech—with what we might call “tone of voice,” or the rise and fall of vocal pitch, intensity, and duration—has become a scientific field. Frost once wrote his friend John Freeman that this quality “is the unbroken flow on which [the semantic meanings of words] are carried along like sticks and leaves and flowers.” Neuroimaging bears him out, revealing that our brains process speech tempo, intonation, and dynamics more quickly than they do linguistic content. (Which shouldn’t come as a huge surprise: We vocalized at each other for millions of years before inventing symbolic language.) Psychologists distinguish between the verbal channel—which uses word definitions to deliver meaning—and the vocal channel—which conveys emotion through subtle aural cues. The embedding of feelings in speech is called “emotional prosody,” and it’s no accident that the term prosody (“patterns of rhythm or sound”) originally belonged to poetry, which seeks multiple avenues of communication, direct and indirect. Frost believed that you could reverse-engineer vocal tones into written language, ordering words in ways that stimulated the imagining ear to hear precise slants of pitch. He went so far as to propose that sentences are “a notation for indicating tones of voice,” which “fly round” like “living things.”

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21832 - Posted: 01.28.2016

Bruce Bower Youngsters befuddled by printed squiggles on the pages of a storybook nonetheless understand that a written word, unlike a drawing, stands for a specific spoken word, say psychologist Rebecca Treiman of Washington University in St. Louis and her colleagues. Children as young as 3 can be tested for a budding understanding of writing’s symbolic meaning, the researchers conclude January 6 in Child Development. “Our results show that young children have surprisingly advanced knowledge about the fundamental properties of writing,” Treiman says. “This knowledge isn’t explicitly taught to children but probably gained through early exposure to print from sources such as books and computers.” Researchers and theorists have previously proposed that children who cannot yet read don’t realize that a written word corresponds to a particular spoken word. Studies have found, for instance, that nonliterate 3- to 5-year-olds often assign different meanings to the same word, such as girl, depending on whether that word appears under a picture of a girl or a cup. Treiman’s investigation “is the first to show that kids as young as 3 have the insight that print stands for something beyond what’s scripted on the page,” says psychologist Kathy Hirsh-Pasek of Temple University in Philadelphia. Preschoolers who are regularly read to have an advantage in learning that written words have specific meanings, suspects psychologist Roberta Golinkoff of the University of Delaware in Newark. © Society for Science & the Public 2000 - 2015.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 21761 - Posted: 01.08.2016