Links for Keyword: Language



Links 81 - 100 of 689

by Alex Horton Michelle Myers's accent is global, but she has never left the country. The Arizona woman says she has gone to bed with extreme headaches in the past and woken up speaking with what sounds like a foreign accent. At various points, Australian and Irish accents have inexplicably flowed from her mouth for about two weeks, then disappeared, Myers says. But a British accent has lingered for two years, the 45-year-old told ABC affiliate KNXV. And one particular person seems to come to mind when she speaks. “Everybody only sees or hears Mary Poppins,” Myers told the station. Myers says she has been diagnosed with Foreign Accent Syndrome. The disorder typically occurs after strokes or traumatic brain injuries damage the language center of a person's brain — to the degree that their native language sounds like it is tinged with a foreign accent, according to the Center for Communication Disorders at the University of Texas at Dallas. In some instances, speakers warp the typical rhythm of their language and the stress of certain syllables. Affected people may also cut out articles such as “the” and drop letters, turning an American “yeah” into a Scandinavian “yah,” for instance. Sheila Blumstein, a Brown University linguist who has written extensively on FAS, said sufferers typically produce grammatically correct language, unlike many stroke or brain-injury victims. She told The Washington Post as much for a 2010 article about a Virginia woman who fell down a stairwell, rattled her brain and awoke speaking with a Russian-like accent. © 1996-2018 The Washington Post

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 24652 - Posted: 02.13.2018

Rachel Hoge I’ve heard the misconceptions for most of my life. “Just slow down,” a stranger told me as a child. “You’re talking too fast – that’s why you stutter!” Later on, as my stutter carried on into adolescence and adulthood, strangers and loved ones alike offered up their own judgments of my speech – usually incorrect. Some have good intentions when it comes to sharing their opinions about my stutter. Others ... not so much. But everyone shares one defining characteristic: they’re misinformed. Stuttering is a communication and disfluency disorder where the flow of speech is interrupted. Though all speakers will experience a small amount of disfluency while speaking, a person who stutters (PWS) experiences disfluency more noticeably, generally stuttering on at least 10% of their words. There are approximately 70 million people who stutter worldwide, which is about 1% of the population. Stuttering usually begins in childhood between the ages of two and five, with about 5% of all children experiencing a period of stuttering that lasts six months or more. Three-quarters of children who stutter will recover by late childhood, but those who don’t may develop a lifelong condition. The male-to-female ratio of people who stutter is four to one, meaning there is a clear gender discrepancy that scientists are still attempting to understand. The severity of a stutter can vary greatly. The way it manifests can also differ, depending on the individual. Certain sounds and syllables can produce repetitions (re-re-re-repetitions), prolongations (ppppppprolongations), and/or abnormal stoppages (no sound). © 2018 Guardian News and Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 24647 - Posted: 02.12.2018

By Katarina Zimmer Scientists can trace the evolutionary histories of bats and humans back to a common ancestor that lived some tens of millions of years ago. And on the surface, those years of evolutionary divergence have separated us from the winged mammals in every way possible. But look on a sociobehavioral level, as some bat researchers are doing, and the two animal groups share much more than meets the eye. Like humans, bats form huge congregations of up to millions of individuals at a time. On a smaller scale, they form intimate social bonds with one another. And recently, scientists have suggested that bats are capable of vocal learning—the ability to modify vocalizations after hearing sounds. Researchers long considered this skill to be practiced only by humans, songbirds, and cetaceans, but have more recently identified examples of vocal learning in seals, sea lions, elephants—and now, bats. In humans, vocal learning can take the form of adopting styles of speech—for example, if a Brit were to pick up an Australian accent after moving down under. Yossi Yovel, a physicist turned bat biologist at Tel Aviv University who has long been fascinated by animal behavior, recently demonstrated that bat pups can acquire “dialects” in a similar way. © 1986-2018 The Scientist

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 24529 - Posted: 01.16.2018

By James Hartzell A hundred dhoti-clad young men sat cross-legged on the floor in facing rows, chatting amongst themselves. At a sign from their teacher the hall went quiet. Then they began the recitation. Without pause or error, entirely from memory, one side of the room intoned one line of the text, then the other side of the room answered with the next line. Bass and baritone voices filled the hall with sonorous prosody, every word distinctly heard, their right arms moving together to mark pitch and accent. The effect was hypnotic, ancient sound reverberating through the room, saturating brain and body. After 20 minutes they halted, in unison. It was just a demonstration. The full recitation of one of India’s most ancient Sanskrit texts, the Shukla Yajurveda, takes six hours. I spent many years studying and translating Sanskrit, and became fascinated by its apparent impact on mind and memory. In India's ancient learning methods textual memorization is standard: traditional scholars, or pandits, master many different types of Sanskrit poetry and prose texts; and the tradition holds that exactly memorizing and reciting the ancient words and phrases, known as mantras, enhances both memory and thinking. I had also noticed that the more Sanskrit I studied and translated, the better my verbal memory seemed to become. Fellow students and teachers often remarked on my ability to exactly repeat lecturers’ own sentences when asking them questions in class. Other translators of Sanskrit told me of similar cognitive shifts. So I was curious: was there actually a language-specific “Sanskrit effect” as claimed by the tradition? © 2018 Scientific American

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 24479 - Posted: 01.03.2018

By DANIEL T. WILLINGHAM Americans are not good readers. Many blame the ubiquity of digital media. We’re too busy on Snapchat to read, or perhaps internet skimming has made us incapable of reading serious prose. But Americans’ trouble with reading predates digital technologies. The problem is not bad reading habits engendered by smartphones, but bad education habits engendered by a misunderstanding of how the mind reads. Just how bad is our reading problem? The last National Assessment of Adult Literacy from 2003 is a bit dated, but it offers a picture of Americans’ ability to read in everyday situations: using an almanac to find a particular fact, for example, or explaining the meaning of a metaphor used in a story. Of those who finished high school but did not continue their education, 13 percent could not perform simple tasks like these. When things got more complex — in comparing two newspaper editorials with different interpretations of scientific evidence or examining a table to evaluate credit card offers — 95 percent failed. There’s no reason to think things have gotten better. Scores for high school seniors on the National Assessment of Education Progress reading test haven’t improved in 30 years. Many of these poor readers can sound out words from print, so in that sense, they can read. Yet they are functionally illiterate — they comprehend very little of what they can sound out. So what does comprehension require? Broad vocabulary, obviously. Equally important, but more subtle, is the role played by factual knowledge. All prose has factual gaps that must be filled by the reader. Consider “I promised not to play with it, but Mom still wouldn’t let me bring my Rubik’s Cube to the library.” The author has omitted three facts vital to comprehension: you must be quiet in a library; Rubik’s Cubes make noise; kids don’t resist tempting toys very well. If you don’t know these facts, you might understand the literal meaning of the sentence, but you’ll miss why Mom forbade the toy in the library. Knowledge also provides context. For example, the literal meaning of last year’s celebrated fake-news headline, “Pope Francis Shocks World, Endorses Donald Trump for President,” is unambiguous — no gap-filling is needed. But the sentence carries a different implication if you know anything about the public (and private) positions of the men involved, or you’re aware that no pope has ever endorsed a presidential candidate. © 2017 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 24363 - Posted: 11.26.2017

Laura Sanders Around the six-month mark, babies start to get really fun. They’re not walking or talking, but they are probably babbling, grabbing and gumming, and teaching us about their likes and dislikes. I remember this as the time when my girls’ personalities really started making themselves known, which, really, is one of the best parts of raising a kid. After months of staring at those beautiful, bald heads, you start to get a glimpse of what’s going on inside them. When it comes to learning language, it turns out that a lot has already happened inside those baby domes by age 6 months. A new study finds that babies this age understand quite a bit about words — in particular, the relationships between nouns. Work in toddlers, and even adults, reveals that people can struggle with word meanings under difficult circumstances. We might briefly falter with “shoe” when an image of a shoe is shown next to a boot, for instance, but not when the shoe appears next to a hat. But researchers wanted to know how early these sorts of word relationships form. Psychologists Elika Bergelson of Duke University and Richard Aslin, formerly of the University of Rochester in New York and now at Haskins Laboratories in New Haven, Conn., put 51 6-month-olds to a similar test. Outfitted with eye-tracking gear, the babies sat on a parent’s lap and looked at a video screen that showed pairs of common objects. Sometimes the images were closely related: mouth and nose, for instance, or bottle and spoon. Other pairs were unrelated: blanket and dog, or juice and car. © Society for Science and the Public

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 4: Development of the Brain; Chapter 15: Language and Lateralization
Link ID: 24343 - Posted: 11.21.2017

By Rachel Hoge I was waiting in line at my bank’s drive-up service, hoping to make a quick withdrawal. I debated my options: two vacant service lines and one busy one for the ATM. The decision was easy: Wait in the line and deal with a machine. I have a speech disability — a stutter — and interactions with strangers have the potential to be, at the very least, extremely awkward; at worst, I have been mocked, insulted, misjudged or refused service. I avoid interacting with new people, fearful of their judgment. Using the ATM offered me more than just convenience. But the ATM, I soon discovered, was going down for maintenance. I could either leave, returning on a day when the machine was back in service, or speak with a bank teller. Once again, I debated my options. I needed the cash and I was feeling optimistic, so I pulled into the service line. I quickly rehearsed all acceptable variations of what I had to say: I need to withdraw some money from my checking account. Or maybe, to use fewer words: Could I have a withdrawal slip? Or straight to the point: Withdraw, please. I pulled my car forward. Glancing at the teller, I took a deep breath and managed to blurt out: “Can I ppppplease make a wi-wi-with-with-withdrawal?” The teller smiled on the other side of the glass. “Sure,” she said. I wasn’t sure if she had noticed my stutter or simply believed my repetitions (rep-rep-repetitions) and prolongations (ppppprolongations) were just indications of being tongue-tied rather than manifestations of a persistent stutter. I eased back in my seat, trying to relax. © 1996-2017 The Washington Post

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 24296 - Posted: 11.06.2017

By Elizabeth Svoboda When Gerald Shea was 6 years old, a bout of scarlet fever left him partially deaf, though he was not formally diagnosed until turning 34. His disability left him in a liminal space between silence and sound; he grew used to the fuzzed edges of words, the strain of parsing a language that no longer felt fully native. Years ago, he began combing historical records on deafness to lend context to his own experience. But his research turned up something unexpected: a centuries-long procession of leaders and educators who stifled the deaf by forcing them to conform to the ways of the hearing. That is the driving impetus behind “The Language of Light,” Shea’s history of deaf people’s ongoing quest to learn and communicate in signed languages. “Theirs is not an unplanned but a natural, visual poetry, at once both the speech and the music of the Deaf,” he writes. (He capitalizes the word to refer to people who consider themselves part of the deaf culture and community.) In conveying the unique cadence of this silent music — its intricate grammatical structure, its power to express an infinite array of ideas — Shea underscores the tragedy of its suppression. From the outset, he confronts us with a rogues’ gallery of those who suppressed it. During the Middle Ages, self-appointed therapists crammed hot coals into the mouths of deaf people, pierced their eardrums, and drilled holes into their skulls, all in a vain effort to force them to speak. In what was at the time Holland, Johann Conrad Amman moved the lips of his deaf charges into the shapes needed to make certain sounds, but the effort was largely fruitless because they could not hear the sounds they were making. The “silent voices” of Shea’s title has a double resonance: Not only did many deaf people remain literally mute from their disability; their potential to express their ideas fully through sign language went untapped. Copyright 2017 Undark

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 24283 - Posted: 11.03.2017

By Andy Coghlan How and when did we first become able to speak? A new analysis of our DNA reveals key evolutionary changes that reshaped our faces and larynxes, and which may have set the stage for complex speech. The alterations were not major mutations in our genes. Instead, they were tweaks in the activity of existing genes that we shared with our immediate ancestors. These changes in gene activity seem to have given us flat faces, by retracting the protruding chins of our ape ancestors. They also resculpted the larynx and moved it further down in the throat, allowing our ancestors to make sounds with greater subtleties. The study offers an unprecedented glimpse into how our faces and vocal tracts were altered at the genetic level, paving the way for the sophisticated speech we take for granted. However, other anthropologists say changes in the brain were at least equally important. It is also possible that earlier ancestors could speak, but in a more crude way, and that the facial changes simply took things up a notch. Liran Carmel of the Hebrew University of Jerusalem and his colleagues examined DNA from two modern-day people and four humans who lived within the last 50,000 years. They also looked at extinct hominins: two Neanderthals and a Denisovan. Finally, they looked at genetic material from six chimpanzees and data from public databases supplied by living people. © Copyright New Scientist Ltd.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 24004 - Posted: 08.28.2017

Jon Hamilton It's not just what you say that matters. It's how you say it. Take the phrase, "Here's Johnny." When Ed McMahon used it to introduce Johnny Carson on The Tonight Show, the words were an enthusiastic greeting. But in The Shining, Jack Nicholson used the same two words to convey murderous intent. Now scientists are reporting in the journal Science that they have identified specialized brain cells that help us understand what a speaker really means. These cells do this by keeping track of changes in the pitch of the voice. "We found that there were groups of neurons that were specialized and dedicated just for the processing of pitch," says Dr. Eddie Chang, a professor of neurological surgery at the University of California, San Francisco. Chang says these neurons allow the brain to detect "the melody of speech," or intonation, while other specialized brain cells identify vowels and consonants. "Intonation is about how we say things," Chang says. "It's important because we can change the meaning, even — without actually changing the words themselves." For example, by raising the pitch of our voice at the end of a sentence, a statement can become a question. The identification of neurons that detect changes in pitch was largely the work of Claire Tang, a graduate student in Chang's lab and the Science paper's lead author. Tang and a team of researchers studied the brains of 10 epilepsy patients awaiting surgery. The patients had electrodes placed temporarily on the surface of their brains to help surgeons identify the source of their seizures. © 2017 npr
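The pitch these neurons track is just the fundamental frequency of the voice over time. As a rough illustration of that acoustic signal (a minimal sketch, not the study's method; the function, parameter values and synthetic signals below are invented for the example), here is a short Python snippet that estimates a pitch contour by autocorrelation and shows how a rising contour, the pattern that can turn a statement into a question, appears in the numbers:

import numpy as np

def pitch_contour(signal, sr, frame_len=2048, hop=512, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) of each frame via autocorrelation."""
    min_lag = int(sr / fmax)              # shortest lag considered (highest pitch)
    max_lag = int(sr / fmin)              # longest lag considered (lowest pitch)
    f0 = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]   # lags >= 0
        best = min_lag + np.argmax(ac[min_lag:max_lag])                # strongest periodicity
        f0.append(sr / best)
    return np.array(f0)

# Synthetic "statement" (flat 120 Hz) vs. "question" (pitch rising from 120 to ~180 Hz).
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
statement = np.sin(2 * np.pi * 120 * t)
question = np.sin(2 * np.pi * (120 * t + 30 * t ** 2))   # instantaneous pitch = 120 + 60*t Hz
print(pitch_contour(statement, sr)[-3:])   # stays near 120 Hz
print(pitch_contour(question, sr)[-3:])    # climbs well above 120 Hz by the end

The two synthetic "sentences" carry the same content; only the contour differs, which is the kind of distinction the reported pitch-sensitive neurons appear to encode.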

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 23996 - Posted: 08.25.2017

Emily Siner The Grand Ole Opry in Nashville, Tenn., is country music's Holy Land. It's home to the weekly radio show that put country music on the national map in 1925. And it's where this summer, 30 people with Williams syndrome eagerly arrived backstage. Williams syndrome is a rare genetic disorder that can cause developmental disabilities. People with the condition are often known for their outgoing personalities and their profound love of music. Scientists are still trying to figure out where this musical affinity comes from and how it can help them overcome their challenges. That's why 12 years ago, researchers at Vanderbilt University set up a summer camp for people with Williams syndrome. For a week every summer, campers come to Nashville to immerse themselves in country music and participate in cutting-edge research. This isn't the only summer camp for people with Williams syndrome, but it is unique in its distinctive country flair. It's organized by the Vanderbilt Kennedy Center, whose faculty and staff focus on developmental disabilities. Eight years ago, the Academy of Country Music's philanthropic arm, ACM Lifting Lives, started funding the program. Campers spend the week meeting musicians and visiting recording studios, even writing an original song. This year, they teamed up with one of country's hottest stars, Dierks Bentley, on that. And they get a backstage tour of the Grand Ole Opry led by Clancey Hopper, who has Williams syndrome herself and attended the Nashville camp for eight years before applying for a job at the Opry. © 2017 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 4: Development of the Brain
Link ID: 23904 - Posted: 08.01.2017

Ryan Kellman The modern Planet of the Apes reboot begins with a research chimpanzee being raised in an American home. It's a pretty plausible premise — that exact scenario has played out in the real world many times. On June 26, 1931, for example, Luella and Winthrop Kellogg pulled a baby female chimpanzee away from her mother and brought her to live in their home in Orange Park, Fla. The Kelloggs were comparative psychologists. Their plan was to raise the chimpanzee, Gua, alongside their own infant son, Donald, and see if she picked up human language. According to the book they wrote about the experiment, Luella wasn't initially on board: ... the enthusiasm of one of us met with so much resistance from the other that it appeared likely we could never come to an agreement upon whether or not we should even attempt such an undertaking. But attempt it they did. The Kelloggs performed a slew of tests on Donald and Gua. How good were their reflexes? How many words did they recognize? How did they react to the sound of a gunshot? What sound did each infant's skull make when tapped by a spoon? (Donald's produced "a dull thud" while Gua's made the sound of a "mallet upon a wooden croquet ball.") Chimpanzees develop faster than humans, so Gua outshone Donald when it came to most tasks. She even learned to respond to English phrases like "Don't touch!" and "Get down!" But unlike the apes in the movies, Gua never learned to speak. © 2017 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 23871 - Posted: 07.25.2017

By Virginia Morell Baby marmosets learn to make their calls by trying to repeat their parents’ vocalizations, scientists report today in Current Biology. Humans were thought to be the only primate with vocal learning—the ability to hear a sound and repeat it, considered essential for speech. When our infants babble, they make apparently random sounds, which adults respond to with words or other sounds; the more this happens, the faster the baby learns to talk. To find out whether marmosets (Callithrix jacchus) do something similar, scientists played recordings of parental calls during a daily 30-minute session to three sets of newborn marmoset twins until they were 2 months old (roughly equivalent to a 2-year-old human). Baby marmosets make noisy guttural cries; adults respond with soft “phee” contact calls. The baby that consistently heard its parents respond to its cries learned to make the adult “phee” sound much faster than did its twin, the team found. It’s not yet known if this ability is limited to the marmosets; if so, the difference may be due to the highly social lives of these animals, where, like us, multiple relatives help care for babies. © 2017 American Association for the Advancement of Science

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 23662 - Posted: 05.26.2017

Gary Stix Illiterate women in northern India learned how to read and write in Hindi for six months, after which they had reached a level comparable to a first-grader's. Credit: Max Planck Institute for Human Cognitive and Brain Sciences The brain did not evolve to read. It uses the neural muscle of pre-existing visual and language processing areas to enable us to take in works by Tolstoy and Tom Clancy. Reading, of course, begins in the first years of schooling, a time when these brain regions are still in development. What happens, though, when an adult starts learning after the age of 30? A study published May 24 in Science Advances turned up a few unexpected findings. In the report, a broad-ranging group of researchers—from universities in Germany, India and the Netherlands—taught reading to 21 women, all about 30 years of age from near the city of Lucknow in northern India, comparing them to a placebo group of nine women. The majority of those who learned to read could not recognize a word of Hindi at the beginning of the study. After six months, the group had reached a first-grade proficiency level. When the researchers conducted brain scans—using functional magnetic resonance imaging—they were startled. Areas deep below the wrinkled surface, the cortex, in the brains of the new learners had changed. Their results surprised them because most reading-related brain activity was thought to involve the cortex. The new research may overturn this presumption and may pertain to child learners as well. After being filtered through the eyes, visual information may move first to evolutionarily ancient brain regions before being relayed to the visual and language areas of the cortex typically associated with reading. © 2017 Scientific American

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 23661 - Posted: 05.25.2017

By FERRIS JABR Con Slobodchikoff and I approached the mountain meadow slowly, obliquely, softening our footfalls and conversing in whispers. It didn’t make much difference. Once we were within 50 feet of the clearing’s edge, the alarm sounded: short, shrill notes in rapid sequence, like rounds of sonic bullets. We had just trespassed on a prairie-dog colony. A North American analogue to Africa’s meerkat, the prairie dog is trepidation incarnate. It lives in subterranean societies of neighboring burrows, surfacing to forage during the day and rarely venturing more than a few hundred feet from the center of town. The moment it detects a hawk, coyote, human or any other threat, it cries out to alert the cohort and takes appropriate evasive action. A prairie dog’s voice has about as much acoustic appeal as a chew toy. French explorers called the rodents petits chiens because they thought they sounded like incessantly yippy versions of their pets back home. On this searing summer morning, Slobodchikoff had taken us to a tract of well-trodden wilderness on the grounds of the Museum of Northern Arizona in Flagstaff. Distressed squeaks flew from the grass, but the vegetation itself remained still; most of the prairie dogs had retreated underground. We continued along a dirt path bisecting the meadow, startling a prairie dog that was peering out of a burrow to our immediate right. It chirped at us a few times, then stared silently. “Hello,” Slobodchikoff said, stooping a bit. A stout bald man with a scraggly white beard and wine-dark lips, Slobodchikoff speaks with a gentler and more lilting voice than you might expect. “Hi, guy. What do you think? Are we worth calling about? Hmm?” Slobodchikoff, an emeritus professor of biology at Northern Arizona University, has been analyzing the sounds of prairie dogs for more than 30 years. Not long after he started, he learned that prairie dogs had distinct alarm calls for different predators. Around the same time, separate researchers found that a few other species had similar vocabularies of danger. What Slobodchikoff claimed to discover in the following decades, however, was extraordinary: Beyond identifying the type of predator, prairie-dog calls also specified its size, shape, color and speed; the animals could even combine the structural elements of their calls in novel ways to describe something they had never seen before. No scientist had ever put forward such a thorough guide to the native tongue of a wild species or discovered one so intricate. Prairie-dog communication is so complex, Slobodchikoff says — so expressive and rich in information — that it constitutes nothing less than language.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 23606 - Posted: 05.12.2017

By Jane C. Hu New evidence suggests that the earliest traces of a language can stay with us into adulthood, even if we no longer speak or understand the language itself. And early exposure also seems to speed the process of relearning it later in life. In the new study, recently published in Royal Society Open Science, Dutch adults were trained to listen for sound contrasts in Korean. Some participants reported no prior exposure to the language; others were born in Korea and adopted by Dutch families before the age of six. All participants said they could not speak Korean, but the adoptees from Korea were better at distinguishing between the contrasts and more accurate in pronouncing Korean sounds. “Language learning can be retained subconsciously, even if conscious memories of the language do not exist,” says Jiyoun Choi, postdoctoral fellow at Hanyang University in Seoul and lead author of the study. And it appears that just a brief period of early exposure benefits learning efforts later; when Choi and her collaborators compared the results of people adopted before they were six months old with results of others adopted after 17 months, there were no differences in their hearing or speaking abilities. “It's exciting that these effects are seen even among adults who were exposed to Korean only up to six months of age—an age before which babbling emerges,” says Janet Werker, a professor of psychology at the University of British Columbia, who was not involved with the research. Remarkably, what we learn before we can even speak stays with us for decades. © 2017 Scientific American,

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 4: Development of the Brain; Chapter 15: Language and Lateralization
Link ID: 23598 - Posted: 05.10.2017

By Mo Costandi The world is an unpredictable place. But the brain has evolved a way to cope with the everyday uncertainties it encounters—it doesn’t present us with many of them, but instead resolves them into a realistic model of the world. The body’s central controller predicts every contingency, using its stored database of past experiences, to minimize the element of surprise. Take vision, for example: We rarely see objects in their entirety but our brains fill in the gaps to make a best guess at what we are seeing—and these predictions are usually an accurate reflection of reality. The same is true of hearing, and neuroscientists have now identified a predictive-text-like brain mechanism that helps us to anticipate what is coming next when we hear someone speaking. The findings, published this week in PLoS Biology, advance our understanding of how the brain processes speech. They also provide clues about how language evolved, and could even lead to new ways of diagnosing a variety of neurological conditions more accurately. The new study builds on earlier findings that monkeys and human infants can implicitly learn to recognize artificial grammar, or the rules by which sounds in a made-up language are related to one another. Neuroscientist Yukiko Kikuchi of Newcastle University in England and her colleagues played sequences of nonsense speech sounds to macaques and humans. Consistent with the earlier findings, Kikuchi and her team found both species quickly learned the rules of the language’s artificial grammar. After this initial learning period the researchers played more sound sequences—some of which violated the fabricated grammatical rules. They used microelectrodes to record responses from hundreds of individual neurons as well as from large populations of neurons that process sound information. In this way they were able to compare the responses to both types of sequences and determine the similarities between the two species’ reactions. © 2017 Scientific American,
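An artificial grammar of the sort described here is usually just a small set of transition rules over nonsense syllables: sequences generated by the rules count as grammatical, and test items that break a transition count as violations. The Python sketch below illustrates that general paradigm only; the syllables and rules are invented for the example and are not the stimuli used by Kikuchi and her team.

import random

# Hypothetical transition rules: which nonsense syllables may follow which.
RULES = {
    "START": ["bi", "ta"],
    "bi": ["ro", "ku"],
    "ta": ["ku"],
    "ro": ["ku", "END"],
    "ku": ["END"],
}

def generate_sequence(rules):
    """Random walk through the grammar from START until END is reached."""
    seq, state = [], "START"
    while True:
        state = random.choice(rules[state])
        if state == "END":
            return seq
        seq.append(state)

def is_grammatical(seq, rules):
    """A sequence is grammatical if every adjacent transition is licensed by the rules."""
    states = ["START"] + seq + ["END"]
    return all(nxt in rules.get(cur, []) for cur, nxt in zip(states, states[1:]))

exposure_item = generate_sequence(RULES)        # the kind of item played during learning
violation_item = ["ku", "bi"]                   # "ku" cannot begin a sequence: a rule violation
print(exposure_item, is_grammatical(exposure_item, RULES))    # always True
print(violation_item, is_grammatical(violation_item, RULES))  # False

In the experiment itself the measure of interest is not a True/False label but how the recorded neural responses differ when a heard sequence violates the transitions learned during exposure.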

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 23558 - Posted: 05.01.2017

By Cormac McCarthy I call it the Kekulé Problem because among the myriad instances of scientific problems solved in the sleep of the inquirer Kekulé’s is probably the best known. He was trying to arrive at the configuration of the benzene molecule and not making much progress when he fell asleep in front of the fire and had his famous dream of a snake coiled in a hoop with its tail in its mouth—the ouroboros of mythology—and woke exclaiming to himself: “It’s a ring. The molecule is in the form of a ring.” Well. The problem of course—not Kekulé’s but ours—is that since the unconscious understands language perfectly well or it would not understand the problem in the first place, why doesnt it simply answer Kekulé’s question with something like: “Kekulé, it’s a bloody ring.” To which our scientist might respond: “Okay. Got it. Thanks.” Why the snake? That is, why is the unconscious so loathe to speak to us? Why the images, metaphors, pictures? Why the dreams, for that matter. A logical place to begin would be to define what the unconscious is in the first place. To do this we have to set aside the jargon of modern psychology and get back to biology. The unconscious is a biological system before it is anything else. To put it as pithily as possibly—and as accurately—the unconscious is a machine for operating an animal. All animals have an unconscious. If they didnt they would be plants. We may sometimes credit ours with duties it doesnt actually perform. Systems at a certain level of necessity may require their own mechanics of governance. Breathing, for instance, is not controlled by the unconscious but by the pons and the medulla oblongata, two systems located in the brainstem. Except of course in the case of cetaceans, who have to breathe when they come up for air. An autonomous system wouldnt work here. The first dolphin anesthetized on an operating table simply died. (How do they sleep? With half of their brain alternately.) But the duties of the unconscious are beyond counting. Everything from scratching an itch to solving math problems. © 2017 NautilusThink Inc,

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 23525 - Posted: 04.22.2017

by Laura Sanders The way babies learn to speak is nothing short of breathtaking. Their brains are learning the differences between sounds, rehearsing mouth movements and mastering vocabulary by putting words into meaningful context. It’s a lot to fit in between naps and diaper changes. A recent study shows just how durable this early language learning is. Dutch-speaking adults who were adopted from South Korea as preverbal babies held on to latent Korean language skills, researchers report online January 18 in Royal Society Open Science. In the first months of their lives, these people had already laid down the foundation for speaking Korean — a foundation that persisted for decades undetected, only revealing itself later in careful laboratory tests. Researchers tested how well people could learn to identify and speak tricky Korean sounds. “For Korean listeners, these sounds are easy to distinguish, but for second-language learners they are very difficult to master,” says study coauthor Mirjam Broersma, a psycholinguist at Radboud University in Nijmegen, the Netherlands. For instance, a native Dutch speaker would listen to three distinct Korean sounds and hear only the same “t” sound. Broersma and her colleagues compared the language-absorbing skills of a group of 29 native Dutch speakers to 29 South Korea-born Dutch speakers. Half of the adoptees moved to the Netherlands when they were older than 17 months — ages at which the kids had probably begun talking. The other half were adopted as preverbal babies younger than 6 months. As a group, the South Korea-born adults outperformed the native-born Dutch adults, more easily learning both to recognize and speak the Korean sounds. © Society for Science & the Public 2000 - 2017

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 4: Development of the Brain
Link ID: 23455 - Posted: 04.06.2017

By Matt Reynolds Google’s latest take on machine translation could make it easier for people to communicate with those speaking a different language, by translating speech directly into text in a language they understand. Machine translation of speech normally works by first converting it into text, then translating that into text in another language. But any error in speech recognition will lead to an error in transcription and a mistake in the translation. Researchers at Google Brain, the tech giant’s deep learning research arm, have turned to neural networks to cut out the middle step. By skipping transcription, the approach could potentially allow for more accurate and quicker translations. The team trained its system on hundreds of hours of Spanish audio with corresponding English text. In each case, it used several layers of neural networks – computer systems loosely modelled on the human brain – to match sections of the spoken Spanish with the written translation. To do this, it analysed the waveform of the Spanish audio to learn which parts seemed to correspond with which chunks of written English. When it was then asked to translate, each neural layer used this knowledge to manipulate the audio waveform until it was turned into the corresponding section of written English. “It learns to find patterns of correspondence between the waveforms in the source language and the written text,” says Dzmitry Bahdanau at the University of Montreal in Canada, who wasn’t involved with the work. © Copyright Reed Business Information Ltd.
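What is described is, in effect, a sequence-to-sequence network with attention that consumes audio features of the source speech and emits target-language text directly, with no intermediate transcript. The Python (PyTorch) sketch below is a generic illustration of that idea under stated assumptions, not Google Brain's actual model; every layer choice, size and name here is an assumption.

import torch
import torch.nn as nn

class SpeechToTextTranslator(nn.Module):
    """Toy direct speech-to-translation model: audio features in, target-language characters out."""
    def __init__(self, n_mels=80, hidden=256, vocab_size=60):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)   # reads the audio frames
        self.proj_enc = nn.Linear(2 * hidden, hidden)
        self.embed = nn.Embedding(vocab_size, hidden)                  # previously emitted characters
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)                       # scores for the next character

    def forward(self, audio_feats, prev_chars):
        # audio_feats: (batch, frames, n_mels); prev_chars: (batch, chars)
        enc, _ = self.encoder(audio_feats)
        enc = self.proj_enc(enc)                           # (batch, frames, hidden)
        dec, _ = self.decoder(self.embed(prev_chars))      # (batch, chars, hidden)
        # Attention lets each output position look back over the encoded audio frames,
        # learning which stretches of audio correspond to which output text.
        context, _ = self.attn(dec, enc, enc)
        return self.out(context + dec)                     # (batch, chars, vocab_size)

# Toy forward pass: one utterance of 200 audio frames, predicting 20 characters.
model = SpeechToTextTranslator()
logits = model(torch.randn(1, 200, 80), torch.randint(0, 60, (1, 20)))
print(logits.shape)    # torch.Size([1, 20, 60])

Training such a model end to end on paired audio and written translations is what removes the separate transcription step, and with it one source of compounding errors.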

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 23450 - Posted: 04.05.2017