Links for Keyword: Language



Links 1 - 20 of 624

By Victoria Gill Science correspondent, BBC News Our primate cousins have surprised and impressed scientists in recent years, with revelations about monkeys' tool-using abilities and chimps' development of complex sign language. But researchers are still probing the question: why are we humans the only apes that can talk? That puzzle has now led to an insight into how different non-human primates' brains are "wired" for vocal ability. A new study has compared different primate species' brains. It revealed that primates with wider "vocal repertoires" had more of their brain dedicated to controlling their vocal apparatus. That suggests that our own speaking skills may have evolved as our brains gradually rewired to control that apparatus, rather than purely because we're smarter than non-human apes. Humans and other primates have very similar vocal anatomy - in terms of their tongues and larynx. That's the physical machinery in the throat which allows us to turn air into sound. So, as lead researcher Dr Jacob Dunn from Anglia Ruskin University in Cambridge explained, it remains a mystery why humans are the only primates that can actually talk. "That's likely due to differences in the brain," Dr Dunn told BBC News, "but there haven't been comparative studies across species." So how do our primate brains differ? That comparison is exactly what Dr Dunn and his colleague Prof Jeroen Smaers set out to do. They ranked 34 different primate species based on their vocal abilities - the number of distinct calls they are known to make in the wild. They then examined the brain of each species, using information from existing, preserved brains that had been kept for research. © 2018 BBC
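The comparison described above boils down to asking whether species with larger call repertoires also devote relatively more neural tissue to controlling the vocal apparatus. Below is a minimal sketch of that kind of rank-based comparison, using invented values for a handful of species rather than the study's data for 34 species; a full analysis would also need to control for phylogenetic relatedness, which this sketch ignores.

```python
# Minimal sketch of a rank-based brain/behaviour comparison (invented data).
from scipy.stats import spearmanr

# Hypothetical values per species: number of distinct call types observed in
# the wild, and relative size of the cortical region controlling the vocal tract.
vocal_repertoire = [4, 7, 11, 16, 22, 30, 38]              # distinct call types
vocal_motor_region = [0.8, 1.1, 1.3, 1.9, 2.4, 2.9, 3.6]   # arbitrary volume units

rho, p = spearmanr(vocal_repertoire, vocal_motor_region)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```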

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25314 - Posted: 08.10.2018

Tina Hesman Saey Humans’ gift of gab probably wasn’t the evolutionary boon that scientists once thought. There’s no evidence that FOXP2, sometimes called “the language gene,” gave humans such a big evolutionary advantage that it was quickly adopted across the species, a process scientists call a selective sweep. That finding, reported online August 2 in Cell, follows years of debate about the role of FOXP2 in human evolution. In 2002, the gene became famous when researchers thought they had found evidence that a tweak in FOXP2 spread quickly to all humans — and only humans — about 200,000 years ago. That tweak replaced two amino acids in the human version of the gene with ones that differ from those in other animals’ versions. FOXP2 is involved in vocal learning in songbirds, and people with mutations in the gene have speech and language problems. Many researchers initially thought that the amino acid swap was what enabled humans to speak. Speech would have given humans a leg up on competition from Neandertals and other ancient hominids. That view helped make FOXP2 a textbook example of selective sweeps. Some researchers even suggested that FOXP2 was the gene that defines humans, until it became clear that the gene did not allow humans to settle the world and replace other hominids, says archaeogeneticist Johannes Krause at the Max Planck Institute for the Science of Human History in Jena, Germany, who was not involved in the study. “It was not the one gene to rule them all.” © Society for Science & the Public 2000 - 2018
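For readers unfamiliar with the term, a selective sweep leaves a characteristic footprint: genetic diversity drops sharply in the region surrounding the favored variant because one version of the sequence rises quickly to high frequency. The sketch below illustrates that intuition with a toy nucleotide-diversity calculation on invented sequences; it is not the statistic used in the Cell reanalysis, only the general idea behind sweep detection.

```python
# Toy illustration of reduced nucleotide diversity (pi), one classic sweep signal.
from itertools import combinations

def nucleotide_diversity(seqs):
    """Average pairwise differences per site across aligned sequences."""
    n_sites = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * n_sites)

neutral_window = ["ACGTACGTAC", "ACGTTCGTAC", "ACGAACGTAC", "ACGTACGGAC"]
swept_window   = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGTAC", "ACGTACGTAC"]

print("pi, neutral-looking window:", nucleotide_diversity(neutral_window))  # ~0.15
print("pi, sweep-like window:     ", nucleotide_diversity(swept_window))    # 0.0
```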

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25293 - Posted: 08.04.2018

Matthew Warren The evolution of human language was once thought to have hinged on changes to a single gene that were so beneficial that they raced through ancient human populations. But an analysis now suggests that this gene, FOXP2, did not undergo changes in Homo sapiens’ recent history after all — and that previous findings might simply have been false signals. “The situation’s a lot more complicated than the very clean story that has been making it into textbooks all this time,” says Elizabeth Atkinson, a population geneticist at the Broad Institute of Harvard and MIT in Cambridge, Massachusetts, and a co-author of the paper, which was published on 2 August in Cell. Originally discovered in a family who had a history of profound speech and language disorders, FOXP2 was the first gene found to be involved in language production. Later research touted its importance to the evolution of human language. A key 2002 paper found that humans carry two mutations to FOXP2 not found in any other primates. When the researchers looked at genetic variation surrounding these mutations, they found the signature of a ‘selective sweep’ — in which a beneficial mutation quickly becomes common across a population. This change to FOXP2 seemed to have happened in the past 200,000 years, the team reported in Nature. The paper has been cited hundreds of times in the scientific literature. © 2018 Springer Nature Limited.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25292 - Posted: 08.04.2018

By Michael Erard, Catherine Matacic If you want a no-fuss, no-muss pet, consider the Bengalese finch. Dubbed the society finch for its friendliness, it is often used by breeders to foster unrelated chicks. But put the piebald songbird next to its wild ancestor, the white-rumped munia, and you can both see and hear the differences: The aggressive munia tends to be darker and whistles a scratchy, off-kilter tune, whereas the pet finch warbles a melody so complex that even nonmusicians may wonder how this caged bird learned to sing. All this makes the domesticated and wild birds a perfect natural experiment to help explore an upstart proposal about human evolution: that the building blocks of language are a byproduct of brain alterations that arose when natural selection favored cooperation among early humans. According to this hypothesis, skills such as learning complex calls, combining vocalizations, and simply knowing when another creature wants to communicate all came about as a consequence of pro-social traits like kindness. If so, domesticated animals, which are bred to be good-natured, might exhibit such communication skills too. The idea is rooted in a much older one: that humans tamed themselves. This self-domestication hypothesis, which got its start with Charles Darwin, says that when early humans started to prefer cooperative friends and mates to aggressive ones, they essentially domesticated themselves. Along with tameness came evolutionary changes seen in other domesticated mammals—smoother brows, shorter faces, and more feminized features—thanks in part to lower levels of circulating androgens (such as testosterone) that tend to promote aggression. © 2018 American Association for the Advancement of Science.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 8: Hormones and Sex
Link ID: 25289 - Posted: 08.03.2018

Sara Kiley Watson Read these sentences aloud: I never said she stole my money. I never said she stole my money. I never said she stole my money. Emphasizing any one of the words over the others makes the string of words mean something completely different. "Pitch change" — the vocal quality we use to emphasize words — is a crucial part of human communication, whether spoken or sung. Recent research from Dr. Edward Chang's lab at the University of California, San Francisco's epilepsy center has narrowed down which part of the brain controls our ability to regulate the pitch of our voices when we speak or sing — the part that enables us to differentiate between the utterances "Let's eat, Grandma" and "Let's eat Grandma." Scientists already knew, more or less, what parts of the brain are engaged in speech, says Chang, a professor of neurological surgery. What the new research has allowed, he says, is a better understanding of the neural code of pitch and its variations — how information about pitch is represented in the brain. Chang's team was able to study these neural codes with the help of a particular group of study volunteers: epilepsy patients. Chang treats people whose seizures can't be medically controlled; these patients need surgery to stop the misfiring neurons. He puts electrodes in each patient's brain to help guide the scalpel during their surgery. © 2018 npr
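Since the study is about the control of pitch, it may help to see what "pitch" means acoustically: the fundamental frequency of the voice, which can be estimated from a short audio frame by finding the lag at which the signal best matches itself. The sketch below does this by autocorrelation on a synthetic 200 Hz tone; it illustrates the acoustic quantity involved, not the lab's own analysis pipeline.

```python
# Minimal autocorrelation-based pitch (F0) estimate on a synthetic "voice" frame.
import numpy as np

def estimate_f0(frame, sample_rate, fmin=75.0, fmax=400.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # lags 0..N-1
    lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
    lag = lo + np.argmax(ac[lo:hi])        # strongest periodicity in the voice range
    return sample_rate / lag

sr = 16000
t = np.arange(0, 0.04, 1 / sr)             # a 40 ms frame
voiced = np.sin(2 * np.pi * 200 * t)       # synthetic 200 Hz tone
print(f"Estimated pitch: {estimate_f0(voiced, sr):.1f} Hz")   # ~200 Hz
```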

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25265 - Posted: 07.28.2018

by Erin Blakemore What if you wanted to speak but couldn’t string together recognizable words? What if someone spoke to you but you couldn’t understand what they were saying? These situations aren’t hypothetical for the more than 1 million Americans with aphasia, which affects the ability to understand and speak with others. Aphasia occurs in people who have had strokes, traumatic brain injuries or other brain damage. Some victims have a scrambled vocabulary or are unable to express themselves; others find it hard to make sense of the words they read or hear. The disorder doesn’t reduce intelligence, only a person’s ability to communicate. And although there is no definitive cure, it can be treated. Many people make significant recoveries from aphasia after a stroke, for example. July is Aphasia Awareness Month, a fine time to learn more about the disorder. The TED-Ed series offers a lesson on aphasia, complete with an engaging video that describes the condition, its causes and its treatment, along with a quiz, discussion questions and other resources. Created by Susan Wortman-Jutt, a speech-language pathologist who treats aphasia, it’s a good introduction to the disorder and how damage to the brain’s language centers can hamper an individual’s ability to communicate. Another resource is the National Aphasia Association. Its website, aphasia.org, contains information about the disorder and links to support and treatment options. Aphasia can have lasting effects, but there is hope for people whose brains are injured. © 1996-2018 The Washington Post

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25180 - Posted: 07.07.2018

Philip Lieberman In the 1960s, researchers at Yale University’s Haskins Laboratories attempted to produce a machine that would read printed text aloud to blind people. Alvin Liberman and his colleagues figured the solution was to isolate the “phonemes,” the ostensible beads-on-a-string equivalent to movable type that linguists thought existed in the acoustic speech signal. Linguists had assumed (and some still do) that phonemes were roughly equivalent to the letters of the alphabet and that they could be recombined to form different words. However, when the Haskins group snipped segments from tape recordings of words or sentences spoken by radio announcers or trained phoneticians, and tried to link them together to form new words, the researchers found that the results were incomprehensible. That’s because, as most speech scientists agree, there is no such thing as a pure phoneme (though some linguists still cling to the idea). Discrete phonemes do not exist as such in the speech signal, and instead are always blended together in words. Even “stop consonants,” such as [b], [p], [t], and [g], don’t exist as isolated entities; it is impossible to utter a stop consonant without also producing a vowel before or after it. As such, the consonant [t] in the spoken word tea, for example, sounds quite different from that in the word to. To produce the vowel sound in to, the speakers’ lips are protruded and narrowed, while they are retracted and open for the vowel sound in tea, yielding different acoustic representations of the initial consonant. Moreover, when the Haskins researchers counted the number of putative phonemes that would be transmitted each second during normal conversations, the rate exceeded that which can be interpreted by the human auditory system—the synthesized phrases would have become an incomprehensible buzz. © 1986 - 2018 The Scientist.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25176 - Posted: 07.06.2018

Geoffrey Pullum One area outshines all others in provoking crazy talk about language in the media, and that is the idea of language acquisition in nonhuman species. On June 19 came the sad news of the death of Koko, the western lowland gorilla cared for by Francine “Penny” Patterson at a sanctuary in the Santa Cruz Mountains. Many obituaries appeared, and the press indulged as never before in sentimental nonsense about talking with the animals. Credulous repetition of Koko’s mythical prowess in sign language was everywhere. Jeffrey Kluger’s essay in Time was unusually extreme in its blend of emotion, illogicality, wishful thinking, and outright falsehood. Koko, he tells us, once made a sequence of hand signs that Patterson interpreted as “you key there me cookie”; and Kluger calls it “impressive … for the clarity of its meaning.” Would you call it clear and meaningful if it were uttered by an adult human? As always with the most salient cases of purported ape signing, Koko was flailing around producing signs at random in a purely situation-bound bid to obtain food from her trainer, who was in control of a locked treat cabinet. The fragmentary and anecdotal evidence about Koko’s much-prompted and much-rewarded sign usage was never sufficient to show that the gorilla even understood the meanings of individual signs — that key denotes a device intended to open locks, that the word cookie is not appropriately applied to muffins, and so on. © 2018 The Chronicle of Higher Education

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25155 - Posted: 06.29.2018

Davide Castelvecchi The 2016 film Arrival starred Amy Adams as a linguistics professor who was drafted in to communicate with aliens (Credit: Moviestore Coll./Alamy). Sheri Wells-Jensen is fascinated by languages no one has ever heard — those that might be spoken by aliens. Last week, the linguist co-hosted a day-long workshop on this field of research, which sits at the boundary of astrobiology and linguistics. The meeting, at a conference of the US National Space Society in Los Angeles, California, was organized by Messaging Extraterrestrial Intelligence (METI). METI, which is funded by private donors, organizes the transmission of messages to other star systems. The effort is complementary to SETI (Search for Extraterrestrial Intelligence), which aims to detect messages from alien civilizations. Using large radar dishes, METI targets star systems relatively close to the Sun that are known to host Earth-sized planets in their ‘habitable zone’ — where the conditions are right for liquid water to exist. Last year, it directed a radio message, which attempted to explain musical language, towards a nearby exoplanet system. The message started from basic arithmetic (encoded in binary as two radio wavelengths) and introduced increasingly complex concepts such as duration and frequency. Nature spoke to Wells-Jensen, who is a member of METI’s board of directors, about last week’s meeting and the field of alien linguistics. Was this the first workshop of this kind ever? We’ve done two workshops on communicating with aliens before, but this is the first one specifically about linguistics. If we do make contact, we should try and figure out what would be a reasonable first step in trying to communicate. Right now, we are trying to put our heads together and figure out what’s likely and what could be done after that. © 2018 Macmillan Publishers Limited.
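The parenthetical about arithmetic "encoded in binary as two radio wavelengths" describes a very simple scheme: every bit of the message is assigned to one of two carrier wavelengths. The toy sketch below shows the principle only; the wavelength names and bit patterns are made up and do not reproduce METI's actual message format.

```python
# Toy bit-to-wavelength encoding: each 0/1 is sent on one of two carriers.
MESSAGE_BITS = "0001 0010 0011"            # e.g., counting 1, 2, 3 in 4-bit binary
WAVELENGTHS = {"0": "lambda_1", "1": "lambda_2"}   # two hypothetical carrier wavelengths

transmission = [WAVELENGTHS[bit] for bit in MESSAGE_BITS.replace(" ", "")]
print(transmission)
```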

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25053 - Posted: 06.04.2018

Anya Kamenetz "I want The Three Bears!" These days parents, caregivers and teachers have lots of options when it comes to fulfilling that request. You can read a picture book, put on a cartoon, play an audiobook, or even ask Alexa. A newly published study gives some insight into what may be happening inside young children's brains in each of those situations. And, says lead author Dr. John Hutton, there is an apparent "Goldilocks effect" — some kinds of storytelling may be "too cold" for children, while others are "too hot." And, of course, some are "just right." Hutton is a researcher and pediatrician at Cincinnati Children's Hospital with a special interest in "emergent literacy" — the process of learning to read. For the study, 27 children around age 4 went into an fMRI machine. They were presented with stories in three conditions: audio only; the illustrated pages of a storybook with an audio voiceover; and an animated cartoon. All three versions came from the Web site of Canadian author Robert Munsch. While the children paid attention to the stories, the fMRI machine scanned for activation within certain brain networks, and for connectivity between the networks. "We went into it with an idea in mind of what brain networks were likely to be influenced by the story," Hutton explains. One was language. One was visual perception. The third is called visual imagery. The fourth was the default mode network, which Hutton calls "the seat of the soul, internal reflection — how something matters to you." The default mode network includes regions of the brain that appear more active when someone is not actively concentrating on a designated mental task involving the outside world. In terms of Hutton's "Goldilocks effect," here's what the researchers found: © 2018 npr
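The two measurements mentioned above, activation within a network and connectivity between networks, are commonly summarized as a mean signal level and a correlation between network time courses. The sketch below shows those two summaries on invented time series standing in for fMRI data; real analyses involve preprocessing, task regressors and statistical thresholding that are omitted here.

```python
# Toy "activation" and "connectivity" summaries on invented network time series.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 120
language_net = rng.standard_normal(n_timepoints)                    # stand-in signal
imagery_net = 0.6 * language_net + 0.8 * rng.standard_normal(n_timepoints)

print("mean activation, language network:", round(float(language_net.mean()), 3))
print("language-imagery connectivity r:  ",
      round(float(np.corrcoef(language_net, imagery_net)[0, 1]), 3))
```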

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 25016 - Posted: 05.24.2018

Bruce Bower Language learning isn’t kid stuff anymore. In fact, it never was, a provocative new study concludes. A crucial period for learning the rules and structure of a language lasts up to around age 17 or 18, say psychologist Joshua Hartshorne of MIT and colleagues. Previous research had suggested that grammar-learning ability flourished in early childhood before hitting a dead end around age 5. If that were true, people who move to another country and try to learn a second language after the first few years of life should have a hard time achieving the fluency of native speakers. But that’s not so, Hartshorne’s team reports online May 2 in Cognition. In an online sample of unprecedented size, people who started learning English as a second language in an English-speaking country by age 10 to 12 ultimately mastered the new tongue as well as folks who had learned English and another language simultaneously from birth, the researchers say. Both groups, however, fell somewhat short of the grammatical fluency displayed by English-only speakers. After ages 10 to 12, new-to-English learners reached lower levels of fluency than those who started learning English at younger ages because time ran out when their grammar-absorbing ability plummeted starting around age 17. In another surprise, modest amounts of English learning among native and second-language speakers continued until around age 30, the investigators found, although most learning happened in the first 10 to 20 years of life. © Society for Science & the Public 2000 - 2018

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 24967 - Posted: 05.12.2018

By Dana G. Smith The older you get, the more difficult it is to learn to speak French like a Parisian. But no one knows exactly what the cutoff point is—at what age it becomes harder, for instance, to pick up noun-verb agreements in a new language. In one of the largest linguistics studies ever conducted—a viral internet survey that drew two thirds of a million respondents—researchers from three Boston-based universities showed children are proficient at learning a second language up until the age of 18, roughly 10 years later than earlier estimates. But the study also showed that it is best to start by age 10 if you want to achieve the grammatical fluency of a native speaker. To parse this problem, the research team, which included psychologist Steven Pinker, collected data on a person’s current age, language proficiency and time studying English. The investigators calculated they needed more than half a million people to make a fair estimate of when the “critical period” for achieving the highest levels of grammatical fluency ends. So they turned to the world’s greatest experimental subject pool: the internet. They created a short online grammar quiz called Which English? that tested noun–verb agreement, pronouns, prepositions and relative clauses, among other linguistic elements. From the responses, an algorithm predicted the tester’s native language and which dialect of English (that is, Canadian, Irish, Australian) they spoke. For example, some of the questions included phrases a Chicagoan would deem grammatically incorrect but a Manitoban would consider perfectly acceptable English. © 2018 Scientific American
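One way to get a "cutoff point" out of quiz data like this is to fit a declining S-shaped curve of ultimate grammar accuracy against the age at which learning began and read off where the curve drops. The sketch below does that with invented numbers; the published study fit a more elaborate model of learning rate over time, so treat this only as an illustration of the curve-fitting idea.

```python
# Fit a declining sigmoid of ultimate attainment vs. age of first exposure (toy data).
import numpy as np
from scipy.optimize import curve_fit

def attainment(age, top, bottom, cutoff, steepness):
    return bottom + (top - bottom) / (1 + np.exp((age - cutoff) / steepness))

ages = np.array([1, 5, 8, 10, 12, 14, 16, 18, 20, 25, 30])
accuracy = np.array([0.98, 0.98, 0.97, 0.96, 0.93, 0.88, 0.82, 0.76, 0.72, 0.68, 0.66])

params, _ = curve_fit(attainment, ages, accuracy, p0=[0.98, 0.65, 15.0, 2.0])
print(f"Fitted drop-off is centered near age {params[2]:.1f}")
```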

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 24938 - Posted: 05.05.2018

By Catherine Matacic Four years after Frank Seifart started documenting endangered dialects in Colombia, the guerrillas came. In 2004, soldiers from the Revolutionary Armed Forces of Colombia swept past the Amazonian village where he did most of his fieldwork. The linguist reluctantly left for another village, south of the Peruvian border. When he got there, the chief was away. In the central roundhouse, an old man beat out a rhythm on two enormous drums: “A stranger has arrived. Come home.” And the chief did. It was the first time Seifart, now at the University of Cologne and the French National Center for Scientific Research in Lyon, had heard the traditional drums not just making music, but sending a message. Now, he and his colleagues have published the first in-depth study of how the drummers do it: Tiny variations in the time between beats match how words in the spoken language are vocalized. The finding, reported today in Royal Society Open Science, reveals how the group known as the Bora can create complex drummed messages. It may also help explain how the rest of us “get” what others are saying at loud cocktail parties, by detecting those tiny variations in time even when other sounds are drowned out. “It is quite innovative,” says descriptive linguist Katarzyna Wojtylak, a postdoctoral research fellow at James Cook University in Cairns, Australia, who has studied the language and drumming systems of the Witoto, a related group. “Nobody has ever done such an extensive and detailed analysis of rhythm in a drummed language.” © 2018 American Association for the Advancement of Science.
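The key measurement behind this finding is simple: the information is carried by the gaps between drum beats, so the first analysis step is to convert a sequence of beat times into inter-beat intervals. The beat times below are invented, purely to show the computation.

```python
# Turn (invented) drum-beat times into the inter-beat intervals that carry information.
beat_times_s = [0.00, 0.21, 0.55, 0.76, 1.18, 1.39]

inter_beat_intervals = [round(later - earlier, 3)
                        for earlier, later in zip(beat_times_s, beat_times_s[1:])]
print(inter_beat_intervals)   # [0.21, 0.34, 0.21, 0.42, 0.21]
```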

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 24900 - Posted: 04.25.2018

By Rachel R. Albert Parents are often their own worst critics when it comes to imparting knowledge to their children. Although helping with science fairs or homework assignments may come later on, the pressure comes early, as their infant starts to babble in increasingly word-like vocalizations. It’s easy to assume that children who can’t yet form a word are unable to understand what their parents are saying to them. But spend just a few minutes with an infant, and you quickly realize how rapidly the gears are turning. And new research by me and my colleagues Michael Goldstein and Jennifer Schwade at Cornell University suggests these interactions are more sophisticated than we once thought. Parents’ responses to their baby’s babbling take on new significance at the age of about six months, when babies’ vocalizations start to mature. Around this age, babies become incredibly receptive to what they hear immediately after they babble. In fact, previous work from the B.A.B.Y. Lab at Cornell University suggests that if infants receive a response to their initial vocalization, they’re far more likely to vocalize again. Observations of mother-infant conversations have found that within 10 minutes of this type of exchange, children can be taught new vocalizations. For example, they can be taught to shift their consonant-vowel construction of “dada” into vowel-consonant “ada.” But what’s truly incredible about these exchanges is the level of influence babies have as actual conversation partners. © 2018 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 24749 - Posted: 03.14.2018

By Roni Dengler AUSTIN—Babies are as primed to learn a visual language as they are a spoken one. That’s the conclusion of research presented here today at the annual meeting of AAAS, which publishes Science. Parents and scientists know babies are learning sponges that can pick up any language they’re born into. But not as much is known about whether that includes visual language. To find out if infants are sensitive to visual language, Rain Bosworth, a psychologist at the University of California, San Diego, tracked 6-month-olds’ and 1-year-olds’ eye movements as they watched a video of a woman performing self-grooming gestures, such as tucking her hair behind her ear, and signing. The 6-month-olds watched the signs 20% more than the 1-year-olds did. That means young babies can distinguish between what’s language and what’s not, even when it’s not spoken, but 1-year-olds can’t. That’s consistent with what researchers know about how babies learn spoken language. Six-month-olds home in on their native language and lose sensitivity to languages they’re not exposed to, but by 12 months old that’s more or less gone, Bosworth says. The researchers also watched babies’ gazes as they observed a signer “fingerspelling,” spelling out words with individually signed letters. The signer executed the fingerspelling cleanly or sloppily. Again, researchers found the 6-month-old babies, who had never seen sign language before, favored the well-formed letters, whereas the 12-month-olds did not show a preference. Together that means there’s a critical developmental window for picking up even nonverbal languages. As 95% of deaf children are born to hearing parents, they are at risk for developmental delays because they need that language exposure early on, the scientists say. © 2018 American Association for the Advancement of Science

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 24672 - Posted: 02.17.2018

by Alex Horton Michelle Myers's accent is global, but she has never left the country. The Arizona woman says she has gone to bed with extreme headaches in the past and woken up speaking with what sounds like a foreign accent. At various points, Australian and Irish accents have inexplicably flowed from her mouth for about two weeks, then disappeared, Myers says. But a British accent has lingered for two years, the 45-year-old told ABC affiliate KNXV. And one particular person seems to come to mind when she speaks. “Everybody only sees or hears Mary Poppins,” Myers told the station. Myers says she has been diagnosed with Foreign Accent Syndrome. The disorder typically occurs after strokes or traumatic brain injuries damage the language center of a person's brain — to the degree that their native language sounds like it is tinged with a foreign accent, according to the Center for Communication Disorders at the University of Texas at Dallas. In some instances, speakers warp the typical rhythm of their language and the stress of certain syllables. Affected people may also cut out articles such as “the” and drop letters, turning an American “yeah” into a Scandinavian “yah,” for instance. Sheila Blumstein, a Brown University linguist who has written extensively on FAS, said sufferers typically produce grammatically correct language, unlike many stroke or brain-injury victims. She spoke to The Washington Post for a 2010 article about a Virginia woman who fell down a stairwell, rattled her brain and awoke speaking with a Russian-like accent. © 1996-2018 The Washington Post

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 24652 - Posted: 02.13.2018

Rachel Hoge I’ve heard the misconceptions for most of my life. “Just slow down,” a stranger told me as a child. “You’re talking too fast – that’s why you stutter!” Later on, as my stutter carried on into adolescence and adulthood, strangers and loved ones alike offered up their own judgments of my speech – usually incorrect. Some have good intentions when it comes to sharing their opinions about my stutter. Others ... not so much. But everyone shares one defining characteristic: they’re misinformed. Stuttering is a communication and disfluency disorder in which the flow of speech is interrupted. Though all speakers will experience a small amount of disfluency while speaking, a person who stutters (PWS) experiences disfluency more noticeably, generally stuttering on at least 10% of their words. There are approximately 70 million people who stutter worldwide, which is about 1% of the population. Stuttering usually begins in childhood between the ages of two and five, with about 5% of all children experiencing a period of stuttering that lasts six months or more. Three-quarters of children who stutter will recover by late childhood, but those who don’t may develop a lifelong condition. The male-to-female ratio of people who stutter is four to one, meaning there is a clear gender discrepancy that scientists are still attempting to understand. The severity of a stutter can vary greatly. The way it manifests can also differ, depending on the individual. Certain sounds and syllables can produce repetitions (re-re-re-repetitions), prolongations (ppppppprolongations), and/or abnormal stoppages (no sound). © 2018 Guardian News and Media Limited

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 24647 - Posted: 02.12.2018

By Katarina Zimmer Scientists can trace the evolutionary histories of bats and humans back to a common ancestor that lived some tens of millions of years ago. And on the surface, those years of evolutionary divergence have separated us from the winged mammals in every way possible. But look on a sociobehavioral level, as some bat researchers are doing, and the two animal groups share much more than meets the eye. Like humans, bats form huge congregations of up to millions of individuals at a time. On a smaller scale, they form intimate social bonds with one another. And recently, scientists have suggested that bats are capable of vocal learning—the ability to modify vocalizations after hearing sounds. Researchers long considered this skill to be practiced only by humans, songbirds, and cetaceans, but have more recently identified examples of vocal learning in seals, sea lions, elephants—and now, bats. In humans, vocal learning can take the form of adopting styles of speech—for example, if a Brit were to pick up an Australian accent after moving down under. Yossi Yovel, a physicist turned bat biologist at Tel Aviv University who has long been fascinated by animal behavior, recently demonstrated that bat pups can acquire “dialects” in a similar way. © 1986-2018 The Scientist

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 24529 - Posted: 01.16.2018

By James Hartzell A hundred dhoti-clad young men sat cross-legged on the floor in facing rows, chatting amongst themselves. At a sign from their teacher the hall went quiet. Then they began the recitation. Without pause or error, entirely from memory, one side of the room intoned one line of the text, then the other side of the room answered with the next line. Bass and baritone voices filled the hall with sonorous prosody, every word distinctly heard, their right arms moving together to mark pitch and accent. The effect was hypnotic, ancient sound reverberating through the room, saturating brain and body. After 20 minutes they halted, in unison. It was just a demonstration. The full recitation of one of India's most ancient Sanskrit texts, the Shukla Yajurveda, takes six hours. I spent many years studying and translating Sanskrit, and became fascinated by its apparent impact on mind and memory. In India's ancient learning methods, textual memorization is standard: traditional scholars, or pandits, master many different types of Sanskrit poetry and prose texts; and the tradition holds that exactly memorizing and reciting the ancient words and phrases, known as mantras, enhances both memory and thinking. I had also noticed that the more Sanskrit I studied and translated, the better my verbal memory seemed to become. Fellow students and teachers often remarked on my ability to exactly repeat lecturers’ own sentences when asking them questions in class. Other translators of Sanskrit told me of similar cognitive shifts. So I was curious: was there actually a language-specific “Sanskrit effect” as claimed by the tradition? © 2018 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 14: Attention and Consciousness
Link ID: 24479 - Posted: 01.03.2018

By DANIEL T. WILLINGHAM Americans are not good readers. Many blame the ubiquity of digital media. We’re too busy on Snapchat to read, or perhaps internet skimming has made us incapable of reading serious prose. But Americans’ trouble with reading predates digital technologies. The problem is not bad reading habits engendered by smartphones, but bad education habits engendered by a misunderstanding of how the mind reads. Just how bad is our reading problem? The last National Assessment of Adult Literacy from 2003 is a bit dated, but it offers a picture of Americans’ ability to read in everyday situations: using an almanac to find a particular fact, for example, or explaining the meaning of a metaphor used in a story. Of those who finished high school but did not continue their education, 13 percent could not perform simple tasks like these. When things got more complex — in comparing two newspaper editorials with different interpretations of scientific evidence or examining a table to evaluate credit card offers — 95 percent failed. There’s no reason to think things have gotten better. Scores for high school seniors on the National Assessment of Educational Progress reading test haven’t improved in 30 years. Many of these poor readers can sound out words from print, so in that sense, they can read. Yet they are functionally illiterate — they comprehend very little of what they can sound out. So what does comprehension require? Broad vocabulary, obviously. Equally important, but more subtle, is the role played by factual knowledge. All prose has factual gaps that must be filled by the reader. Consider “I promised not to play with it, but Mom still wouldn’t let me bring my Rubik’s Cube to the library.” The author has omitted three facts vital to comprehension: you must be quiet in a library; Rubik’s Cubes make noise; kids don’t resist tempting toys very well. If you don’t know these facts, you might understand the literal meaning of the sentence, but you’ll miss why Mom forbade the toy in the library. Knowledge also provides context. For example, the literal meaning of last year’s celebrated fake-news headline, “Pope Francis Shocks World, Endorses Donald Trump for President,” is unambiguous — no gap-filling is needed. But the sentence carries a different implication if you know anything about the public (and private) positions of the men involved, or you’re aware that no pope has ever endorsed a presidential candidate. © 2017 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 24363 - Posted: 11.26.2017