Links for Keyword: Language

Links 1 - 20 of 676

By Virginia Morell Babies don’t babble to sound cute—they’re taking their first steps on the path to learning language. Now, a study shows parrot chicks do the same. Although the behavior has been seen in songbirds and two mammalian species, finding it in these birds is important, experts say, as they may provide the best nonhuman model for studying how we begin to learn language. The find is “exciting,” says Irene Pepperberg, a comparative psychologist at Hunter College not involved with the work. Pepperberg herself discovered something like babbling in a famed African gray parrot named Alex, which she studied for more than 30 years. By unearthing the same thing in another parrot species and in the wild, she says, the team has shown this ability is widespread in the birds. In this study, the scientists focused on green-rumped parrotlets (Forpus passerinus)—a smaller species than Alex’s, found from Venezuela to Brazil. The team investigated a population at Venezuela’s Hato Masaguaral research center, where scientists maintain more than 100 artificial nesting boxes. Like other parrots, songbirds, and humans (and a few other mammal species), parrotlets are vocal learners. They master their calls by listening and mimicking what they hear. The chicks in the new study started to babble at 21 days, according to camcorders installed in a dozen of their nests. They increased the complexity of their sounds dramatically over the next week, the scientists report today in the Proceedings of the Royal Society B. The baby birds uttered strings of soft peeps, clicks, and grrs, but they weren’t communicating with their siblings or parents, says lead author Rory Eggleston, a Ph.D. student at Utah State University. Rather, like a human infant babbling quietly in their crib, a parrotlet chick made the sounds alone. Indeed, most chicks started their babbling bouts when their siblings were asleep, often doing so without even opening their beaks, says Eggleston, who spent hours analyzing videos of the birds. © 2022 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28343 - Posted: 06.01.2022

By Laura Sanders Young kids’ brains are especially tuned to their mothers’ voices. Teenagers’ brains, in their typical rebellious glory, are most decidedly not. That conclusion, described April 28 in the Journal of Neuroscience, may seem laughably obvious to parents of teenagers, including neuroscientist Daniel Abrams of Stanford University School of Medicine. “I have two teenaged boys myself, and it’s a kind of funny result,” he says. But the finding may reflect something much deeper than a punch line. As kids grow up and expand their social connections beyond their family, their brains need to be attuned to that growing world. “Just as an infant is tuned into a mom, adolescents have this whole other class of sounds and voices that they need to tune into,” Abrams says. He and his colleagues scanned the brains of 7- to 16-year-olds as they heard the voices of either their mothers or unfamiliar women. To simplify the experiment down to just the sound of a voice, the words were gibberish: teebudieshawlt, keebudieshawlt and peebudieshawlt. As the children and teenagers listened, certain parts of their brains became active. Previous experiments by Abrams and his colleagues have shown that certain regions of the brains of kids ages 7 to 12 — particularly those parts involved in detecting rewards and paying attention — respond more strongly to mom’s voice than to a voice of an unknown woman. “In adolescence, we show the exact opposite of that,” Abrams says. In these same brain regions in teens, unfamiliar voices elicited greater responses than the voices of their own dear mothers. The shift from mother to other seems to happen between ages 13 and 14. © Society for Science & the Public 2000–2022.

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 15: Language and Lateralization
Link ID: 28307 - Posted: 04.30.2022

By Katharine Q. Seelye Ursula Bellugi, a pioneer in the study of the biological foundations of language who was among the first to demonstrate that sign language was just as complex, abstract and systematic as spoken language, died on Sunday in San Diego. She was 91. Her death, at an assisted living facility, was confirmed by her son Rob Klima. Dr. Bellugi was a leading researcher at the Salk Institute for Biological Studies in San Diego for nearly five decades and, for much of that time, was director of its laboratory for cognitive neuroscience. She made significant contributions in three main areas: the development of language in children; the linguistic structure and neurological basis of American Sign Language; and the social behavior and language abilities of people with a rare genetic disorder, Williams syndrome. “She leaves an indelible legacy of shedding light on how humans communicate and socialize with each other,” Rusty Gage, president of the Salk Institute, said in a statement. Dr. Bellugi’s work, much of it done in collaboration with her husband, Edward S. Klima, advanced understanding of the brain and the origins of language, both signed and spoken. American Sign Language was first described as a true language in 1960 by William C. Stokoe Jr., a professor at Gallaudet University, the world’s only liberal arts university devoted to deaf people. But he was ridiculed and attacked for that claim. Dr. Bellugi and Dr. Klima, who died in 2008, demonstrated conclusively that the world’s signed languages — of which there are more than 100 — were actual languages in their own right, not just translations of spoken languages. Dr. Bellugi, who focused on American Sign Language, established that these linguistic systems were passed down, in all their complexity, from one generation of deaf people to the next. For that reason, the scientific community regards her as the founder of the neurobiology of American Sign Language. The couple’s work led to a major discovery at the Salk lab: that the left hemisphere of the brain has an innate predisposition for language, whether spoken or signed. That finding gave scientists fresh insight into how the brain learns, interprets and forgets language. © 2022 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28296 - Posted: 04.23.2022

By Tess Joosse My dog Leo clearly knows the difference between my voice and the barks of the beagle next door. When I speak, he looks at me with love; when our canine neighbor makes his mind known, Leo barks back with disdain. A new study backs up what I and my fellow dog owners have long suspected: Dogs’ brains process human and canine vocalizations differently, suggesting they evolved to distinguish our voices from their own. “The fact that dogs use auditory information alone to distinguish between human and dog sound is significant,” says Jeffrey Katz, a cognitive neuroscientist at Auburn University who is not involved with the work. Previous research has found that dogs can match human voices with expressions. When played an audio clip of a lady laughing, for example, they’ll often look at a photo of a smiling woman. But how exactly the canine brain processes sounds isn’t clear. MRI has shown certain regions of the dog brain are more active when a pup hears another dog whine or bark. But those images can’t reveal exactly when neurons in the brain are firing, and whether they fire differently in response to different noises. So in the new study, Anna Bálint, a canine neuroscientist at Eötvös Loránd University, turned to an electroencephalogram, which can measure individual brain waves. She and her colleagues recruited 17 family dogs, including several border collies, golden retrievers, and a German shepherd, that were previously taught to lie still for several minutes at a time. The scientists attached electrodes to each dog’s head to record its brain response—not an easy task, it turns out. Unlike humans’ bony noggins, dog heads have lots of muscles that can obstruct a clear readout, Bálint says. © 2022 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28270 - Posted: 04.06.2022

By Jessica Contrera The carpet cleaner heaves his machine up the stairs, untangles its hoses and promises to dump the dirty water only in the approved toilet. Another day scrubbing rugs for less than $20 an hour. Another Washington area house with overflowing bookshelves and walls covered in travel mementos from places he would love to go one day. But this was not that day. “Tell me about this stain,” 46-year-old Vaughn Smith asks his clients. “Well,” says one of the homeowners, “Schroeder rubbed his bottom across it.” Vaughn knows just what to do about that, and the couple, Courtney Stamm and Kelly Widelska, know they can trust him to do it. They’d been hiring him for years, once watching him erase even a splattered Pepto Bismol stain. But this time when Vaughn called to confirm their January appointment, he quietly explained that there was something about himself that he’d never told them. That he rarely told anyone. And well, a reporter was writing a story about it. Could he please bring her along? Now as they listen to Vaughn discuss the porousness of wool, and the difference between Scotchgard and sanitizer, they can’t help but look at him differently. Once the stool stain is solved, Kelly just has to ask. “So, how many languages do you speak?” “Oh goodness,” Vaughn says. “Eight, fluently.” “Eight?” Kelly marvels. “Eight,” Vaughn confirms. English, Spanish, Bulgarian, Czech, Portuguese, Romanian, Russian and Slovak. “But if you go by like, different grades of how much conversation,” he explains, “I know about 25 more.” Vaughn glances at me. He is still underselling his abilities. By his count, it is actually 37 more languages, with at least 24 he speaks well enough to carry on lengthy conversations. He can read and write in eight alphabets and scripts. He can tell stories in Italian and Finnish and American Sign Language. He’s teaching himself Indigenous languages, from Mexico’s Nahuatl © 1996-2022 The Washington Post

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 28269 - Posted: 04.06.2022

Jonathan Franklin Legendary actor Bruce Willis announced Wednesday his departure from the big screen following his diagnosis with aphasia, which is "impacting his cognitive abilities," his family said in a statement. While details of what led to Willis' aphasia diagnosis are unknown at this time, medical experts stress the seriousness of the brain condition and note that how it is treated depends on its severity. "[At some point], people will know somebody who's had a stroke and has aphasia," Dr. Swathi Kiran, professor of neurorehabilitation at Boston University, told NPR. Aphasia is defined as a condition that affects the ability to speak, write and understand language, according to the Mayo Clinic. The brain disorder can occur after strokes or head injuries — and, in some cases, can even lead to dementia. "As a result of this and with much consideration Bruce is stepping away from the career that has meant so much to him," his daughter, Rumer Willis, said on Instagram. "This is a really challenging time for our family and we are so appreciative of your continued love, compassion and support." Medical experts say the impacts of aphasia can vary, depending on the person's diagnosis. But mainly, the condition affects a person's ability to communicate — whether it's written, spoken or both. People living with aphasia can experience changes in their ability to communicate: they may have difficulty finding words, use words out of order or speak in short, incomplete sentences, according to the American Speech-Language-Hearing Association. © 2022 npr

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28264 - Posted: 04.02.2022

By Bruce Bower Human language, in its many current forms, may owe an evolutionary debt to our distant ape ancestors who sounded off in groups of scattered individuals. Wild orangutans’ social worlds mold how they communicate vocally, much as local communities shape the way people speak, researchers report March 21 in Nature Ecology & Evolution. This finding suggests that social forces began engineering an expanding inventory of communication sounds among ancient ancestors of apes and humans, laying a foundation for the evolution of language, say evolutionary psychologist Adriano Lameira, of the University of Warwick in England, and his colleagues. Lameira’s group recorded predator-warning calls known as “kiss-squeaks” — which typically involve drawing in breath through pursed lips — of 76 orangutans from six populations living on the islands of Borneo and Sumatra, where they face survival threats (SN: 2/15/18). The team tracked the animals and estimated their population densities from 2005 through 2010, with at least five consecutive months of observations and recordings in each population. Analyses of recordings then revealed how much individuals’ kiss-squeaks changed or remained the same over time. Orangutans in high-density populations, which up the odds of frequent social encounters, concoct many variations of kiss-squeaks, the researchers report. Novel reworkings of kiss-squeaks usually get modified further by other orangutans or drop out of use in crowded settings, they say. © Society for Science & the Public 2000–2022.

Related chapters from BN: Chapter 6: Evolution of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28258 - Posted: 03.30.2022

Alejandra Marquez Janse & Christopher Intagliata Imagine you're moving to a new country on the other side of the world. Besides the geographical and cultural changes, a key difference will be the language. But will your pets notice the difference? It was a question that nagged at Laura Cuaya, a brain researcher at the Neuroethology of Communication Lab at Eötvös Loránd University in Budapest. "When I moved from Mexico to Hungary to start my post-doc research, all was new for me. Obviously, here, people in Budapest speak Hungarian. So you've had a different language, completely different for me," she said. The language was also new to her two dogs: Kun Kun and Odín. "People are super friendly with their dogs [in Budapest]. And my dogs, they are interested in interacting with people," Cuaya said. "But I wonder, did they also notice people here ... spoke a different language?" Cuaya set out to find the answer. She and her colleagues designed an experiment with 18 volunteer dogs — including her two border collies — to see if they could differentiate between two languages. Kun Kun and Odín were used to hearing Spanish; the other dogs, Hungarian. The dogs sat still within an MRI machine while listening to an excerpt from the story The Little Prince. They heard one version in Spanish and another in Hungarian. Then the scientists analyzed the dogs' brain activity. © 2022 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28145 - Posted: 01.08.2022

Jon Hamilton When baby mice cry, they do it to a beat that is synchronized to the rise and fall of their own breath. It's a pattern that researchers say could help explain why human infants can cry at birth — and how they learn to speak. Mice are born with a cluster of cells in the brainstem that appears to coordinate the rhythms of breathing and vocalizations, a team reports in the journal Neuron. If similar cells exist in human newborns, they could serve as an important building block for speech: the ability to produce one or many syllables between each breath. The cells also could explain why so many human languages are spoken at roughly the same tempo. "This suggests that there is a hardwired network of neurons that is fundamental to speech," says Dr. Kevin Yackle, the study's senior author and a researcher at the University of California, San Francisco. Scientists who study human speech have spent decades debating how much of our ability is innate and how much is learned. The research adds to the evidence that human speech relies — at least in part — on biological "building blocks" that are present from birth, says David Poeppel, a professor of psychology and neural science at New York University who was not involved in the study. But "there is just a big difference between a mouse brain and a human brain," Poeppel says. So the human version of this building block may not look the same. © 2022 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 8: Hormones and Sex
Link ID: 28144 - Posted: 01.08.2022

Daisy Yuhas Billions of people worldwide speak two or more languages. (Though the estimates vary, many sources assert that more than half of the planet is bilingual or multilingual.) One of the most common experiences for these individuals is a phenomenon that experts call “code switching,” or shifting from one language to another within a single conversation or even a sentence. This month Sarah Frances Phillips, a linguist and graduate student at New York University, and her adviser Liina Pylkkänen published findings from brain imaging that underscore the ease with which these switches happen and reveal how the neurological patterns that support this behavior closely resemble those seen in monolingual people. The new study reveals how code switching—which some multilingual speakers worry is “cheating,” in contrast to sticking to just one language—is normal and natural. Phillips spoke with Mind Matters editor Daisy Yuhas about these findings and why some scientists believe bilingual speakers may have certain cognitive advantages. Can you tell me a little bit about what drew you to this topic? I grew up in a bilingual household. My mother is from South Korea; my dad is African-American. So I grew up code switching a lot between Korean and English, as well as different varieties of English, such as African-American English and the more mainstream, standardized version. When you spend a lot of time code switching, and then you realize that this is something that is not well understood from a linguistic perspective, nor from a neurobiological perspective, you realize, “Oh, this is open territory.” © 2021 Scientific American

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 28095 - Posted: 12.01.2021

Jordana Cepelewicz Hearing is so effortless for most of us that it’s often difficult to comprehend how much information the brain’s auditory system needs to process and disentangle. It has to take incoming sounds and transform them into the acoustic objects that we perceive: a friend’s voice, a dog barking, the pitter-patter of rain. It has to extricate relevant sounds from background noise. It has to determine that a word spoken by two different people has the same linguistic meaning, while also distinguishing between those voices and assessing them for pitch, tone and other qualities. According to traditional models of neural processing, when we hear sounds, our auditory system extracts simple features from them that then get combined into increasingly complex and abstract representations. This process allows the brain to turn the sound of someone speaking, for instance, into phonemes, then syllables, and eventually words. But in a paper published in Cell in August, a team of researchers challenged that model, reporting instead that the auditory system often processes sound and speech simultaneously and in parallel. The findings suggest that how the brain makes sense of speech diverges dramatically from scientists’ expectations, with the signals from the ear branching into distinct brain pathways at a surprisingly early stage in processing — sometimes even bypassing a brain region thought to be a crucial stepping-stone in building representations of complex sounds. Simons Foundation All Rights Reserved © 2021

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28054 - Posted: 10.27.2021

By Sam Roberts Washoe was 10 months old when her foster parents began teaching her to talk, and five months later they were already trumpeting her success. Not only had she learned words; she could also string them together, creating expressions like “water birds” when she saw a pair of swans and “open flower” to gain admittance to a garden. Washoe was a chimpanzee. She had been born in West Africa, probably orphaned when her mother was killed, sold to a dealer, flown to the United States for use in testing by the Air Force and adopted by R. Allen Gardner and his wife, Beatrix. She was raised as if she were a human child. She craved oatmeal with onions and pumpkin pudding. “The object of our research was to learn how much chimps are like humans,” Professor Gardner told Nevada Today, a University of Nevada publication, in 2007. “To measure this accurately, chimps would be needed to be raised as human children, and to do that, we needed to share a common language.” Washoe ultimately learned some 200 words, becoming what researchers said was the first nonhuman to communicate using sign language developed for the deaf. Professor Gardner, an ethologist who, with his wife, raised the chimpanzee for nearly five years, died on Aug. 20 at his ranch near Reno, Nev. He was 91. His death was announced by the University of Nevada, Reno, where he had joined the faculty in 1963 and conducted his research until he retired in 2010. When scientific journals reported in 1967 that Washoe (pronounced WA-sho), named after a county in Nevada, had learned to recognize and use multiple gestures and expressions in sign language, the news electrified the world of psychologists and ethologists who study animal behavior. © 2021 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28013 - Posted: 10.02.2021

Christie Wilcox If it walks like a duck and talks like a person, it’s probably a musk duck (Biziura lobata)—the only waterfowl species known to learn sounds from other species. The Australian species’ facility for vocal learning had been mentioned anecdotally in the ornithological literature; now, a paper published September 6 in Philosophical Transactions of the Royal Society B reviews and discusses the evidence, which includes 34-year-old recordings made of a human-reared musk duck named Ripper engaging in an aggressive display while quacking “you bloody fool.” The Scientist spoke with the lead author on the paper, Leiden University animal behavior researcher Carel ten Cate, to learn more about these unique ducks and what their unexpected ability reveals about the evolution of vocal learning. The Scientist: What is vocal learning? Carel ten Cate: Vocal learning, as it is used in this case, is that animals and humans, they learn their sounds from experience. So they learn from what they hear around them, which will usually be the parents, but it can also be other individuals. And if they don’t get that sort of exposure, then they will be unable to produce species-specific vocalizations, or in the human case, speech sounds and proper spoken language. © 1986–2021 The Scientist.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27987 - Posted: 09.13.2021

By Carolyn Wilke Babies may laugh like some apes a few months after birth before transitioning to chuckling more like human adults, a new study finds. Laughter links humans to great apes, our evolutionary kin (SN: 6/4/09). Human adults tend to laugh while exhaling (SN: 6/10/15), but chimpanzees and bonobos mainly laugh in two ways. One is like panting, with sound produced on both in and out breaths, and the other has outbursts occurring on exhales, like human adults. Less is known about how human babies laugh. So Mariska Kret, a cognitive psychologist at Leiden University in the Netherlands, and colleagues scoured the internet for videos with laughing 3- to 18-month-olds, and asked 15 speech sound specialists and thousands of novices to judge the babies’ laughs. After evaluating dozens of short audio clips, experts and nonexperts alike found that younger infants laughed during inhalation and exhalation, while older infants laughed more on the exhale. That finding suggests that infants’ laughter becomes less apelike with age, the researchers report in the September Biology Letters. Humans start to laugh around 3 months of age, but early on, “it hasn’t reached its full potential,” Kret says. Both babies’ maturing vocal tracts and their social interactions may influence the development of the sounds, the researchers say.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 27983 - Posted: 09.11.2021

By Jonathan Lambert At least 65 million years of evolution separate humans and greater sac-winged bats, but these two mammals share a key feature of learning how to speak: babbling. Just as human infants babble their way from “da-da-da-da” to “Dad,” wild bat pups (Saccopteryx bilineata) learn the mating and territorial songs of adults by first babbling out the fundamental syllables of the vocalizations, researchers report in the Aug. 20 Science. These bats now join humans as the only clear examples of mammals who learn to make complex vocalizations through babbling. “This is a hugely important step forward in the study of vocal learning,” says Tecumseh Fitch, an evolutionary biologist at the University of Vienna not involved in the new study. “These findings suggest that there are deep parallels between how humans and young bats learn to control their vocal apparatus,” he says. The work could enable future studies that might allow researchers to peer deeper into the brain activity that underpins vocal learning. Before complex vocalizations, whether words or mating songs, can be spoken or sung, vocalizers must learn to articulate the syllables that make up a species’s vocabulary, says Ahana Fernandez, an animal behavior biologist at the Museum für Naturkunde in Berlin. “Babbling is a way of practicing,” and honing those vocalizations, she says. The rhythmic, repetitive “ba-ba-ba’s” and “ga-ga-ga’s” of human infants may sound like gibberish, but they are necessary exploratory steps toward learning how to talk. Seeing whether babbling is required for any animal that learns complex vocalizations necessitates looking in other species. © Society for Science & the Public 2000–2021.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27957 - Posted: 08.21.2021

Lydia Denworth Lee Reeves always wanted to be a veterinarian. When he was in high school in the Washington, D.C., suburbs, he went to an animal hospital near his house on a busy Saturday morning to apply for a job. The receptionist said the doctor was too busy to talk. But Reeves was determined and waited. Three and a half hours later, after all the dogs and cats had been seen, the veterinarian emerged and asked Reeves what he could do for him. Reeves, who has stuttered since he was three years old, had trouble answering. “I somehow struggled out the fact that I wanted the job and he asked me what my name was,” he says. “I couldn’t get my name out to save my life.” The vet finally reached for a piece of paper and had Reeves write down his name and add his phone number, but he said there was no job available. “I remember walking out of that clinic that morning thinking that essentially my life was over,” Reeves says. “Not only was I never going to become a veterinarian, but I couldn’t even get a job cleaning cages.” More than 50 years have passed. Reeves, who is now 72, has gone on to become an effective national advocate for people with speech impairments, but the frustration and embarrassment of that day are still vivid. They are also emblematic of the complicated experience that is stuttering. Technically, stuttering is a disruption in the easy flow of speech, but the physical struggle and the emotional effects that often go with it have led observers to wrongly attribute the condition to defects of the tongue or voice box, problems with cognition, emotional trauma or nervousness, forcing left-handed children to become right-handed, and, most unfortunately, poor parenting. Freudian psychiatrists thought stuttering represented “oral-sadistic conflict,” whereas the behaviorists argued that labeling a child a stutterer would exacerbate the problem. Reeves’s parents were told to call no attention to his stutter—wait it out, and it would go away. © 2021 Scientific American

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27942 - Posted: 08.11.2021

By Jonathan Lambert When one naked mole-rat encounters another, the accent of their chirps might reveal whether they’re friends or foes. These social rodents are famous for their wrinkly, hairless appearance. But hang around one of their colonies for a while, and you’ll notice something else — they’re a chatty bunch. Their underground burrows resound with near-constant chirps, grunts, squeaks and squeals. Now, computer algorithms have uncovered a hidden order within this cacophony, researchers report in the Jan. 29 Science. These distinctive chirps, which pups learn when they’re young, help the mostly blind, xenophobic rodents discern who belongs, strengthening the bonds that maintain cohesion in these highly cooperative groups. “Language is really important for extreme social behavior, in humans, dolphins, elephants or birds,” says Thomas Park, a biologist at the University of Illinois Chicago who wasn’t involved in the study. This work shows naked mole-rats (Heterocephalus glaber) belong in those ranks as well, Park says. Naked mole-rat groups seem more like ant or termite colonies than mammalian societies. Every colony has a single breeding queen who suppresses the reproduction of tens to hundreds of nonbreeding worker rats that dig elaborate subterranean tunnels in search of tubers in eastern Africa (SN: 10/18/04). Food is scarce, and the rodents vigorously attack intruders from other colonies. While researchers have long noted the rat’s raucous chatter, few actually studied it. “Naked mole-rats are incredibly cooperative and incredibly vocal, and no one has really looked into how these two features influence one another,” says Alison Barker, a neuroscientist at the Max Delbrück Center for Molecular Medicine in Berlin. © Society for Science & the Public 2000–2021.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27673 - Posted: 01.30.2021

By Lisa Sanders, M.D. “You need to call an ambulance,” the familiar voice from her doctor’s office urged the frightened 59-year-old woman. “Or should I do it for you?” No, she replied shakily. I can do it. The woman looked down at the phone in her hand; there were two of them. She closed one eye and the second phone disappeared. Then she dialed 911. It had been a hellish few days. Five days earlier, she noticed that she was having trouble walking. Her legs couldn’t or wouldn’t follow her brain’s instructions. She had to take these ungainly baby steps to get anywhere. Her muscles felt weak; her feet were inert blocks. Her hands shook uncontrollably. She vomited half a dozen times a day. The week before, she decided to stop drinking, and she recognized the shaking and vomiting as part of that process. The trouble walking, that was new. But that’s not why she called her doctor. The previous day, she was driving home and was just a block away when suddenly there were two of everything. Stone-cold sober and seeing double. There were two dotted lines identifying the middle of her quiet neighborhood street in South Portland, Maine. Two sets of curbs in front of two sets of sidewalks. She stopped the car, rubbed her eyes and discovered that the second objects slid back into the first when one eye stayed covered. She drove home with her face crinkled in an awkward wink. At home, she immediately called her doctor’s office. They wanted to send an ambulance right then. But she didn’t have health insurance. She couldn’t afford either the ambulance or the hospital. She would probably be better by the next day, she told the young woman on the phone. But the next day was the same. And when she called the doctor’s office this time, the medical assistant’s suggestion that she call an ambulance made a lot more sense. The woman was embarrassed by the siren and flashing lights. Her neighbors would be worried. But she couldn’t deny the relief she felt as she watched the ambulance pull up. The E.M.T.s helped her to her feet and onto the stretcher, then drove her to nearby Northern Light Mercy Hospital. © 2020 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27607 - Posted: 12.05.2020

Amber Dance Gerald Maguire has stuttered since childhood, but you might not guess it from talking to him. For the past 25 years, he has been treating his disorder with antipsychotic medications not officially approved for the condition. Only with careful attention might you discern his occasional stumble on multisyllabic words like "statistically" and "pharmaceutical." Maguire has plenty of company: More than 70 million people worldwide, including about 3 million Americans, stutter — they have difficulty with the starting and timing of speech, resulting in halting and repetition. That number includes approximately 5 percent of children (many of whom outgrow the condition) and 1 percent of adults. Their numbers include presidential candidate Joe Biden, deep-voiced actor James Earl Jones, and actress Emily Blunt. Though they and many others, including Maguire, have achieved career success, stuttering can contribute to social anxiety and draw ridicule or discrimination. Maguire, a psychiatrist at the University of California, Riverside, has been treating people who stutter, and researching potential treatments, for decades. He's now embarking on a clinical trial of a new medication, ecopipam, that streamlined speech and improved quality of life in a small pilot study in 2019. Others, meanwhile, are delving into the root causes of stuttering. In past decades, therapists mistakenly attributed stuttering to defects of the tongue and voice box, to anxiety, trauma, or even poor parenting — and some still do. Yet others have long suspected that neurological problems might underlie stuttering, says J. Scott Yaruss, a speech-language pathologist at Michigan State University. The first data to back up that hunch came in 1991, when researchers reported altered blood flow in the brains of people who stuttered. Since then research has made it more apparent that stuttering is all in the brain. "We are in the middle of an absolute explosion of knowledge being developed about stuttering," Yaruss says. © 2020 The Week Publications Inc.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27565 - Posted: 11.04.2020