Links for Keyword: Language

Links 1 - 20 of 694

By Cathleen O’Grady Human conversations are rapid-fire affairs, with mere milliseconds passing between one person’s utterance and their partner’s response. This speedy turn taking is universal across cultures—but now it turns out that chimpanzees do it, too. By analyzing thousands of gestures from chimpanzees in five different communities in East Africa, researchers found that the animals take turns while communicating, and do so as quickly as we do. The speedy gestural conversations are also seen across chimp communities, just like in humans, the authors report today in Current Biology. The finding is “very exciting,” says Maël Leroux, an evolutionary biologist at the University of Rennes who was not involved with the work. “Language is the hallmark of our species … and a central feature of language is our ability to take turns.” Finding a similar behavior in our closest living relative, he says, suggests we may have inherited this ability from our shared common ancestor. When chimps gesture—such as reaching out an arm in a begging gesture—they are most often making a request, says Gal Badihi, an animal communication researcher at the University of St Andrews. This can include things such as “groom me,” “give me,” or “travel with me.” Most of the time, the chimp’s partner does the requested behavior. But sometimes, the second chimp will respond with its own gestures instead—for instance, one chimp requesting grooming, and the other indicating where they would like to be groomed, essentially saying “groom me first.” To figure out whether these interactions resemble human turn taking, Badihi and colleagues combed through hundreds of hours of footage from a massive database of chimpanzee gestural interactions recorded by multiple researchers across decades of fieldwork in East Africa. The scientists studied the footage, describing the precise movements each chimp made when gesturing, the response of other chimps, the duration of the gestures, and other details. © 2024 American Association for the Advancement of Science.
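
A minimal sketch of the kind of response-latency calculation the turn-taking claim rests on, using made-up gesture timings and placeholder community labels (none of these values come from the study):

```python
from collections import defaultdict
from statistics import median

# Hypothetical records: (community, gesture_end_s, response_start_s).
# Community names and timings are placeholders, not data from the study.
exchanges = [
    ("Sonso",   12.40, 12.52),
    ("Sonso",   30.10, 30.21),
    ("Waibira", 45.00, 45.65),
    ("Waibira", 61.30, 61.42),
]

# Turn-taking latency = gap between the end of one chimp's gesture and the
# start of its partner's responding gesture (a negative gap means overlap).
latencies = defaultdict(list)
for community, gesture_end, response_start in exchanges:
    latencies[community].append(response_start - gesture_end)

for community, gaps in latencies.items():
    print(f"{community}: median gap {median(gaps) * 1000:.0f} ms over {len(gaps)} exchanges")
```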

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29403 - Posted: 07.23.2024

By Sara Reardon By eavesdropping on the brains of living people, scientists have created the highest-resolution map yet of the neurons that encode the meanings of various words1. The results hint that, across individuals, the brain uses the same standard categories to classify words — helping us to turn sound into sense. The study is based on words only in English. But it’s a step along the way to working out how the brain stores words in its language library, says neurosurgeon Ziv Williams at the Massachusetts Institute of Technology in Cambridge. By mapping the overlapping sets of brain cells that respond to various words, he says, “we can try to start building a thesaurus of meaning”. The brain area called the auditory cortex processes the sound of a word as it enters the ear. But it is the brain’s prefrontal cortex, a region where higher-order brain activity takes place, that works out a word’s ‘semantic meaning’ — its essence or gist. Previous research2 has studied this process by analysing images of blood flow in the brain, which is a proxy for brain activity. This method allowed researchers to map word meaning to small regions of the brain. But Williams and his colleagues found a unique opportunity to look at how individual neurons encode language in real time. His group recruited ten people about to undergo surgery for epilepsy, each of whom had had electrodes implanted in their brains to determine the source of their seizures. The electrodes allowed the researchers to record activity from around 300 neurons in each person’s prefrontal cortex. © 2024 Springer Nature Limited
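
To illustrate the idea of mapping word meanings onto single-neuron activity, here is a toy sketch that groups hypothetical firing rates by semantic category and computes a crude selectivity index; both the data and the index are illustrative assumptions, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing rates (spikes/s) for one prefrontal neuron, with word
# presentations grouped by the semantic category each word belongs to.
categories = {
    "food":    rng.normal(12.0, 2.0, size=40),
    "actions": rng.normal(6.0, 2.0, size=40),
    "animals": rng.normal(6.5, 2.0, size=40),
}

# A simple selectivity index: how far a category's mean rate sits above the
# grand mean, in units of the pooled standard deviation across all trials.
all_rates = np.concatenate(list(categories.values()))
grand_mean, pooled_sd = all_rates.mean(), all_rates.std()
for name, rates in categories.items():
    z = (rates.mean() - grand_mean) / pooled_sd
    print(f"{name:8s} mean {rates.mean():5.1f} Hz, selectivity z ≈ {z:+.2f}")
```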

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29383 - Posted: 07.06.2024

By Carl Zimmer For thousands of years, philosophers have argued about the purpose of language. Plato believed it was essential for thinking. Thought “is a silent inner conversation of the soul with itself,” he wrote. Many modern scholars have advanced similar views. Starting in the 1960s, Noam Chomsky, a linguist at M.I.T., argued that we use language for reasoning and other forms of thought. “If there is a severe deficit of language, there will be severe deficit of thought,” he wrote. As an undergraduate, Evelina Fedorenko took Dr. Chomsky’s class and heard him describe his theory. “I really liked the idea,” she recalled. But she was puzzled by the lack of evidence. “A lot of things he was saying were just stated as if they were facts — the truth,” she said. Dr. Fedorenko went on to become a cognitive neuroscientist at M.I.T., using brain scanning to investigate how the brain produces language. And after 15 years, her research has led her to a startling conclusion: We don’t need language to think. “When you start evaluating it, you just don’t find support for this role of language in thinking,” she said. When Dr. Fedorenko began this work in 2009, studies had found that the same brain regions required for language were also active when people reasoned or carried out arithmetic. But Dr. Fedorenko and other researchers discovered that this overlap was a mirage. Part of the trouble with the early results was that the scanners were relatively crude. Scientists made the most of their fuzzy scans by combining the results from all their volunteers, creating an overall average of brain activity. © 2024 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 29376 - Posted: 07.03.2024

By Amanda Heidt For the first time, a brain implant has helped a bilingual person who is unable to articulate words to communicate in both of his languages. An artificial-intelligence (AI) system coupled to the brain implant decodes, in real time, what the individual is trying to say in either Spanish or English. The findings1, published on 20 May in Nature Biomedical Engineering, provide insights into how our brains process language, and could one day lead to long-lasting devices capable of restoring multilingual speech to people who can’t communicate verbally. “This new study is an important contribution for the emerging field of speech-restoration neuroprostheses,” says Sergey Stavisky, a neuroscientist at the University of California, Davis, who was not involved in the study. Even though the study included only one participant and more work remains to be done, “there’s every reason to think that this strategy will work with higher accuracy in the future when combined with other recent advances”, Stavisky says. The person at the heart of the study, who goes by the nickname Pancho, had a stroke at age 20 that paralysed much of his body. As a result, he can moan and grunt but cannot speak clearly. In his thirties, Pancho partnered with Edward Chang, a neurosurgeon at the University of California, San Francisco, to investigate the stroke’s lasting effects on his brain. In a groundbreaking study published in 2021, Chang’s team surgically implanted electrodes on Pancho’s cortex to record neural activity, which was translated into words on a screen. Pancho’s first sentence — ‘My family is outside’ — was interpreted in English. But Pancho is a native Spanish speaker who learnt English only after his stroke. It’s Spanish that still evokes in him feelings of familiarity and belonging. “What languages someone speaks are actually very linked to their identity,” Chang says. “And so our long-term goal has never been just about replacing words, but about restoring connection for people.” © 2024 Springer Nature Limited
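
The study's decoder is a neural network trained on cortical recordings; the toy sketch below only illustrates the core idea of jointly recovering a word and its language from a neural feature vector, using made-up templates, an invented four-word vocabulary, and a nearest-centroid rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: 64-dimensional neural feature vectors recorded
# while the participant attempts words from a small bilingual vocabulary.
vocab = ["hello", "family", "hola", "familia"]
centroids = {w: rng.normal(size=64) for w in vocab}           # class templates

def simulate_trial(word):
    """A noisy attempted-speech trial for a given word (purely synthetic)."""
    return centroids[word] + rng.normal(scale=0.5, size=64)

def decode(features):
    """Nearest-centroid decoding: pick the closest template, then its language."""
    word = min(vocab, key=lambda w: np.linalg.norm(features - centroids[w]))
    language = "English" if word in ("hello", "family") else "Spanish"
    return word, language

print(decode(simulate_trial("familia")))   # e.g. ('familia', 'Spanish')
```

A real decoder replaces the nearest-centroid rule with models trained on many attempted-speech trials, but the two-step output, a word plus the language it belongs to, is the part this sketch is meant to show.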

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29321 - Posted: 05.23.2024

By Emily Anthes Half a century ago, one of the hottest questions in science was whether humans could teach animals to talk. Scientists tried using sign language to converse with apes and trained parrots to deploy growing English vocabularies. The work quickly attracted media attention — and controversy. The research lacked rigor, critics argued, and what seemed like animal communication could simply have been wishful thinking, with researchers unconsciously cuing their animals to respond in certain ways. In the late 1970s and early 1980s, the research fell out of favor. “The whole field completely disintegrated,” said Irene Pepperberg, a comparative cognition researcher at Boston University, who became known for her work with an African gray parrot named Alex. Today, advances in technology and a growing appreciation for the sophistication of animal minds have renewed interest in finding ways to bridge the species divide. Pet owners are teaching their dogs to press “talking buttons” and zoos are training their apes to use touch screens. In a cautious new paper, a team of scientists outlines a framework for evaluating whether such tools might give animals new ways to express themselves. The research is designed “to rise above some of the things that have been controversial in the past,” said Jennifer Cunha, a visiting research associate at Indiana University. The paper, which is being presented at a science conference on Tuesday, focuses on Ms. Cunha’s parrot, an 11-year-old Goffin’s cockatoo named Ellie. Since 2019, Ms. Cunha has been teaching Ellie to use an interactive “speech board,” a tablet-based app that contains more than 200 illustrated icons, corresponding to words and phrases including “sunflower seeds,” “happy” and “I feel hot.” When Ellie presses on an icon with her tongue, a computerized voice speaks the word or phrase aloud. In the new study, Ms. Cunha and her colleagues did not set out to determine whether Ellie’s use of the speech board amounted to communication. Instead, they used quantitative, computational methods to analyze Ellie’s icon presses to learn more about whether the speech board had what they called “expressive and enrichment potential.” © 2024 The New York Times Company
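
One way to get a feel for what an analysis of "expressive potential" might examine is the diversity of the icon presses themselves; the sketch below computes simple counts and Shannon entropy over a hypothetical press log (this is not the study's actual method or data):

```python
import math
from collections import Counter

# Hypothetical log of icon presses from the tablet speech board.
presses = ["sunflower seeds", "happy", "sunflower seeds", "I feel hot",
           "happy", "sunflower seeds", "play", "happy"]

counts = Counter(presses)
total = sum(counts.values())

# Shannon entropy (bits): one rough measure of how varied the icon use is,
# compared with the maximum possible for this many distinct icons.
entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
max_entropy = math.log2(len(counts))

print(counts.most_common())
print(f"entropy {entropy:.2f} bits of a possible {max_entropy:.2f}")
```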

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29306 - Posted: 05.14.2024

Ian Sample Science editor Dogs understand what certain words stand for, according to researchers who monitored the brain activity of willing pooches while they were shown balls, slippers, leashes and other highlights of the domestic canine world. The finding suggests that the dog brain can reach beyond commands such as “sit” and “fetch”, and the frenzy-inducing “walkies”, to grasp the essence of nouns, or at least those that refer to items the animals care about. “I think the capacity is there in all dogs,” said Marianna Boros, who helped arrange the experiments at Eötvös Loránd University in Hungary. “This changes our understanding of language evolution and our sense of what is uniquely human.” Scientists have long been fascinated by whether dogs can truly learn the meanings of words and have built up some evidence to back the suspicion. A survey in 2022 found that dog owners believed their furry companions responded to between 15 and 215 words. More direct evidence for canine cognitive prowess came in 2011 when psychologists in South Carolina reported that after three years of intensive training, a border collie called Chaser had learned the names of more than 1,000 objects, including 800 cloth toys, 116 balls and 26 Frisbees. However, studies have said little about what is happening in the canine brain when it processes words. To delve into the mystery, Boros and her colleagues invited 18 dog owners to bring their pets to the laboratory along with five objects the animals knew well. These included balls, slippers, Frisbees, rubber toys, leads and other items. At the lab, the owners were instructed to say words for objects before showing their dog either the correct item or a different one. For example, an owner might say “Look, here’s the ball”, but hold up a Frisbee instead. The experiments were repeated multiple times with matching and non-matching objects. © 2024 Guardian News & Media Limited
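
The logic of the match/mismatch design can be illustrated with a toy comparison of trial-averaged response amplitudes, assuming EEG-style measurements; the numbers below are invented and the simple effect-size calculation stands in for the study's actual recordings and statistics:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-trial response amplitudes (microvolts) from one dog,
# time-locked to the moment the object is revealed after the spoken word.
match_trials    = rng.normal(loc=2.0, scale=1.0, size=30)   # word matches object
mismatch_trials = rng.normal(loc=3.2, scale=1.0, size=30)   # word does not match

# The logic of the paradigm: a reliably larger response to mismatching objects
# suggests the spoken word set up an expectation about what would appear.
diff = mismatch_trials.mean() - match_trials.mean()
pooled_sd = np.sqrt((match_trials.var(ddof=1) + mismatch_trials.var(ddof=1)) / 2)
print(f"mismatch - match = {diff:.2f} µV (effect size d ≈ {diff / pooled_sd:.2f})")
```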

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29214 - Posted: 03.26.2024

By Cathleen O’Grady Why do some children learn to talk earlier than others? Linguists have pointed to everything from socioeconomic status to gender to the number of languages their parents speak. But a new study finds a simpler explanation. An analysis of nearly 40,000 hours of audio recordings from children around the world suggests kids speak more when the adults around them are more talkative, which may also give them a larger vocabulary early in life. Factors such as social class appear to make no difference, researchers report this month in the Proceedings of the National Academy of Sciences. The paper is a “wonderful, impactful, and much needed contribution to the literature,” says Ece Demir-Lira, a developmental scientist at the University of Iowa who was not involved in the work. By looking at real-life language samples from six different continents, she says, the study provides a global view of language development sorely lacking from the literature. Most studies on language learning have focused on children in Western, industrialized nations. To build a more representative data set, Harvard University developmental psychologist Elika Bergelson and her collaborators scoured the literature for studies that had used LENA devices: small audio recorders that babies can wear—tucked into a pocket on a specially made vest—for days at a time. These devices function as a kind of “talk pedometer,” with an algorithm that estimates how much its wearer speaks, as well as how much language they hear in their environment—from parents, other adults, and even siblings. The team asked 18 research groups across 12 countries whether they would share their data from the devices, leaving them with a whopping 2865 days of recordings from 1001 children. Many of the kids, who ranged from 2 months to 4 years old, were from English-speaking families, but the data also included speakers of Dutch, Spanish, Vietnamese, and Finnish, as well as Yélî Dnye (Papua New Guinea), Wolof (Senegal), and Tsimané (Bolivia). Combining these smaller data sets gave the researchers a more powerful, diverse sample.
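
A minimal sketch of the kind of relationship the study tests, relating how much adult speech a child hears to how much the child vocalizes, using simulated per-day counts and an ordinary least-squares fit (all numbers are invented; the paper's models are more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-day LENA-style counts for a set of recording days:
# how much adult speech the child heard, and how much the child vocalized.
adult_words_heard   = rng.uniform(2_000, 20_000, size=200)
child_vocalizations = 0.05 * adult_words_heard + rng.normal(0, 150, size=200)

# Ordinary least-squares slope: extra child vocalizations per extra adult word heard.
slope, intercept = np.polyfit(adult_words_heard, child_vocalizations, deg=1)
r = np.corrcoef(adult_words_heard, child_vocalizations)[0, 1]
print(f"slope ≈ {slope:.3f} vocalizations per adult word heard, r ≈ {r:.2f}")
```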

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 29061 - Posted: 12.22.2023

By Sonia Shah Can a mouse learn a new song? Such a question might seem whimsical. Though humans have lived alongside mice for at least 15,000 years, few of us have ever heard mice sing, because they do so in frequencies beyond the range detectable by human hearing. As pups, their high-pitched songs alert their mothers to their whereabouts; as adults, they sing in ultrasound to woo one another. For decades, researchers considered mouse songs instinctual, the fixed tunes of a windup music box, rather than the mutable expressions of individual minds. But no one had tested whether that was really true. In 2012, a team of neurobiologists at Duke University, led by Erich Jarvis, a neuroscientist who studies vocal learning, designed an experiment to find out. The team surgically deafened five mice and recorded their songs in a mouse-size sound studio, tricked out with infrared cameras and microphones. They then compared sonograms of the songs of deafened mice with those of hearing mice. If the mouse songs were innate, as long presumed, the surgical alteration would make no difference at all. Jarvis and his researchers slowed down the tempo and shifted the pitch of the recordings, so that they could hear the songs with their own ears. Those of the intact mice sounded “remarkably similar to some bird songs,” Jarvis wrote in a 2013 paper that described the experiment, with whistlelike syllables similar to those in the songs of canaries and the trills of dolphins. Not so the songs of the deafened mice: Deprived of auditory feedback, their songs became degraded, rendering them nearly unrecognizable. They sounded, the scientists noted, like “squawks and screams.” Not only did the tunes of a mouse depend on its ability to hear itself and others, but also, as the team found in another experiment, a male mouse could alter the pitch of its song to compete with other male mice for female attention. Inside these murine skills lay clues to a puzzle many have called “the hardest problem in science”: the origins of language. In humans, “vocal learning” is understood as a skill critical to spoken language. Researchers had already discovered the capacity for vocal learning in species other than humans, including in songbirds, hummingbirds, parrots, cetaceans such as dolphins and whales, pinnipeds such as seals, elephants and bats. But given the centuries-old idea that a deep chasm separated human language from animal communications, most scientists understood the vocal learning abilities of other species as unrelated to our own — as evolutionarily divergent as the wing of a bat is to that of a bee. The apparent absence of intermediate forms of language — say, a talking animal — left the question of how language evolved resistant to empirical inquiry. © 2023 The New York Times Company
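
A rough illustration of how song degradation can be quantified from recordings: build a spectrogram and track how much the dominant frequency wanders. The signals below are synthetic stand-ins, not mouse recordings, and the analysis is only a sketch of the general approach:

```python
import numpy as np

rng = np.random.default_rng(4)

def spectrogram(signal, fs, win=1024, hop=512):
    """Magnitude short-time Fourier transform; columns are time frames."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T, np.fft.rfftfreq(win, 1 / fs)

fs = 250_000                              # ultrasonic calls need a high sample rate
t = np.arange(0, 0.05, 1 / fs)

# Synthetic stand-ins for syllables: one with a steady 70 kHz pitch,
# one whose pitch drifts erratically (a crude proxy for a degraded song).
steady_pitch   = np.full(t.size, 70_000.0)
drifting_pitch = 70_000.0 + np.cumsum(rng.normal(0, 150.0, t.size))
syllables = {
    "steady":   np.sin(2 * np.pi * np.cumsum(steady_pitch) / fs),
    "drifting": np.sin(2 * np.pi * np.cumsum(drifting_pitch) / fs),
}

for name, sig in syllables.items():
    spec, freqs = spectrogram(sig, fs)
    peak_track = freqs[spec.argmax(axis=0)]        # dominant frequency per frame
    print(f"{name}: peak-frequency SD ≈ {peak_track.std():,.0f} Hz")
```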

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28921 - Posted: 09.21.2023

By R. Douglas Fields One day, while threading a needle to sew a button, I noticed that my tongue was sticking out. The same thing happened later, as I carefully cut out a photograph. Then another day, as I perched precariously on a ladder painting the window frame of my house, there it was again! What’s going on here? I’m not deliberately protruding my tongue when I do these things, so why does it keep making appearances? After all, it’s not as if that versatile lingual muscle has anything to do with controlling my hands. Right? Yet as I would learn, our tongue and hand movements are intimately interrelated at an unconscious level. This peculiar interaction’s deep evolutionary roots even help explain how our brain can function without conscious effort. A common explanation for why we stick out our tongue when we perform precision hand movements is something called motor overflow. In theory, it can take so much cognitive effort to thread a needle (or perform other demanding fine motor skills) that our brain circuits get swamped and impinge on adjacent circuits, activating them inappropriately. It’s certainly true that motor overflow can happen after neural injury or in early childhood when we are learning to control our bodies. But I have too much respect for our brains to buy that “limited brain bandwidth” explanation. How, then, does this peculiar hand-mouth cross-talk really occur? Tracing the neural anatomy of tongue and hand control to pinpoint where a short circuit might happen, we find first of all that the two are controlled by completely different nerves. This makes sense: A person who suffers a spinal cord injury that paralyzes their hands does not lose their ability to speak. That’s because the tongue is controlled by a cranial nerve, but the hands are controlled by spinal nerves. Simons Foundation

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 5: The Sensorimotor System
Link ID: 28894 - Posted: 08.30.2023

By McKenzie Prillaman When speaking to young kids, humans often use squeaky, high-pitched baby talk. It turns out that some dolphins do, too. Bottlenose dolphin moms modify their individually distinctive whistles when their babies are nearby, researchers report June 26 in the Proceedings of the National Academy of Sciences. This “parentese” might enhance attention, bonding and vocal learning in calves, as it seems to do in humans. During the first few months of life, each common bottlenose dolphin (Tursiops truncatus) develops a unique tune, or signature whistle, akin to a name (SN: 7/22/13). The dolphins shout out their own “names” in the water “likely as a way to keep track of each other,” says marine biologist Laela Sayigh of the Woods Hole Oceanographic Institution. But dolphin moms seem to tweak that tune in the presence of their calves, which tend to stick by mom’s side for three to six years. It’s a change that Sayigh first noticed in a 2009 study published by her student. But “it was just one little piece of this much larger study,” she says. To follow up on that observation, Sayigh and colleagues analyzed signature whistles from 19 female dolphins both with and without their babies close by. Audio recordings were captured from a wild population that lives near Sarasota Bay, Fla., during catch-and-release health assessments that occurred from 1984 to 2018. The researchers examined 40 instances of each dolphin’s signature whistle, verified by the unique way each vocalization’s frequencies change over time. Half of each dolphin’s whistles were voiced in the presence of her baby. When youngsters were around, the moms’ whistles contained, on average, a higher maximum and slightly lower minimum pitch compared with those uttered in the absence of calves, contributing to an overall widened pitch range. © Society for Science & the Public 2000–2023.
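
A minimal sketch of the pitch measurements described above, computing minimum, maximum, and range from hypothetical whistle frequency contours (all values invented for illustration):

```python
import numpy as np

# Hypothetical whistle contours: the tracked frequency (Hz) of one mother's
# signature whistle over time, recorded with and without her calf nearby.
with_calf    = np.array([8_000, 11_500, 16_500, 19_800, 14_000, 7_200])
without_calf = np.array([9_000, 11_000, 14_500, 15_800, 13_000, 9_500])

def contour_stats(contour):
    """Minimum pitch, maximum pitch, and overall pitch range of one contour."""
    return contour.min(), contour.max(), contour.max() - contour.min()

for label, contour in [("calf present", with_calf), ("calf absent", without_calf)]:
    lo, hi, span = contour_stats(contour)
    print(f"{label}: min {lo} Hz, max {hi} Hz, range {span} Hz")
```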

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28835 - Posted: 06.28.2023

By Natalia Mesa Most people will learn one or two languages in their lives. But Vaughn Smith, a 47-year-old carpet cleaner from Washington, D.C., speaks 24. Smith is a hyperpolyglot—a rare individual who speaks more than 10 languages. In a new brain imaging study, researchers peered inside the minds of polyglots like Smith to tease out how language-specific regions in their brains respond to hearing different languages. Familiar languages elicited a stronger reaction than unfamiliar ones, they found, with one important exception: native languages, which provoked relatively little brain activity. This, the authors note, suggests there’s something special about the languages we learn early in life. This study “contributes to our understanding of how our brain learns new things,” says Augusto Buchweitz, a cognitive neuroscientist at the University of Connecticut, Storrs, who was not involved in the work. “The earlier you learn something, the more your brain [adapts] and probably uses less resources.” Scientists have largely ignored what’s going on inside the brains of polyglots—people who speak more than five languages—says Ev Fedorenko, a cognitive neuroscientist at the Massachusetts Institute of Technology who led the new study. “There’s oodles of work on individuals whose language systems are not functioning properly,” she says, but almost none on people with advanced language skills. That’s partly because they account for only 1% of people globally, making it difficult to find enough participants for research. But studying this group can help linguists understand the human “language network,” a set of specialized brain areas located in the left frontal and temporal lobes. These areas help humans with the most basic aspect of understanding language: connecting sounds with meaning, Fedorenko says.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28654 - Posted: 02.04.2023

By Jennifer Szalai “‘R’s’ are hard,” John Hendrickson writes in his new memoir, “Life on Delay: Making Peace With a Stutter,” committing to paper a string of words that would have caused him trouble had he tried to say them out loud. In November 2019, Hendrickson, an editor at The Atlantic, published an article about then-presidential candidate Joe Biden, who talked frequently about “beating” his childhood stutter — a bit of hyperbole that the article finally laid to rest. Biden insisted on his redemptive narrative, even though Hendrickson, who has stuttered since he was 4, could tell when Biden repeated (“I-I-I-I-I”) or blocked (“…”) on certain sounds. The article went viral, putting Hendrickson in the position of being invited to go on television — a “nightmare,” he said on MSNBC at the time, though it did lead to a flood of letters from fellow stutterers, a number of whom he interviewed for this book. “Life on Delay” traces an arc from frustration and isolation to acceptance and community, recounting a lifetime of bullying and well-meaning but ineffectual interventions and what Hendrickson calls “hundreds of awful first impressions.” When he depicts scenes from his childhood it’s often in a real-time present tense, putting us in the room with the boy he was, more than two decades before. Hendrickson also interviews people: experts, therapists, stutterers, his own parents. He calls up his kindergarten teacher, his childhood best friend and the actress Emily Blunt. He reaches out to others who have published personal accounts of stuttering, including The New Yorker’s Nathan Heller and Katharine Preston, the author of a memoir titled “Out With It.” We learn that it’s only been since the turn of the millennium or so that stuttering has been understood as a neurological disorder; that for 75 percent of children who stutter, “the issue won’t follow them to adulthood”; that there’s still disagreement over whether “disfluency” is a matter of language or motor control, because “the research is still a bit of a mess.” © 2023 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 28643 - Posted: 01.27.2023

Holly Else An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December1. Researchers are divided over the implications for science. “I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds. The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use. Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint2 and an editorial3 written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them. The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts. © 2023 Springer Nature Limited
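
A toy sketch of how detector judgments can be scored against known ground truth, the kind of evaluation the study ran with its AI-output detector and human reviewers; the labels below are invented:

```python
# Hypothetical evaluation of a detector that labels abstracts as
# "generated" or "original", scored against known ground truth.
ground_truth  = ["generated", "original", "generated", "original", "generated", "original"]
detector_says = ["generated", "original", "original", "original", "generated", "generated"]

true_pos  = sum(g == d == "generated" for g, d in zip(ground_truth, detector_says))
false_neg = sum(g == "generated" and d == "original" for g, d in zip(ground_truth, detector_says))
false_pos = sum(g == "original" and d == "generated" for g, d in zip(ground_truth, detector_says))

sensitivity = true_pos / (true_pos + false_neg)   # share of generated abstracts caught
print(f"sensitivity {sensitivity:.0%}, false alarms {false_pos}")
```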

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28629 - Posted: 01.14.2023

Xiaofan Lei What comes to mind when you think of someone who stutters? Is that person male or female? Are they weak and nervous, or powerful and heroic? If you have a choice, would you like to marry them, introduce them to your friends or recommend them for a job? Your attitudes toward people who stutter may depend partly on what you think causes stuttering. If you think that stuttering is due to psychological causes, such as being nervous, research suggests that you are more likely to distance yourself from those who stutter and view them more negatively. I am a person who stutters and a doctoral candidate in speech, language and hearing sciences. Growing up, I tried my best to hide my stuttering and to pass as fluent. I avoided sounds and words that I might stutter on. I avoided ordering the dishes I wanted to eat at the school cafeteria to avoid stuttering. I asked my teacher to not call on me in class because I didn’t want to deal with the laughter from my classmates when they heard my stutter. Those experiences motivated me to investigate stuttering so that I can help people who stutter, including myself, to better cope with the condition. In writing about what the scientific field has to say about stuttering and its biological causes, I hope I can reduce the stigma and misunderstanding surrounding the disorder. The most recognizable characteristics of developmental stuttering are the repetitions, prolongations and blocks in people’s speech. People who stutter may also experience muscle tension during speech and exhibit secondary behaviors, such as tics and grimaces. © 2010–2023, The Conversation US, Inc.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28626 - Posted: 01.12.2023

By Jonathan Moens An artificial intelligence can decode words and sentences from brain activity with surprising — but still limited — accuracy. Using only a few seconds of brain activity data, the AI guesses what a person has heard. It lists the correct answer in its top 10 possibilities up to 73 percent of the time, researchers found in a preliminary study. The AI’s “performance was above what many people thought was possible at this stage,” says Giovanni Di Liberto, a computer scientist at Trinity College Dublin who was not involved in the research. Developed at the parent company of Facebook, Meta, the AI could eventually be used to help thousands of people around the world unable to communicate through speech, typing or gestures, researchers report August 25 at arXiv.org. That includes many patients in minimally conscious, locked-in or “vegetative states” — what’s now generally known as unresponsive wakefulness syndrome (SN: 2/8/19). Most existing technologies to help such patients communicate require risky brain surgeries to implant electrodes. This new approach “could provide a viable path to help patients with communication deficits … without the use of invasive methods,” says neuroscientist Jean-Rémi King, a Meta AI researcher currently at the École Normale Supérieure in Paris. King and his colleagues trained a computational tool to detect words and sentences on 56,000 hours of speech recordings from 53 languages. The tool, also known as a language model, learned how to recognize specific features of language both at a fine-grained level — think letters or syllables — and at a broader level, such as a word or sentence. © Society for Science & the Public 2000–2022.
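
A small sketch of the top-10 accuracy metric quoted above: for each brain segment, rank all candidate speech segments by a similarity score and check whether the correct one lands in the top 10. The scores here are simulated, not the model's output:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical decoder output: for each brain recording segment, a similarity
# score against every candidate speech segment in the test set (one is correct).
n_trials, n_candidates = 500, 1_000
scores = rng.normal(size=(n_trials, n_candidates))
correct_idx = rng.integers(0, n_candidates, size=n_trials)
scores[np.arange(n_trials), correct_idx] += 2.5      # give the true segment a boost

# Top-10 accuracy: how often the correct segment appears among the 10
# highest-scoring candidates (the kind of figure cited in the article).
correct_scores = scores[np.arange(n_trials), correct_idx][:, None]
ranks = (scores > correct_scores).sum(axis=1)        # candidates scoring higher
top10 = (ranks < 10).mean()
print(f"top-10 accuracy ≈ {top10:.0%}")
```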

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 5: The Sensorimotor System
Link ID: 28470 - Posted: 09.10.2022

Jason Bruck Bottlenose dolphins’ signature whistles just passed an important test in animal psychology. A new study by my colleagues and me has shown that these animals may use their whistles as namelike concepts. By presenting urine and the sounds of signature whistles to dolphins, my colleagues Vincent Janik, Sam Walmsey and I recently showed that these whistles act as representations of the individuals who own them, similar to human names. For behavioral biologists like us, this is an incredibly exciting result. It is the first time this type of representational naming has been found in any other animal aside from humans. When you hear your friend’s name, you probably picture their face. Likewise, when you smell a friend’s perfume, that can also elicit an image of the friend. This is because humans build mental pictures of each other using more than just one sense. All of the different information from your senses that is associated with a person converges to form a mental representation of that individual - a name with a face, a smell and many other sensory characteristics. Within the first few months of life, dolphins invent their own specific identity calls – called signature whistles. Dolphins often announce their location to or greet other individuals in a pod by sending out their own signature whistles. But researchers have not known if, when a dolphin hears the signature whistle of a dolphin they are familiar with, they actively picture the calling individual. My colleagues and I were interested in determining if dolphin calls are representational in the same way human names invoke many thoughts of an individual. Because dolphins cannot smell, they rely principally on signature whistles to identify each other in the ocean. Dolphins can also copy another dolphin’s whistles as a way to address each other. My previous research showed that dolphins have great memory for each other’s whistles, but scientists argued that a dolphin might hear a whistle, know it sounds familiar, but not remember who the whistle belongs to. My colleagues and I wanted to determine if dolphins could associate signature whistles with the specific owner of that whistle. This would address whether or not dolphins remember and hold representations of other dolphins in their minds. © 2010–2022, The Conversation US, Inc.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28441 - Posted: 08.24.2022

By Oliver Whang Read this sentence aloud, if you’re able. As you do, a cascade of motion begins, forcing air from your lungs through two muscles, which vibrate, sculpting sound waves that pass through your mouth and into the world. These muscles are called vocal cords, or vocal folds, and their vibrations form the foundations of the human voice. They also speak to the emergence and evolution of human language. For several years, a team of scientists based mainly in Japan used imaging technology to study the physiology of the throats of 43 species of primates, from baboons and orangutans to macaques and chimpanzees, as well as humans. All the species but one had a similar anatomical structure: an extra set of protruding muscles, called vocal membranes or vocal lips, just above the vocal cords. The exception was Homo sapiens. The researchers also found that the presence of vocal lips destabilized the other primates’ voices, rendering their tone and timbre more chaotic and unpredictable. Animals with vocal lips have a more grating, less controlled baseline of communication, the study found; humans, lacking the extra membranes, can exchange softer, more stable sounds. The findings were published on Thursday in the journal Science. “It’s an interesting little nuance, this change to the human condition,” said Drew Rendall, a biologist at the University of New Brunswick who was not involved in the research. “The addition, if you want to think of it this way, is actually a subtraction.” That many primates have vocal lips has long been known, but their role in communication has not been entirely clear. In 1984, Sugio Hayama, a biologist at Kyoto University, videotaped the inside of a chimpanzee’s throat to study its reflexes under anesthesia. The video also happened to capture a moment when the chimp woke and began hollering, softly at first, then with more power. Decades later, Takeshi Nishimura, a former student of Dr. Hayama and now a biologist at Kyoto University and the principal investigator of the recent research, studied the footage with renewed interest. He found that the chimp’s vocal lips and vocal cords were vibrating together, which added a layer of mechanical complexity to the chimp’s voice that made it difficult to fine-tune. © 2022 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28426 - Posted: 08.11.2022

By Virginia Morell Babies don’t babble to sound cute—they’re taking their first steps on the path to learning language. Now, a study shows parrot chicks do the same. Although the behavior has been seen in songbirds and two mammalian species, finding it in these birds is important, experts say, as they may provide the best nonhuman model for studying how we begin to learn language. The find is “exciting,” says Irene Pepperberg, a comparative psychologist at Hunter College not involved with the work. Pepperberg herself discovered something like babbling in a famed African gray parrot named Alex, which she studied for more than 30 years. By unearthing the same thing in another parrot species and in the wild, she says, the team has shown this ability is widespread in the birds. In this study, the scientists focused on green-rumped parrotlets (Forpus passerinus)—a smaller species than Alex, found from Venezuela to Brazil. The team investigated a population at Venezuela’s Hato Masaguaral research center, where scientists maintain more than 100 artificial nesting boxes. Like other parrots, songbirds, and humans (and a few other mammal species), parrotlets are vocal learners. They master their calls by listening and mimicking what they hear. The chicks in the new study started to babble at 21 days, according to camcorders installed in a dozen of their nests. They increased the complexity of their sounds dramatically over the next week, the scientists report today in the Proceedings of the Royal Society B. The baby birds uttered strings of soft peeps, clicks, and grrs, but they weren’t communicating with their siblings or parents, says lead author Rory Eggleston, a Ph.D. student at Utah State University. Rather, like a human infant babbling quietly in their crib, a parrotlet chick made the sounds alone. Indeed, most chicks started their babbling bouts when their siblings were asleep, often doing so without even opening their beaks, says Eggleston, who spent hours analyzing videos of the birds. © 2022 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28343 - Posted: 06.01.2022

By Laura Sanders Young kids’ brains are especially tuned to their mothers’ voices. Teenagers’ brains, in their typical rebellious glory, are most decidedly not. That conclusion, described April 28 in the Journal of Neuroscience, may seem laughably obvious to parents of teenagers, including neuroscientist Daniel Abrams of Stanford University School of Medicine. “I have two teenaged boys myself, and it’s a kind of funny result,” he says. But the finding may reflect something much deeper than a punch line. As kids grow up and expand their social connections beyond their family, their brains need to be attuned to that growing world. “Just as an infant is tuned into a mom, adolescents have this whole other class of sounds and voices that they need to tune into,” Abrams says. He and his colleagues scanned the brains of 7- to 16-year-olds as they heard the voices of either their mothers or unfamiliar women. To simplify the experiment down to just the sound of a voice, the words were gibberish: teebudieshawlt, keebudieshawlt and peebudieshawlt. As the children and teenagers listened, certain parts of their brains became active. Previous experiments by Abrams and his colleagues have shown that certain regions of the brains of kids ages 7 to 12 — particularly those parts involved in detecting rewards and paying attention — respond more strongly to mom’s voice than to a voice of an unknown woman. “In adolescence, we show the exact opposite of that,” Abrams says. In these same brain regions in teens, unfamiliar voices elicited greater responses than the voices of their own dear mothers. The shift from mother to other seems to happen between ages 13 and 14. Society for Science & the Public 2000–2022.

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 15: Language and Lateralization
Link ID: 28307 - Posted: 04.30.2022