Links for Keyword: Language
By Jennifer Szalai “‘R’s’ are hard,” John Hendrickson writes in his new memoir, “Life on Delay: Making Peace With a Stutter,” committing to paper a string of words that would have caused him trouble had he tried to say them out loud. In November 2019, Hendrickson, an editor at The Atlantic, published an article about then-presidential candidate Joe Biden, who talked frequently about “beating” his childhood stutter — a bit of hyperbole that the article finally laid to rest. Biden insisted on his redemptive narrative, even though Hendrickson, who has stuttered since he was 4, could tell when Biden repeated (“I-I-I-I-I”) or blocked (“…”) on certain sounds. The article went viral, putting Hendrickson in the position of being invited to go on television — a “nightmare,” he said on MSNBC at the time, though it did lead to a flood of letters from fellow stutterers, a number of whom he interviewed for this book. “Life on Delay” traces an arc from frustration and isolation to acceptance and community, recounting a lifetime of bullying and well-meaning but ineffectual interventions and what Hendrickson calls “hundreds of awful first impressions.” When he depicts scenes from his childhood it’s often in a real-time present tense, putting us in the room with the boy he was, more than two decades before. Hendrickson also interviews people: experts, therapists, stutterers, his own parents. He calls up his kindergarten teacher, his childhood best friend and the actress Emily Blunt. He reaches out to others who have published personal accounts of stuttering, including The New Yorker’s Nathan Heller and Katharine Preston, the author of a memoir titled “Out With It.” We learn that it’s only been since the turn of the millennium or so that stuttering has been understood as a neurological disorder; that for 75 percent of children who stutter, “the issue won’t follow them to adulthood”; that there’s still disagreement over whether “disfluency” is a matter of language or motor control, because “the research is still a bit of a mess.” © 2023 The New York Times Company
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 28643 - Posted: 01.27.2023
Holly Else An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December. Researchers are divided over the implications for science. “I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds. The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use. Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint and an editorial written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them. The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts. © 2023 Springer Nature Limited
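To make the comparison concrete, here is a minimal sketch of the kind of scoring the study describes: run each abstract through a detector and ask how often the predicted label matches the truth. The detector below is a hypothetical stand-in (a crude sentence-length "burstiness" heuristic), not the plagiarism or AI-output detectors Gao's team actually used, and the sample abstracts, labels and threshold are invented for illustration.

```python
# Minimal sketch: score abstracts with a stand-in detector and measure how
# often the predicted label (AI-generated or not) matches the true label.
from statistics import mean, pstdev

def burstiness_score(text: str) -> float:
    """Hypothetical stand-in for an AI-output detector. It assumes human prose
    varies sentence length more than model output, so low variation scores as
    more 'AI-like'. Returns a value in [0, 1]; higher = more likely generated."""
    sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.5  # not enough text to judge
    variation = pstdev(lengths) / mean(lengths)  # coefficient of variation
    return max(0.0, min(1.0, 1.0 - variation))

def detection_rate(abstracts: list[str], labels: list[bool], threshold: float = 0.6) -> float:
    """Fraction of abstracts whose prediction (score >= threshold -> AI-generated)
    matches the true label (True = AI-generated)."""
    correct = sum((burstiness_score(a) >= threshold) == y for a, y in zip(abstracts, labels))
    return correct / len(abstracts)

if __name__ == "__main__":
    samples = ["Background. We assessed outcomes. Results. The cohort improved.",
               "We conducted a randomized trial. Outcomes improved modestly."]
    labels = [True, False]  # toy labels for illustration only
    print(f"detection rate: {detection_rate(samples, labels):.2f}")
```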
Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28629 - Posted: 01.14.2023
Xiaofan Lei What comes to mind when you think of someone who stutters? Is that person male or female? Are they weak and nervous, or powerful and heroic? If you have a choice, would you like to marry them, introduce them to your friends or recommend them for a job? Your attitudes toward people who stutter may depend partly on what you think causes stuttering. If you think that stuttering is due to psychological causes, such as being nervous, research suggests that you are more likely to distance yourself from those who stutter and view them more negatively. I am a person who stutters and a doctoral candidate in speech, language and hearing sciences. Growing up, I tried my best to hide my stuttering and to pass as fluent. I avoided sounds and words that I might stutter on. I avoided ordering the dishes I wanted to eat at the school cafeteria to avoid stuttering. I asked my teacher to not call on me in class because I didn’t want to deal with the laughter from my classmates when they heard my stutter. Those experiences motivated me to investigate stuttering so that I can help people who stutter, including myself, to better cope with the condition. In writing about what the scientific field has to say about stuttering and its biological causes, I hope I can reduce the stigma and misunderstanding surrounding the disorder. The most recognizable characteristics of developmental stuttering are the repetitions, prolongations and blocks in people’s speech. People who stutter may also experience muscle tension during speech and exhibit secondary behaviors, such as tics and grimaces. © 2010–2023, The Conversation US, Inc.
Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28626 - Posted: 01.12.2023
By Jonathan Moens An artificial intelligence can decode words and sentences from brain activity with surprising — but still limited — accuracy. Using only a few seconds of brain activity data, the AI guesses what a person has heard. It lists the correct answer in its top 10 possibilities up to 73 percent of the time, researchers found in a preliminary study. The AI’s “performance was above what many people thought was possible at this stage,” says Giovanni Di Liberto, a computer scientist at Trinity College Dublin who was not involved in the research. Developed at the parent company of Facebook, Meta, the AI could eventually be used to help thousands of people around the world unable to communicate through speech, typing or gestures, researchers report August 25 at arXiv.org. That includes many patients in minimally conscious, locked-in or “vegetative states” — what’s now generally known as unresponsive wakefulness syndrome (SN: 2/8/19). Most existing technologies to help such patients communicate require risky brain surgeries to implant electrodes. This new approach “could provide a viable path to help patients with communication deficits … without the use of invasive methods,” says neuroscientist Jean-Rémi King, a Meta AI researcher currently at the École Normale Supérieure in Paris. King and his colleagues trained a computational tool to detect words and sentences on 56,000 hours of speech recordings from 53 languages. The tool, also known as a language model, learned how to recognize specific features of language both at a fine-grained level — think letters or syllables — and at a broader level, such as a word or sentence. © Society for Science & the Public 2000–2022.
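The "top 10 possibilities up to 73 percent of the time" figure is a top-k accuracy. Below is a minimal sketch of that metric using randomly generated stand-in scores rather than Meta's actual model outputs; the array shapes, candidate-pool size and chance-level demonstration are assumptions for illustration only.

```python
# Minimal sketch of top-k accuracy: the decoder counts as correct on a trial if
# the segment the participant actually heard ranks among its k best candidates.
import numpy as np

def top_k_accuracy(scores: np.ndarray, true_idx: np.ndarray, k: int = 10) -> float:
    """scores: (n_trials, n_candidates) similarity between decoded brain activity
    and each candidate speech segment; true_idx: index of the segment actually heard."""
    top_k = np.argsort(scores, axis=1)[:, -k:]        # k highest-scoring candidates per trial
    hits = (top_k == true_idx[:, None]).any(axis=1)   # was the true segment among them?
    return float(hits.mean())

rng = np.random.default_rng(0)
n_trials, n_candidates = 200, 1000                    # invented sizes for the demo
scores = rng.normal(size=(n_trials, n_candidates))    # random stand-in scores
true_idx = rng.integers(0, n_candidates, size=n_trials)
print(f"chance-level top-10 accuracy: {top_k_accuracy(scores, true_idx):.3f}")  # ~0.01
```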
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 5: The Sensorimotor System
Link ID: 28470 - Posted: 09.10.2022
Jason Bruck Bottlenose dolphins’ signature whistles just passed an important test in animal psychology. A new study by my colleagues and me has shown that these animals may use their whistles as namelike concepts. By presenting urine and the sounds of signature whistles to dolphins, my colleagues Vincent Janik, Sam Walmsey and I recently showed that these whistles act as representations of the individuals who own them, similar to human names. For behavioral biologists like us, this is an incredibly exciting result. It is the first time this type of representational naming has been found in any other animal aside from humans. When you hear your friend’s name, you probably picture their face. Likewise, when you smell a friend’s perfume, that can also elicit an image of the friend. This is because humans build mental pictures of each other using more than just one sense. All of the different information from your senses that is associated with a person converges to form a mental representation of that individual - a name with a face, a smell and many other sensory characteristics. Within the first few months of life, dolphins invent their own specific identity calls – called signature whistles. Dolphins often announce their location to or greet other individuals in a pod by sending out their own signature whistles. But researchers have not known if, when a dolphin hears the signature whistle of a dolphin they are familiar with, they actively picture the calling individual. My colleagues and I were interested in determining if dolphin calls are representational in the same way human names invoke many thoughts of an individual. Because dolphins cannot smell, they rely principally on signature whistles to identify each other in the ocean. Dolphins can also copy another dolphin’s whistles as a way to address each other. My previous research showed that dolphins have great memory for each other’s whistles, but scientists argued that a dolphin might hear a whistle, know it sounds familiar, but not remember who the whistle belongs to. My colleagues and I wanted to determine if dolphins could associate signature whistles with the specific owner of that whistle. This would address whether or not dolphins remember and hold representations of other dolphins in their minds. © 2010–2022, The Conversation US, Inc.
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28441 - Posted: 08.24.2022
By Oliver Whang Read this sentence aloud, if you’re able. As you do, a cascade of motion begins, forcing air from your lungs through two muscles, which vibrate, sculpting sound waves that pass through your mouth and into the world. These muscles are called vocal cords, or vocal folds, and their vibrations form the foundations of the human voice. They also speak to the emergence and evolution of human language. For several years, a team of scientists based mainly in Japan used imaging technology to study the physiology of the throats of 43 species of primates, from baboons and orangutans to macaques and chimpanzees, as well as humans. All the species but one had a similar anatomical structure: an extra set of protruding muscles, called vocal membranes or vocal lips, just above the vocal cords. The exception was Homo sapiens. The researchers also found that the presence of vocal lips destabilized the other primates’ voices, rendering their tone and timbre more chaotic and unpredictable. Animals with vocal lips have a more grating, less controlled baseline of communication, the study found; humans, lacking the extra membranes, can exchange softer, more stable sounds. The findings were published on Thursday in the journal Science. “It’s an interesting little nuance, this change to the human condition,” said Drew Rendall, a biologist at the University of New Brunswick who was not involved in the research. “The addition, if you want to think of it this way, is actually a subtraction.” That many primates have vocal lips has long been known, but their role in communication has not been entirely clear. In 1984, Sugio Hayama, a biologist at Kyoto University, videotaped the inside of a chimpanzee’s throat to study its reflexes under anesthesia. The video also happened to capture a moment when the chimp woke and began hollering, softly at first, then with more power. Decades later, Takeshi Nishimura, a former student of Dr. Hayama and now a biologist at Kyoto University and the principal investigator of the recent research, studied the footage with renewed interest. He found that the chimp’s vocal lips and vocal cords were vibrating together, which added a layer of mechanical complexity to the chimp’s voice that made it difficult to fine-tune. © 2022 The New York Times Company
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28426 - Posted: 08.11.2022
By Virginia Morell Babies don’t babble to sound cute—they’re taking their first steps on the path to learning language. Now, a study shows parrot chicks do the same. Although the behavior has been seen in songbirds and two mammalian species, finding it in these birds is important, experts say, as they may provide the best nonhuman model for studying how we begin to learn language. The find is “exciting,” says Irene Pepperberg, a comparative psychologist at Hunter College not involved with the work. Pepperberg herself discovered something like babbling in a famed African gray parrot named Alex, which she studied for more than 30 years. By unearthing the same thing in another parrot species and in the wild, she says, the team has shown this ability is widespread in the birds. In this study, the scientists focused on green-rumped parrotlets (Forpus passerinus)—a smaller species than Alex, found from Venezuela to Brazil. The team investigated a population at Venezuela’s Hato Masaguaral research center, where scientists maintain more than 100 artificial nesting boxes. Like other parrots, songbirds, and humans (and a few other mammal species), parrotlets are vocal learners. They master their calls by listening and mimicking what they hear. The chicks in the new study started to babble at 21 days, according to camcorders installed in a dozen of their nests. They increased the complexity of their sounds dramatically over the next week, the scientists report today in the Proceedings of the Royal Society B. The baby birds uttered strings of soft peeps, clicks, and grrs, but they weren’t communicating with their siblings or parents, says lead author Rory Eggleston, a Ph.D. student at Utah State University. Rather, like a human infant babbling quietly in their crib, a parrotlet chick made the sounds alone. Indeed, most chicks started their babbling bouts when their siblings were asleep, often doing so without even opening their beaks, says Eggleston, who spent hours analyzing videos of the birds. © 2022 American Association for the Advancement of Science.
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28343 - Posted: 06.01.2022
By Laura Sanders Young kids’ brains are especially tuned to their mothers’ voices. Teenagers’ brains, in their typical rebellious glory, are most decidedly not. That conclusion, described April 28 in the Journal of Neuroscience, may seem laughably obvious to parents of teenagers, including neuroscientist Daniel Abrams of Stanford University School of Medicine. “I have two teenaged boys myself, and it’s a kind of funny result,” he says. But the finding may reflect something much deeper than a punch line. As kids grow up and expand their social connections beyond their family, their brains need to be attuned to that growing world. “Just as an infant is tuned into a mom, adolescents have this whole other class of sounds and voices that they need to tune into,” Abrams says. He and his colleagues scanned the brains of 7- to 16-year-olds as they heard the voices of either their mothers or unfamiliar women. To simplify the experiment down to just the sound of a voice, the words were gibberish: teebudieshawlt, keebudieshawlt and peebudieshawlt. As the children and teenagers listened, certain parts of their brains became active. Previous experiments by Abrams and his colleagues have shown that certain regions of the brains of kids ages 7 to 12 — particularly those parts involved in detecting rewards and paying attention — respond more strongly to mom’s voice than to a voice of an unknown woman. “In adolescence, we show the exact opposite of that,” Abrams says. In these same brain regions in teens, unfamiliar voices elicited greater responses than the voices of their own dear mothers. The shift from mother to other seems to happen between ages 13 and 14. Society for Science & the Public 2000–2022.
Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 15: Language and Lateralization
Link ID: 28307 - Posted: 04.30.2022
By Katharine Q. Seelye Ursula Bellugi, a pioneer in the study of the biological foundations of language who was among the first to demonstrate that sign language was just as complex, abstract and systematic as spoken language, died on Sunday in San Diego. She was 91. Her death, at an assisted living facility, was confirmed by her son Rob Klima. Dr. Bellugi was a leading researcher at the Salk Institute for Biological Studies in San Diego for nearly five decades and, for much of that time, was director of its laboratory for cognitive neuroscience. She made significant contributions in three main areas: the development of language in children; the linguistic structure and neurological basis of American Sign Language; and the social behavior and language abilities of people with a rare genetic disorder, Williams syndrome. “She leaves an indelible legacy of shedding light on how humans communicate and socialize with each other,” Rusty Gage, president of the Salk Institute, said in a statement. Dr. Bellugi’s work, much of it done in collaboration with her husband, Edward S. Klima, advanced understanding of the brain and the origins of language, both signed and spoken. American Sign Language was first described as a true language in 1960 by William C. Stokoe Jr., a professor at Gallaudet University, the world’s only liberal arts university devoted to deaf people. But he was ridiculed and attacked for that claim. Dr. Bellugi and Dr. Klima, who died in 2008, demonstrated conclusively that the world’s signed languages — of which there are more than 100 — were actual languages in their own right, not just translations of spoken languages. Dr. Bellugi, who focused on American Sign Language, established that these linguistic systems were passed down, in all their complexity, from one generation of deaf people to the next. For that reason, the scientific community regards her as the founder of the neurobiology of American Sign Language. The couple’s work led to a major discovery at the Salk lab: that the left hemisphere of the brain has an innate predisposition for language, whether spoken or signed. That finding gave scientists fresh insight into how the brain learns, interprets and forgets language. © 2022 The New York Times Company
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28296 - Posted: 04.23.2022
By Tess Joosse My dog Leo clearly knows the difference between my voice and the barks of the beagle next door. When I speak, he looks at me with love; when our canine neighbor makes his mind known, Leo barks back with disdain. A new study backs up what I and my fellow dog owners have long suspected: Dogs’ brains process human and canine vocalizations differently, suggesting they evolved to distinguish our voices from their own. “The fact that dogs use auditory information alone to distinguish between human and dog sound is significant,” says Jeffrey Katz, a cognitive neuroscientist at Auburn University who is not involved with the work. Previous research has found that dogs can match human voices with expressions. When played an audio clip of a lady laughing, for example, they’ll often look at a photo of a smiling woman. But how exactly the canine brain processes sounds isn’t clear. MRI has shown certain regions of the dog brain are more active when a pup hears another dog whine or bark. But those images can’t reveal exactly when neurons in the brain are firing, and whether they fire differently in response to different noises. So in the new study, Anna Bálint, a canine neuroscientist at Eötvös Loránd University, turned to an electroencephalogram, which can measure individual brain waves. She and her colleagues recruited 17 family dogs, including several border collies, golden retrievers, and a German shepherd, that were previously taught to lie still for several minutes at a time. The scientists attached electrodes to each dog’s head to record its brain response—not an easy task, it turns out. Unlike humans’ bony noggins, dog heads have lots of muscles that can obstruct a clear readout, Bálint says. © 2022 American Association for the Advancement of Science.
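For context on what an electroencephalogram analysis of this kind involves, here is a minimal sketch using synthetic data: average many time-locked trials per condition so the noise cancels and the event-related potential (ERP) emerges, then compare conditions. The sampling rate, trial counts and response shapes below are invented; this is not Bálint's analysis code.

```python
# Minimal sketch of ERP averaging with synthetic single-trial EEG data.
import numpy as np

rng = np.random.default_rng(1)
sfreq = 250                               # samples per second (assumed)
times = np.arange(-0.1, 0.6, 1 / sfreq)   # 100 ms before to 600 ms after sound onset

def simulate_trials(n_trials: int, peak_amp: float) -> np.ndarray:
    """Noisy single trials with a Gaussian 'response' peaking ~200 ms after onset."""
    signal = peak_amp * np.exp(-((times - 0.2) ** 2) / (2 * 0.05 ** 2))
    noise = rng.normal(scale=5.0, size=(n_trials, times.size))
    return signal + noise

human_voice_trials = simulate_trials(80, peak_amp=4.0)  # condition 1: human vocalizations
dog_vocal_trials = simulate_trials(80, peak_amp=2.0)    # condition 2: dog vocalizations

# Averaging across trials cancels random noise and leaves the event-related potential.
erp_human = human_voice_trials.mean(axis=0)
erp_dog = dog_vocal_trials.mean(axis=0)

peak = np.argmax(np.abs(erp_human - erp_dog))
print(f"largest human-vs-dog ERP difference at {times[peak] * 1000:.0f} ms post-onset")
```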
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28270 - Posted: 04.06.2022
By Jessica Contrera The carpet cleaner heaves his machine up the stairs, untangles its hoses and promises to dump the dirty water only in the approved toilet. Another day scrubbing rugs for less than $20 an hour. Another Washington area house with overflowing bookshelves and walls covered in travel mementos from places he would love to go one day. But this was not that day. “Tell me about this stain,” 46-year-old Vaughn Smith asks his clients. “Well,” says one of the homeowners, “Schroeder rubbed his bottom across it.” Vaughn knows just what to do about that, and the couple, Courtney Stamm and Kelly Widelska, know they can trust him to do it. They’d been hiring him for years, once watching him erase even a splattered Pepto Bismol stain. But this time when Vaughn called to confirm their January appointment, he quietly explained that there was something about himself that he’d never told them. That he rarely told anyone. And well, a reporter was writing a story about it. Could he please bring her along? Now as they listen to Vaughn discuss the porousness of wool, and the difference between Scotchgard and sanitizer, they can’t help but look at him differently. Once the stool stain is solved, Kelly just has to ask. “So, how many languages do you speak?” “Oh goodness,” Vaughn says. “Eight, fluently.” “Eight?” Kelly marvels. “Eight,” Vaughn confirms. English, Spanish, Bulgarian, Czech, Portuguese, Romanian, Russian and Slovak. “But if you go by like, different grades of how much conversation,” he explains, “I know about 25 more.” Vaughn glances at me. He is still underselling his abilities. By his count, it is actually 37 more languages, with at least 24 he speaks well enough to carry on lengthy conversations. He can read and write in eight alphabets and scripts. He can tell stories in Italian and Finnish and American Sign Language. He’s teaching himself Indigenous languages, from Mexico’s Nahuatl © 1996-2022 The Washington Post
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 28269 - Posted: 04.06.2022
Jonathan Franklin Legendary actor Bruce Willis announced Wednesday his departure from the big screen following his diagnosis with aphasia, which is "impacting his cognitive abilities," his family said in a statement. While details of what led to Willis' aphasia diagnosis are unknown at this time, medical experts stress the importance of the brain condition and how it's specifically treated — depending on its severity. "[At some point], people will know somebody who's had a stroke and has aphasia," Dr. Swathi Kiran, professor of neurorehabilitation at Boston University, told NPR. Aphasia is defined as a condition that affects the ability to speak, write and understand language, according to the Mayo Clinic. The brain disorder can occur after strokes or head injuries — and can even lead in some cases to dementia. "As a result of this and with much consideration Bruce is stepping away from the career that has meant so much to him," his daughter, Rumer Willis, said on Instagram. "This is a really challenging time for our family and we are so appreciative of your continued love, compassion and support." Medical experts say the impacts of aphasia can vary, depending on the person's diagnosis. But mainly, the condition affects a person's ability to communicate — whether it's written, spoken or both. People living with aphasia can experience changes in their ability to communicate: they may have difficulty finding words, use words out of order or speak only in short phrases, according to the American Speech-Language-Hearing Association. © 2022 npr
Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28264 - Posted: 04.02.2022
By Bruce Bower Human language, in its many current forms, may owe an evolutionary debt to our distant ape ancestors who sounded off in groups of scattered individuals. Wild orangutans’ social worlds mold how they communicate vocally, much as local communities shape the way people speak, researchers report March 21 in Nature Ecology & Evolution. This finding suggests that social forces began engineering an expanding inventory of communication sounds among ancient ancestors of apes and humans, laying a foundation for the evolution of language, say evolutionary psychologist Adriano Lameira, of the University of Warwick in England, and his colleagues. Lameira’s group recorded predator-warning calls known as “kiss-squeaks” — which typically involve drawing in breath through pursed lips — of 76 orangutans from six populations living on the islands of Borneo and Sumatra, where they face survival threats (SN: 2/15/18). The team tracked the animals and estimated their population densities from 2005 through 2010, with at least five consecutive months of observations and recordings in each population. Analyses of recordings then revealed how much individuals’ kiss-squeaks changed or remained the same over time. Orangutans in high-density populations, which up the odds of frequent social encounters, concoct many variations of kiss-squeaks, the researchers report. Novel reworkings of kiss-squeaks usually get modified further by other orangutans or drop out of use in crowded settings, they say. © Society for Science & the Public 2000–2022.
Related chapters from BN: Chapter 6: Evolution of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28258 - Posted: 03.30.2022
Alejandra Marquez Janse & Christopher Intagliata Imagine you're moving to a new country on the other side of the world. Besides the geographical and cultural changes, you will find that a key difference is the language. But will your pets notice the difference? It was a question that nagged at Laura Cuaya, a brain researcher at the Neuroethology of Communication Lab at Eötvös Loránd University in Budapest. "When I moved from Mexico to Hungary to start my post-doc research, all was new for me. Obviously, here, people in Budapest speak Hungarian. So you've had a different language, completely different for me," she said. The language was also new to her two dogs: Kun Kun and Odín. "People are super friendly with their dogs [in Budapest]. And my dogs, they are interested in interacting with people," Cuaya said. "But I wonder, did they also notice people here ... spoke a different language?" Cuaya set out to find the answer. She and her colleagues designed an experiment with 18 volunteer dogs — including her two border collies — to see if they could differentiate between two languages. Kun Kun and Odín were used to hearing Spanish; the other dogs, Hungarian. The dogs sat still within an MRI machine while listening to an excerpt from the story The Little Prince. They heard one version in Spanish, and another in Hungarian. Then the scientists analyzed the dogs' brain activity. © 2022 npr
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28145 - Posted: 01.08.2022
Jon Hamilton When baby mice cry, they do it to a beat that is synchronized to the rise and fall of their own breath. It's a pattern that researchers say could help explain why human infants can cry at birth — and how they learn to speak. Mice are born with a cluster of cells in the brainstem that appears to coordinate the rhythms of breathing and vocalizations, a team reports in the journal Neuron. If similar cells exist in human newborns, they could serve as an important building block for speech: the ability to produce one or many syllables between each breath. The cells also could explain why so many human languages are spoken at roughly the same tempo. "This suggests that there is a hardwired network of neurons that is fundamental to speech," says Dr. Kevin Yackle, the study's senior author and a researcher at the University of California, San Francisco. Scientists who study human speech have spent decades debating how much of our ability is innate and how much is learned. The research adds to the evidence that human speech relies — at least in part — on biological "building blocks" that are present from birth, says David Poeppel, a professor of psychology and neural science at New York University who was not involved in the study. But "there is just a big difference between a mouse brain and a human brain," Poeppel says. So the human version of this building block may not look the same. © 2022 npr
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 8: Hormones and Sex
Link ID: 28144 - Posted: 01.08.2022
Daisy Yuhas Billions of people worldwide speak two or more languages. (Though the estimates vary, many sources assert that more than half of the planet is bilingual or multilingual.) One of the most common experiences for these individuals is a phenomenon that experts call “code switching,” or shifting from one language to another within a single conversation or even a sentence. This month Sarah Frances Phillips, a linguist and graduate student at New York University, and her adviser Liina Pylkkänen published findings from brain imaging that underscore the ease with which these switches happen and reveal how the neurological patterns that support this behavior are very similar in monolingual people. The new study reveals how code switching—which some multilingual speakers worry is “cheating,” in contrast to sticking to just one language—is normal and natural. Phillips spoke with Mind Matters editor Daisy Yuhas about these findings and why some scientists believe bilingual speakers may have certain cognitive advantages. Can you tell me a little bit about what drew you to this topic? I grew up in a bilingual household. My mother is from South Korea; my dad is African-American. So I grew up code switching a lot between Korean and English, as well as different varieties of English, such as African-American English and the more mainstream, standardized version. When you spend a lot of time code switching, and then you realize that this is something that is not well understood from a linguistic perspective, nor from a neurobiological perspective, you realize, “Oh, this is open territory.” © 2021 Scientific American
Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28095 - Posted: 12.01.2021
Jordana Cepelewicz Hearing is so effortless for most of us that it’s often difficult to comprehend how much information the brain’s auditory system needs to process and disentangle. It has to take incoming sounds and transform them into the acoustic objects that we perceive: a friend’s voice, a dog barking, the pitter-patter of rain. It has to extricate relevant sounds from background noise. It has to determine that a word spoken by two different people has the same linguistic meaning, while also distinguishing between those voices and assessing them for pitch, tone and other qualities. According to traditional models of neural processing, when we hear sounds, our auditory system extracts simple features from them that then get combined into increasingly complex and abstract representations. This process allows the brain to turn the sound of someone speaking, for instance, into phonemes, then syllables, and eventually words. But in a paper published in Cell in August, a team of researchers challenged that model, reporting instead that the auditory system often processes sound and speech simultaneously and in parallel. The findings suggest that how the brain makes sense of speech diverges dramatically from scientists’ expectations, with the signals from the ear branching into distinct brain pathways at a surprisingly early stage in processing — sometimes even bypassing a brain region thought to be a crucial stepping-stone in building representations of complex sounds.
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 28058 - Posted: 10.30.2021
By Sam Roberts Washoe was 10 months old when her foster parents began teaching her to talk, and five months later they were already trumpeting her success. Not only had she learned words; she could also string them together, creating expressions like “water birds” when she saw a pair of swans and “open flower” to gain admittance to a garden. Washoe was a chimpanzee. She had been born in West Africa, probably orphaned when her mother was killed, sold to a dealer, flown to the United States for use in testing by the Air Force and adopted by R. Allen Gardner and his wife, Beatrix. She was raised as if she were a human child. She craved oatmeal with onions and pumpkin pudding. “The object of our research was to learn how much chimps are like humans,” Professor Gardner told Nevada Today, a University of Nevada publication, in 2007. “To measure this accurately, chimps would be needed to be raised as human children, and to do that, we needed to share a common language.” Washoe ultimately learned some 200 words, becoming what researchers said was the first nonhuman to communicate using sign language developed for the deaf. Professor Gardner, an ethologist who, with his wife, raised the chimpanzee for nearly five years, died on Aug. 20 at his ranch near Reno, Nev. He was 91. His death was announced by the University of Nevada, Reno, where he had joined the faculty in 1963 and conducted his research until he retired in 2010. When scientific journals reported in 1967 that Washoe (pronounced WA-sho), named after a county in Nevada, had learned to recognize and use multiple gestures and expressions in sign language, the news electrified the world of psychologists and ethologists who study animal behavior. © 2021 The New York Times Company
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28013 - Posted: 10.02.2021
Christie Wilcox If it walks like a duck and talks like a person, it’s probably a musk duck (Biziura lobata)—the only waterfowl species known that can learn sounds from other species. The Australian species’ facility for vocal learning had been mentioned anecdotally in the ornithological literature; now, a paper published September 6 in Philosophical Transactions of the Royal Society B reviews and discusses the evidence, which includes 34-year-old recordings made of a human-reared musk duck named Ripper engaging in an aggressive display while quacking “you bloody fool.” The Scientist spoke with the lead author on the paper, Leiden University animal behavior researcher Carel ten Cate, to learn more about these unique ducks and what their unexpected ability reveals about the evolution of vocal learning. The Scientist: What is vocal learning? Carel ten Cate: Vocal learning, as it is used in this case, is that animals and humans, they learn their sounds from experience. So they learn from what they hear around them, which will usually be the parents, but it can also be other individuals. And if they don’t get that sort of exposure, then they will be unable to produce species-specific vocalizations, or in the human case, speech sounds and proper spoken language. © 1986–2021 The Scientist.
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27987 - Posted: 09.13.2021