Links for Keyword: Language

Links 1 - 20 of 689

Ian Sample Science editor Dogs understand what certain words stand for, according to researchers who monitored the brain activity of willing pooches while they were shown balls, slippers, leashes and other highlights of the domestic canine world. The finding suggests that the dog brain can reach beyond commands such as “sit” and “fetch”, and the frenzy-inducing “walkies”, to grasp the essence of nouns, or at least those that refer to items the animals care about. “I think the capacity is there in all dogs,” said Marianna Boros, who helped arrange the experiments at Eötvös Loránd University in Hungary. “This changes our understanding of language evolution and our sense of what is uniquely human.” Scientists have long been fascinated by whether dogs can truly learn the meanings of words and have built up some evidence to back the suspicion. A survey in 2022 found that dog owners believed their furry companions responded to between 15 and 215 words. More direct evidence for canine cognitive prowess came in 2011 when psychologists in South Carolina reported that after three years of intensive training, a border collie called Chaser had learned the names of more than 1,000 objects, including 800 cloth toys, 116 balls and 26 Frisbees. However, studies have said little about what is happening in the canine brain when it processes words. To delve into the mystery, Boros and her colleagues invited 18 dog owners to bring their pets to the laboratory along with five objects the animals knew well. These included balls, slippers, Frisbees, rubber toys, leads and other items. At the lab, the owners were instructed to say words for objects before showing their dog either the correct item or a different one. For example, an owner might say “Look, here’s the ball”, but hold up a Frisbee instead. The experiments were repeated multiple times with matching and non-matching objects. © 2024 Guardian News & Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29214 - Posted: 03.26.2024

By Cathleen O’Grady Why do some children learn to talk earlier than others? Linguists have pointed to everything from socioeconomic status to gender to the number of languages their parents speak. But a new study finds a simpler explanation. An analysis of nearly 40,000 hours of audio recordings from children around the world suggests kids speak more when the adults around them are more talkative, which may also give them a larger vocabulary early in life. Factors such as social class appear to make no difference, researchers report this month in the Proceedings of the National Academy of Sciences. The paper is a “wonderful, impactful, and much needed contribution to the literature,” says Ece Demir-Lira, a developmental scientist at the University of Iowa who was not involved in the work. By looking at real-life language samples from six different continents, she says, the study provides a global view of language development sorely lacking from the literature. Most studies on language learning have focused on children in Western, industrialized nations. To build a more representative data set, Harvard University developmental psychologist Elika Bergelson and her collaborators scoured the literature for studies that had used LENA devices: small audio recorders that babies can wear—tucked into a pocket on a specially made vest—for days at a time. These devices function as a kind of “talk pedometer,” with an algorithm that estimates how much its wearer speaks, as well as how much language they hear in their environment—from parents, other adults, and even siblings. The team asked 18 research groups across 12 countries whether they would share their data from the devices, leaving them with a whopping 2865 days of recordings from 1001 children. Many of the kids, who ranged from 2 months to 4 years old, were from English-speaking families, but the data also included speakers of Dutch, Spanish, Vietnamese, and Finnish, as well as Yélî Dnye (Papua New Guinea), Wolof (Senegal), and Tsimané (Bolivia). Combining these smaller data sets gave the researchers a more powerful, diverse sample.
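
As a rough illustration of the relationship the study reports (more adult talk, more child vocalization), here is a minimal Python sketch that correlates LENA-style automated counts of adult words heard with counts of child vocalizations. The numbers and column names are invented for illustration; the published analysis pooled data across sites and accounted for factors such as age, gender and socioeconomic status.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-child-day summaries of the sort LENA software exports:
# automated estimates of adult words the child heard and of the child's
# own vocalizations over a day-long recording.
df = pd.DataFrame({
    "adult_word_count": [8200, 15400, 6100, 21000, 12500, 9800],
    "child_vocalizations": [310, 720, 250, 950, 610, 400],
})

# Simple association between adult talkativeness and child output.
# The actual study's models also adjust for child age and other factors.
r, p = stats.pearsonr(df["adult_word_count"], df["child_vocalizations"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```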

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 29061 - Posted: 12.22.2023

By Sonia Shah Can a mouse learn a new song? Such a question might seem whimsical. Though humans have lived alongside mice for at least 15,000 years, few of us have ever heard mice sing, because they do so in frequencies beyond the range detectable by human hearing. As pups, their high-pitched songs alert their mothers to their whereabouts; as adults, they sing in ultrasound to woo one another. For decades, researchers considered mouse songs instinctual, the fixed tunes of a windup music box, rather than the mutable expressions of individual minds. But no one had tested whether that was really true. In 2012, a team of neurobiologists at Duke University, led by Erich Jarvis, a neuroscientist who studies vocal learning, designed an experiment to find out. The team surgically deafened five mice and recorded their songs in a mouse-size sound studio, tricked out with infrared cameras and microphones. They then compared sonograms of the songs of deafened mice with those of hearing mice. If the mouse songs were innate, as long presumed, the surgical alteration would make no difference at all. Jarvis and his researchers slowed down the tempo and shifted the pitch of the recordings, so that they could hear the songs with their own ears. Those of the intact mice sounded “remarkably similar to some bird songs,” Jarvis wrote in a 2013 paper that described the experiment, with whistlelike syllables similar to those in the songs of canaries and the trills of dolphins. Not so the songs of the deafened mice: Deprived of auditory feedback, their songs became degraded, rendering them nearly unrecognizable. They sounded, the scientists noted, like “squawks and screams.” Not only did the tunes of a mouse depend on its ability to hear itself and others, but also, as the team found in another experiment, a male mouse could alter the pitch of its song to compete with other male mice for female attention. Inside these murine skills lay clues to a puzzle many have called “the hardest problem in science”: the origins of language. In humans, “vocal learning” is understood as a skill critical to spoken language. Researchers had already discovered the capacity for vocal learning in species other than humans, including in songbirds, hummingbirds, parrots, cetaceans such as dolphins and whales, pinnipeds such as seals, elephants and bats. But given the centuries-old idea that a deep chasm separated human language from animal communications, most scientists understood the vocal learning abilities of other species as unrelated to our own — as evolutionarily divergent as the wing of a bat is to that of a bee. The apparent absence of intermediate forms of language — say, a talking animal — left the question of how language evolved resistant to empirical inquiry. © 2023 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28921 - Posted: 09.21.2023

By R. Douglas Fields One day, while threading a needle to sew a button, I noticed that my tongue was sticking out. The same thing happened later, as I carefully cut out a photograph. Then another day, as I perched precariously on a ladder painting the window frame of my house, there it was again! What’s going on here? I’m not deliberately protruding my tongue when I do these things, so why does it keep making appearances? After all, it’s not as if that versatile lingual muscle has anything to do with controlling my hands. Right? Yet as I would learn, our tongue and hand movements are intimately interrelated at an unconscious level. This peculiar interaction’s deep evolutionary roots even help explain how our brain can function without conscious effort. A common explanation for why we stick out our tongue when we perform precision hand movements is something called motor overflow. In theory, it can take so much cognitive effort to thread a needle (or perform other demanding fine motor skills) that our brain circuits get swamped and impinge on adjacent circuits, activating them inappropriately. It’s certainly true that motor overflow can happen after neural injury or in early childhood when we are learning to control our bodies. But I have too much respect for our brains to buy that “limited brain bandwidth” explanation. How, then, does this peculiar hand-mouth cross-talk really occur? Tracing the neural anatomy of tongue and hand control to pinpoint where a short circuit might happen, we find first of all that the two are controlled by completely different nerves. This makes sense: A person who suffers a spinal cord injury that paralyzes their hands does not lose their ability to speak. That’s because the tongue is controlled by a cranial nerve, but the hands are controlled by spinal nerves. Simons Foundation

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 5: The Sensorimotor System
Link ID: 28894 - Posted: 08.30.2023

By McKenzie Prillaman When speaking to young kids, humans often use squeaky, high-pitched baby talk. It turns out that some dolphins do, too. Bottlenose dolphin moms modify their individually distinctive whistles when their babies are nearby, researchers report June 26 in the Proceedings of the National Academy of Sciences. This “parentese” might enhance attention, bonding and vocal learning in calves, as it seems to do in humans. During the first few months of life, each common bottlenose dolphin (Tursiops truncatus) develops a unique tune, or signature whistle, akin to a name (SN: 7/22/13). The dolphins shout out their own “names” in the water “likely as a way to keep track of each other,” says marine biologist Laela Sayigh of the Woods Hole Oceanographic Institution. But dolphin moms seem to tweak that tune in the presence of their calves, which tend to stick by mom’s side for three to six years. It’s a change that Sayigh first noticed in a 2009 study published by her student. But “it was just one little piece of this much larger study,” she says. To follow up on that observation, Sayigh and colleagues analyzed signature whistles from 19 female dolphins both with and without their babies close by. Audio recordings were captured from a wild population that lives near Sarasota Bay, Fla., during catch-and-release health assessments that occurred from 1984 to 2018. The researchers examined 40 instances of each dolphin’s signature whistle, verified by the unique way each vocalization’s frequencies change over time. Half of each dolphin’s whistles were voiced in the presence of her baby. When youngsters were around, the moms’ whistles contained, on average, a higher maximum and slightly lower minimum pitch compared with those uttered in the absence of calves, contributing to an overall widened pitch range. © Society for Science & the Public 2000–2023.
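
To make the reported contrast concrete, here is a minimal sketch, with made-up numbers, of the basic comparison: the maximum frequency of a mother's signature whistles when her calf is nearby versus when it is not. The real analysis was paired within each of the 19 mothers and also examined minimum frequency and overall pitch range.

```python
import numpy as np
from scipy import stats

# Hypothetical per-whistle maximum frequencies (kHz) for one mother,
# split by whether her calf was present during the recording.
max_freq_with_calf = np.array([22.1, 23.4, 24.0, 22.8, 23.9, 23.1])
max_freq_without_calf = np.array([20.5, 21.2, 20.9, 21.8, 21.0, 21.5])

# The study's design paired whistles within each mother; a simple
# two-sample test stands in for that paired design here.
t, p = stats.ttest_ind(max_freq_with_calf, max_freq_without_calf)
print(f"mean max frequency with calf: {max_freq_with_calf.mean():.1f} kHz, "
      f"without calf: {max_freq_without_calf.mean():.1f} kHz (t = {t:.2f}, p = {p:.4f})")
```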

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28835 - Posted: 06.28.2023

By Natalia Mesa Most people will learn one or two languages in their lives. But Vaughn Smith, a 47-year-old carpet cleaner from Washington, D.C., speaks 24. Smith is a hyperpolyglot—a rare individual who speaks more than 10 languages. In a new brain imaging study, researchers peered inside the minds of polyglots like Smith to tease out how language-specific regions in their brains respond to hearing different languages. Familiar languages elicited a stronger reaction than unfamiliar ones, they found, with one important exception: native languages, which provoked relatively little brain activity. This, the authors note, suggests there’s something special about the languages we learn early in life. This study “contributes to our understanding of how our brain learns new things,” says Augusto Buchweitz, a cognitive neuroscientist at the University of Connecticut, Storrs, who was not involved in the work. “The earlier you learn something, the more your brain [adapts] and probably uses less resources.” Scientists have largely ignored what’s going on inside the brains of polyglots—people who speak more than five languages—says Ev Fedorenko, a cognitive neuroscientist at the Massachusetts Institute of Technology who led the new study. “There’s oodles of work on individuals whose language systems are not functioning properly,” she says, but almost none on people with advanced language skills. That’s partly because they account for only 1% of people globally, making it difficult to find enough participants for research. But studying this group can help linguists understand the human “language network,” a set of specialized brain areas located in the left frontal and temporal lobes. These areas help humans with the most basic aspect of understanding language: connecting sounds with meaning, Fedorenko says.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28654 - Posted: 02.04.2023

By Jennifer Szalai “‘R’s’ are hard,” John Hendrickson writes in his new memoir, “Life on Delay: Making Peace With a Stutter,” committing to paper a string of words that would have caused him trouble had he tried to say them out loud. In November 2019, Hendrickson, an editor at The Atlantic, published an article about then-presidential candidate Joe Biden, who talked frequently about “beating” his childhood stutter — a bit of hyperbole that the article finally laid to rest. Biden insisted on his redemptive narrative, even though Hendrickson, who has stuttered since he was 4, could tell when Biden repeated (“I-I-I-I-I”) or blocked (“…”) on certain sounds. The article went viral, putting Hendrickson in the position of being invited to go on television — a “nightmare,” he said on MSNBC at the time, though it did lead to a flood of letters from fellow stutterers, a number of whom he interviewed for this book. “Life on Delay” traces an arc from frustration and isolation to acceptance and community, recounting a lifetime of bullying and well-meaning but ineffectual interventions and what Hendrickson calls “hundreds of awful first impressions.” When he depicts scenes from his childhood it’s often in a real-time present tense, putting us in the room with the boy he was, more than two decades before. Hendrickson also interviews people: experts, therapists, stutterers, his own parents. He calls up his kindergarten teacher, his childhood best friend and the actress Emily Blunt. He reaches out to others who have published personal accounts of stuttering, including The New Yorker’s Nathan Heller and Katharine Preston, the author of a memoir titled “Out With It.” We learn that it’s only been since the turn of the millennium or so that stuttering has been understood as a neurological disorder; that for 75 percent of children who stutter, “the issue won’t follow them to adulthood”; that there’s still disagreement over whether “disfluency” is a matter of language or motor control, because “the research is still a bit of a mess.” © 2023 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 28643 - Posted: 01.27.2023

Holly Else An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December [1]. Researchers are divided over the implications for science. “I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds. The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use. Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint [2] and an editorial [3] written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them. The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28629 - Posted: 01.14.2023

Xiaofan Lei What comes to mind when you think of someone who stutters? Is that person male or female? Are they weak and nervous, or powerful and heroic? If you have a choice, would you like to marry them, introduce them to your friends or recommend them for a job? Your attitudes toward people who stutter may depend partly on what you think causes stuttering. If you think that stuttering is due to psychological causes, such as being nervous, research suggests that you are more likely to distance yourself from those who stutter and view them more negatively. I am a person who stutters and a doctoral candidate in speech, language and hearing sciences. Growing up, I tried my best to hide my stuttering and to pass as fluent. I avoided sounds and words that I might stutter on. I avoided ordering the dishes I wanted to eat at the school cafeteria to avoid stuttering. I asked my teacher to not call on me in class because I didn’t want to deal with the laughter from my classmates when they heard my stutter. Those experiences motivated me to investigate stuttering so that I can help people who stutter, including myself, to better cope with the condition. In writing about what the scientific field has to say about stuttering and its biological causes, I hope I can reduce the stigma and misunderstanding surrounding the disorder. The most recognizable characteristics of developmental stuttering are the repetitions, prolongations and blocks in people’s speech. People who stutter may also experience muscle tension during speech and exhibit secondary behaviors, such as tics and grimaces. © 2010–2023, The Conversation US, Inc.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28626 - Posted: 01.12.2023

By Jonathan Moens An artificial intelligence can decode words and sentences from brain activity with surprising — but still limited — accuracy. Using only a few seconds of brain activity data, the AI guesses what a person has heard. It lists the correct answer in its top 10 possibilities up to 73 percent of the time, researchers found in a preliminary study. The AI’s “performance was above what many people thought was possible at this stage,” says Giovanni Di Liberto, a computer scientist at Trinity College Dublin who was not involved in the research. Developed at the parent company of Facebook, Meta, the AI could eventually be used to help thousands of people around the world unable to communicate through speech, typing or gestures, researchers report August 25 at arXiv.org. That includes many patients in minimally conscious, locked-in or “vegetative states” — what’s now generally known as unresponsive wakefulness syndrome (SN: 2/8/19). Most existing technologies to help such patients communicate require risky brain surgeries to implant electrodes. This new approach “could provide a viable path to help patients with communication deficits … without the use of invasive methods,” says neuroscientist Jean-Rémi King, a Meta AI researcher currently at the École Normale Supérieure in Paris. King and his colleagues trained a computational tool to detect words and sentences on 56,000 hours of speech recordings from 53 languages. The tool, also known as a language model, learned how to recognize specific features of language both at a fine-grained level — think letters or syllables — and at a broader level, such as a word or sentence. © Society for Science & the Public 2000–2022.
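
For readers unfamiliar with the "top 10" figure, here is a minimal Python sketch (not Meta's code) of how such a top-k score can be computed once a decoder has assigned a score to every candidate word or sentence for each segment of brain activity. The arrays and the `top_k_accuracy` helper are hypothetical.

```python
import numpy as np

def top_k_accuracy(scores: np.ndarray, true_idx: np.ndarray, k: int = 10) -> float:
    """Fraction of segments whose true target appears among the k
    highest-scoring candidates.

    scores   : (n_segments, n_candidates) decoder scores, higher = more likely
    true_idx : (n_segments,) index of the candidate the listener actually heard
    """
    top_k = np.argsort(scores, axis=1)[:, -k:]         # k best candidates per segment
    hits = np.any(top_k == true_idx[:, None], axis=1)  # was the true one among them?
    return float(hits.mean())

# Toy example: 5 brain-activity segments scored against 50 candidate sentences.
rng = np.random.default_rng(0)
scores = rng.normal(size=(5, 50))
true_idx = rng.integers(0, 50, size=5)
print(f"top-10 accuracy: {top_k_accuracy(scores, true_idx):.2f}")
```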

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 5: The Sensorimotor System
Link ID: 28470 - Posted: 09.10.2022

Jason Bruck Bottlenose dolphins’ signature whistles just passed an important test in animal psychology. A new study by my colleagues and me has shown that these animals may use their whistles as namelike concepts. By presenting urine and the sounds of signature whistles to dolphins, my colleagues Vincent Janik, Sam Walmsley and I recently showed that these whistles act as representations of the individuals who own them, similar to human names. For behavioral biologists like us, this is an incredibly exciting result. It is the first time this type of representational naming has been found in any other animal aside from humans. When you hear your friend’s name, you probably picture their face. Likewise, when you smell a friend’s perfume, that can also elicit an image of the friend. This is because humans build mental pictures of each other using more than just one sense. All of the different information from your senses that is associated with a person converges to form a mental representation of that individual - a name with a face, a smell and many other sensory characteristics. Within the first few months of life, dolphins invent their own specific identity calls – called signature whistles. Dolphins often announce their location to or greet other individuals in a pod by sending out their own signature whistles. But researchers have not known if, when a dolphin hears the signature whistle of a dolphin they are familiar with, they actively picture the calling individual. My colleagues and I were interested in determining if dolphin calls are representational in the same way human names invoke many thoughts of an individual. Because dolphins cannot smell, they rely principally on signature whistles to identify each other in the ocean. Dolphins can also copy another dolphin’s whistles as a way to address each other. My previous research showed that dolphins have great memory for each other’s whistles, but scientists argued that a dolphin might hear a whistle, know it sounds familiar, but not remember who the whistle belongs to. My colleagues and I wanted to determine if dolphins could associate signature whistles with the specific owner of that whistle. This would address whether or not dolphins remember and hold representations of other dolphins in their minds. © 2010–2022, The Conversation US, Inc.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28441 - Posted: 08.24.2022

By Oliver Whang Read this sentence aloud, if you’re able. As you do, a cascade of motion begins, forcing air from your lungs through two muscles, which vibrate, sculpting sound waves that pass through your mouth and into the world. These muscles are called vocal cords, or vocal folds, and their vibrations form the foundations of the human voice. They also speak to the emergence and evolution of human language. For several years, a team of scientists based mainly in Japan used imaging technology to study the physiology of the throats of 43 species of primates, from baboons and orangutans to macaques and chimpanzees, as well as humans. All the species but one had a similar anatomical structure: an extra set of protruding muscles, called vocal membranes or vocal lips, just above the vocal cords. The exception was Homo sapiens. The researchers also found that the presence of vocal lips destabilized the other primates’ voices, rendering their tone and timbre more chaotic and unpredictable. Animals with vocal lips have a more grating, less controlled baseline of communication, the study found; humans, lacking the extra membranes, can exchange softer, more stable sounds. The findings were published on Thursday in the journal Science. “It’s an interesting little nuance, this change to the human condition,” said Drew Rendall, a biologist at the University of New Brunswick who was not involved in the research. “The addition, if you want to think of it this way, is actually a subtraction.” That many primates have vocal lips has long been known, but their role in communication has not been entirely clear. In 1984, Sugio Hayama, a biologist at Kyoto University, videotaped the inside of a chimpanzee’s throat to study its reflexes under anesthesia. The video also happened to capture a moment when the chimp woke and began hollering, softly at first, then with more power. Decades later, Takeshi Nishimura, a former student of Dr. Hayama and now a biologist at Kyoto University and the principal investigator of the recent research, studied the footage with renewed interest. He found that the chimp’s vocal lips and vocal cords were vibrating together, which added a layer of mechanical complexity to the chimp’s voice that made it difficult to fine-tune. © 2022 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28426 - Posted: 08.11.2022

By Virginia Morell Babies don’t babble to sound cute—they’re taking their first steps on the path to learning language. Now, a study shows parrot chicks do the same. Although the behavior has been seen in songbirds and two mammalian species, finding it in these birds is important, experts say, as they may provide the best nonhuman model for studying how we begin to learn language. The find is “exciting,” says Irene Pepperberg, a comparative psychologist at Hunter College not involved with the work. Pepperberg herself discovered something like babbling in a famed African gray parrot named Alex, which she studied for more than 30 years. By unearthing the same thing in another parrot species and in the wild, she says, the team has shown this ability is widespread in the birds. In this study, the scientists focused on green-rumped parrotlets (Forpus passerinus)—a smaller species than Alex, found from Venezuela to Brazil. The team investigated a population at Venezuela’s Hato Masaguaral research center, where scientists maintain more than 100 artificial nesting boxes. Like other parrots, songbirds, and humans (and a few other mammal species), parrotlets are vocal learners. They master their calls by listening and mimicking what they hear. The chicks in the new study started to babble at 21 days, according to camcorders installed in a dozen of their nests. They increased the complexity of their sounds dramatically over the next week, the scientists report today in the Proceedings of the Royal Society B. The baby birds uttered strings of soft peeps, clicks, and grrs, but they weren’t communicating with their siblings or parents, says lead author Rory Eggleston, a Ph.D. student at Utah State University. Rather, like a human infant babbling quietly in their crib, a parrotlet chick made the sounds alone (see video). Indeed, most chicks started their babbling bouts when their siblings were asleep, often doing so without even opening their beaks, says Eggleston, who spent hours analyzing videos of the birds. © 2022 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28343 - Posted: 06.01.2022

By Laura Sanders Young kids’ brains are especially tuned to their mothers’ voices. Teenagers’ brains, in their typical rebellious glory, are most decidedly not. That conclusion, described April 28 in the Journal of Neuroscience, may seem laughably obvious to parents of teenagers, including neuroscientist Daniel Abrams of Stanford University School of Medicine. “I have two teenaged boys myself, and it’s a kind of funny result,” he says. But the finding may reflect something much deeper than a punch line. As kids grow up and expand their social connections beyond their family, their brains need to be attuned to that growing world. “Just as an infant is tuned into a mom, adolescents have this whole other class of sounds and voices that they need to tune into,” Abrams says. He and his colleagues scanned the brains of 7- to 16-year-olds as they heard the voices of either their mothers or unfamiliar women. To simplify the experiment down to just the sound of a voice, the words were gibberish: teebudieshawlt, keebudieshawlt and peebudieshawlt. As the children and teenagers listened, certain parts of their brains became active. Previous experiments by Abrams and his colleagues have shown that certain regions of the brains of kids ages 7 to 12 — particularly those parts involved in detecting rewards and paying attention — respond more strongly to mom’s voice than to a voice of an unknown woman. “In adolescence, we show the exact opposite of that,” Abrams says. In these same brain regions in teens, unfamiliar voices elicited greater responses than the voices of their own dear mothers. The shift from mother to other seems to happen between ages 13 and 14. Society for Science & the Public 2000–2022.
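
As a sketch of the underlying contrast, the snippet below compares, across a handful of hypothetical teenage participants, the mean response of a reward-related brain region to the mother's voice versus unfamiliar voices using a paired test. The activation values are invented; they simply mirror the reported direction of the effect in adolescents.

```python
import numpy as np
from scipy import stats

# Hypothetical mean activation (arbitrary units) per teenage participant
# in a reward-related region, for each voice condition.
mom_voice = np.array([0.42, 0.38, 0.51, 0.45, 0.40, 0.47])
unfamiliar_voice = np.array([0.55, 0.49, 0.60, 0.52, 0.58, 0.50])

# Paired comparison across participants; in teens the unfamiliar voices
# elicit the larger response, the reverse of the pattern in younger kids.
t, p = stats.ttest_rel(unfamiliar_voice, mom_voice)
diff = np.mean(unfamiliar_voice - mom_voice)
print(f"unfamiliar minus mom: {diff:.2f} (t = {t:.2f}, p = {p:.4f})")
```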

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 15: Language and Lateralization
Link ID: 28307 - Posted: 04.30.2022

By Katharine Q. Seelye Ursula Bellugi, a pioneer in the study of the biological foundations of language who was among the first to demonstrate that sign language was just as complex, abstract and systematic as spoken language, died on Sunday in San Diego. She was 91. Her death, at an assisted living facility, was confirmed by her son Rob Klima. Dr. Bellugi was a leading researcher at the Salk Institute for Biological Studies in San Diego for nearly five decades and, for much of that time, was director of its laboratory for cognitive neuroscience. She made significant contributions in three main areas: the development of language in children; the linguistic structure and neurological basis of American Sign Language; and the social behavior and language abilities of people with a rare genetic disorder, Williams syndrome. “She leaves an indelible legacy of shedding light on how humans communicate and socialize with each other,” Rusty Gage, president of the Salk Institute, said in a statement. Dr. Bellugi’s work, much of it done in collaboration with her husband, Edward S. Klima, advanced understanding of the brain and the origins of language, both signed and spoken. American Sign Language was first described as a true language in 1960 by William C. Stokoe Jr., a professor at Gallaudet University, the world’s only liberal arts university devoted to deaf people. But he was ridiculed and attacked for that claim. Dr. Bellugi and Dr. Klima, who died in 2008, demonstrated conclusively that the world’s signed languages — of which there are more than 100 — were actual languages in their own right, not just translations of spoken languages. Dr. Bellugi, who focused on American Sign Language, established that these linguistic systems were passed down, in all their complexity, from one generation of deaf people to the next. For that reason, the scientific community regards her as the founder of the neurobiology of American Sign Language. The couple’s work led to a major discovery at the Salk lab: that the left hemisphere of the brain has an innate predisposition for language, whether spoken or signed. That finding gave scientists fresh insight into how the brain learns, interprets and forgets language. © 2022 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28296 - Posted: 04.23.2022

By Tess Joosse My dog Leo clearly knows the difference between my voice and the barks of the beagle next door. When I speak, he looks at me with love; when our canine neighbor makes his mind known, Leo barks back with disdain. A new study backs up what I and my fellow dog owners have long suspected: Dogs’ brains process human and canine vocalizations differently, suggesting they evolved to recognize our voices from their own. “The fact that dogs use auditory information alone to distinguish between human and dog sound is significant,” says Jeffrey Katz, a cognitive neuroscientist at Auburn University who is not involved with the work. Previous research has found that dogs can match human voices with expressions. When played an audio clip of a lady laughing, for example, they’ll often look at a photo of a smiling woman. But how exactly the canine brain processes sounds isn’t clear. MRI has shown certain regions of the dog brain are more active when a pup hears another dog whine or bark. But those images can’t reveal exactly when neurons in the brain are firing, and whether they fire differently in response to different noises. So in the new study, Anna Bálint, a canine neuroscientist at Eötvös Loránd University, turned to an electroencephalogram, which can measure individual brain waves. She and her colleagues recruited 17 family dogs, including several border collies, golden retrievers, and a German shepherd, that were previously taught to lie still for several minutes at a time. The scientists attached electrodes to each dog’s head to record its brain response—not an easy task, it turns out. Unlike humans’ bony noggins, dog heads have lots of muscles that can obstruct a clear readout, Bálint says. © 2022 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28270 - Posted: 04.06.2022

By Jessica Contrera The carpet cleaner heaves his machine up the stairs, untangles its hoses and promises to dump the dirty water only in the approved toilet. Another day scrubbing rugs for less than $20 an hour. Another Washington area house with overflowing bookshelves and walls covered in travel mementos from places he would love to go one day. But this was not that day. “Tell me about this stain,” 46-year-old Vaughn Smith asks his clients. “Well,” says one of the homeowners, “Schroeder rubbed his bottom across it.” Vaughn knows just what to do about that, and the couple, Courtney Stamm and Kelly Widelska, know they can trust him to do it. They’d been hiring him for years, once watching him erase even a splattered Pepto Bismol stain. But this time when Vaughn called to confirm their January appointment, he quietly explained that there was something about himself that he’d never told them. That he rarely told anyone. And well, a reporter was writing a story about it. Could he please bring her along? Now as they listen to Vaughn discuss the porousness of wool, and the difference between Scotchgard and sanitizer, they can’t help but look at him differently. Once the stool stain is solved, Kelly just has to ask. “So, how many languages do you speak?” “Oh goodness,” Vaughn says. “Eight, fluently.” “Eight?” Kelly marvels. “Eight,” Vaughn confirms. English, Spanish, Bulgarian, Czech, Portuguese, Romanian, Russian and Slovak. “But if you go by like, different grades of how much conversation,” he explains, “I know about 25 more.” Vaughn glances at me. He is still underselling his abilities. By his count, it is actually 37 more languages, with at least 24 he speaks well enough to carry on lengthy conversations. He can read and write in eight alphabets and scripts. He can tell stories in Italian and Finnish and American Sign Language. He’s teaching himself Indigenous languages, from Mexico’s Nahuatl © 1996-2022 The Washington Post

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 28269 - Posted: 04.06.2022

Jonathan Franklin Legendary actor Bruce Willis announced Wednesday his departure from the big screen following his diagnosis with aphasia, which is "impacting his cognitive abilities," his family said in a statement. While details of what led to Willis' aphasia diagnosis are unknown at this time, medical experts stress the importance of understanding the brain condition and how it is treated, which depends on its severity. "[At some point], people will know somebody who's had a stroke and has aphasia," Dr. Swathi Kiran, professor of neurorehabilitation at Boston University, told NPR. Aphasia is defined as a condition that affects the ability to speak, write and understand language, according to the Mayo Clinic. The brain disorder can occur after strokes or head injuries — and can even lead in some cases to dementia. "As a result of this and with much consideration Bruce is stepping away from the career that has meant so much to him," his daughter, Rumer Willis, said on Instagram. "This is a really challenging time for our family and we are so appreciative of your continued love, compassion and support." Medical experts say the impacts of aphasia can vary, depending on the person's diagnosis. But mainly, the condition affects a person's ability to communicate — whether it's written, spoken or both. People living with aphasia can experience changes in their ability to communicate: they may have difficulty finding words, use words out of order or speak only in short phrases, according to the American Speech-Language-Hearing Association. © 2022 npr

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28264 - Posted: 04.02.2022

By Bruce Bower Human language, in its many current forms, may owe an evolutionary debt to our distant ape ancestors who sounded off in groups of scattered individuals. Wild orangutans’ social worlds mold how they communicate vocally, much as local communities shape the way people speak, researchers report March 21 in Nature Ecology & Evolution. This finding suggests that social forces began engineering an expanding inventory of communication sounds among ancient ancestors of apes and humans, laying a foundation for the evolution of language, say evolutionary psychologist Adriano Lameira, of the University of Warwick in England, and his colleagues. Lameira’s group recorded predator-warning calls known as “kiss-squeaks” — which typically involve drawing in breath through pursed lips — of 76 orangutans from six populations living on the islands of Borneo and Sumatra, where they face survival threats (SN: 2/15/18). The team tracked the animals and estimated their population densities from 2005 through 2010, with at least five consecutive months of observations and recordings in each population. Analyses of recordings then revealed how much individuals’ kiss-squeaks changed or remained the same over time. Orangutans in high-density populations, which up the odds of frequent social encounters, concoct many variations of kiss-squeaks, the researchers report. Novel reworkings of kiss-squeaks usually get modified further by other orangutans or drop out of use in crowded settings, they say. © Society for Science & the Public 2000–2022.

Related chapters from BN: Chapter 6: Evolution of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28258 - Posted: 03.30.2022