Links for Keyword: Language

Links 1 - 20 of 702

Nell Greenfieldboyce Putting the uniquely human version of a certain gene into mice changed the way that those animals vocalized to each other, suggesting that this gene may play a role in speech and language. Mice make a lot of calls in the ultrasonic range that humans can't hear, and the high-frequency vocalizations made by the genetically altered mice were more complex and showed more variation than those made by normal mice, according to a new study in the journal Nature Communications. The fact that the genetic change produced differences in vocal behavior was "really exciting," says Erich Jarvis, a scientist at Rockefeller University in New York who worked on this research. Still, he cautioned, "I don't think that one gene is going to be responsible — poof! — and you've got spoken language." For years, scientists have been trying to find the different genes that may have been involved in the evolution of speech, as language is one of the key features that sets humans apart from the rest of the animal kingdom. "There are other genes implicated in language that have not been human-specific," says Robert Darnell, a neuroscientist and physician at Rockefeller University, noting that one gene called FOXP2 has been linked to speech disorders. He was interested in a different gene called NOVA1, which he has studied for over two decades. NOVA1 is active in the brain, where it produces a protein that can affect the activity of other genes. NOVA1 is found in living creatures from mammals to birds, but humans have a unique variant. Yoko Tajima, a postdoctoral associate in Darnell's lab, led an effort to put this variant into mice, to see what effect it would have. © 2025 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29678 - Posted: 02.19.2025

By Emily Anthes The English language is full of wonderful words, from “anemone” and “aurora” to “zenith” and “zodiac.” But these are special occasion words, sprinkled sparingly into writing and conversation. The words in heaviest rotation are short and mundane. And they follow a remarkable statistical rule, which is universal across human languages: The most common word, which in English is “the,” is used about twice as frequently as the second most common word (“of,” in English), three times as frequently as the third most common word (“and”), continuing in that pattern. Now, an international, interdisciplinary team of scientists has found that the intricate songs of humpback whales, which can spread rapidly from one population to another, follow the same rule, which is known as Zipf’s law. The scientists are careful to note that whale song is not equivalent to human language. But the findings, they argue, suggest that forms of vocal communication that are complex and culturally transmitted may have shared structural properties. “We expect them to evolve to be easy to learn,” said Simon Kirby, an expert on language evolution at the University of Edinburgh and an author of the new study. The results were published on Thursday in the journal Science. “We think of language as this culturally evolving system that has to essentially be passed on by its hosts, which are humans,” Dr. Kirby added. “What’s so gratifying for me is to see that same logic seems to also potentially apply to whale song.” Zipf’s law, which was named for the linguist George Kingsley Zipf, holds that in any given language the frequency of a word is inversely proportional to its rank. There is still considerable debate over why this pattern exists and how meaningful it is. But some research suggests that this kind of skewed word distribution can make language easier to learn. © 2025 The New York Times Company
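The rank-frequency relationship described above can be made concrete with a short sketch. The following Python snippet is purely illustrative and is not from the study (the sample text and the crude tokenizer are placeholders; the researchers applied the analysis to whale-song units, not English words). It counts word frequencies in a text and compares each observed frequency with the Zipf prediction, frequency of the top word divided by rank.

# Minimal illustrative sketch of Zipf's law: the r-th most common word
# is predicted to occur roughly f(1) / r times, where f(1) is the
# frequency of the most common word. Placeholder text and tokenizer.
from collections import Counter
import re

text = """
The whale sang and the sea listened, and the song of the whale
spread from one ocean to another as the seasons turned.
"""  # toy text; any large corpus shows the pattern far more clearly

words = re.findall(r"[a-z']+", text.lower())   # crude tokenizer
counts = Counter(words).most_common()          # (word, frequency), sorted

top_freq = counts[0][1]
print(f"{'rank':>4} {'word':<10} {'observed':>8} {'Zipf predicted':>15}")
for rank, (word, freq) in enumerate(counts[:10], start=1):
    predicted = top_freq / rank                # Zipf: f(r) ≈ f(1) / r
    print(f"{rank:>4} {word:<10} {freq:>8} {predicted:>15.1f}")

On a toy text this small the fit is rough; the inverse-rank curve emerges clearly only on large corpora, which is the regime in which Zipf's law (and the whale-song analysis) is evaluated.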

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29662 - Posted: 02.08.2025

By Avery Schuyler Nunn Migratory songbirds may talk to one another more than we thought as they wing through the night. Each fall, hundreds of millions of birds from dozens of species co-migrate, some of them making dangerous journeys across continents. Come spring, they return home. Scientists have long believed that these songbirds rely on instinct and experience alone to make the trek. But new research from a team of ornithologists at the University of Illinois suggests they may help one another out—even across species—through their nocturnal calls. “They broadcast vocal pings into the sky, potentially sharing information about who they are and what lies ahead,” says ornithologist Benjamin Van Doren of the University of Illinois, Urbana-Champaign, and a co-author of the study, published in Current Biology. Using ground-based microphones across 26 sites in eastern North America, Van Doren and his team recorded over 18,300 hours of nocturnal flight calls from 27 different species of birds—brief, high-pitched vocalizations that some warblers, thrushes, and sparrows emit while flying. To process the enormous dataset of calls, they used machine-learning tools, including a customized version of Merlin, the Cornell Lab of Ornithology’s bird-call identification app. The analysis revealed that birds of different species were flying in close proximity and calling to one another in repeated patterns that suggested a kind of code. Flight proximity was closest between migrating songbird species that made similar calls in pitch and rhythm, traveled at similar speeds, and had similar wing shapes. © 2025 NautilusNext Inc.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29661 - Posted: 02.08.2025

By Janna Levin It’s fair to say that enjoyment of a podcast would be severely limited without the human capacity to create and understand speech. That capacity has often been cited as a defining characteristic of our species, and one that sets us apart in the long history of life on Earth. Yet we know that other species communicate in complex ways. Studies of the neurological foundations of language suggest that birdsong, or communication among bats or elephants, originates with brain structures similar to our own. So why do some species vocalize while others don’t? In this episode, Erich Jarvis, who studies behavior and neurogenetics at the Rockefeller University, chats with Janna Levin about the surprising connections between human speech, birdsong and dance. JANNA LEVIN: All animals exhibit some form of communication, from the primitive hiss of a lizard to the complex gestures natural to chimps, or the songs shared by whales. But human language does seem exceptional, a vast and discrete cognitive leap. Yet recent research is finding surprising neurological connections between our expressive speech and the types of communication innate to other animals, giving us new ideas about the biological and developmental origins of language. Erich is a professor at the Rockefeller University and a Howard Hughes Medical Institute investigator. At Rockefeller, he directs the Field Research Center of Ethology and Ecology. He also directs the Neurogenetics Lab of Language and codirects the Vertebrate Genome Lab, where he studies song-learning birds and other species to gain insight into the mechanisms underlying language and vocal learning. ERICH JARVIS: So, the first part: Language is built-in genetically in us humans. We’re born with the capacity to learn how to produce and how to understand language, and pass it on culturally from one generation to the next. The actual detail is learned, but the actual plan in the brain is there. Second part of your question: Is it, you know, special or unique to humans? It is specialized in humans, but certainly many components of what gives rise to language is not unique to humans. There’s a spectrum of abilities out there in other species that we share some aspects of with other species. © 2024 Simons Foundation

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29572 - Posted: 11.23.2024

Nicola Davis Science correspondent Whether it is news headlines or WhatsApp messages, modern humans are inundated with short pieces of text. Now researchers say they have unpicked how we get their gist in a single glance. Prof Liina Pylkkanen, co-author of the study from New York University, said most theories of language processing assume words are understood one by one, in sequence, before being combined to yield the meaning of the whole sentence. “From this perspective, at-a-glance language processing really shouldn’t work since there’s just not enough time for all the sequential processing of words and their combination into a larger representation,” she said. However, the research offers fresh insights, revealing we can detect certain sentence structures in as little as 125 milliseconds (ms) – a timeframe similar to the blink of an eye. Pylkkanen said: “We don’t yet know exactly how this ultrafast structure detection is possible, but the general hypothesis is that when something you perceive fits really well with what you know about – in this case, we’re talking about knowledge of the grammar – this top-down knowledge can help you identify the stimulus really fast. “So just like your own car is quickly identifiable in a parking lot, certain language structures are quickly identifiable and can then give rise to a rapid effect of syntax in the brain.” The team say the findings suggest parallels with the way in which we perceive visual scenes, with Pylkkanen noting the results could have practical uses for the designers of digital media, as well as advertisers and designers of road signs. Writing in the journal Science Advances, Pylkkanen and colleagues report how they used a non-invasive scanning device to measure the brain activity of 36 participants. © 2024 Guardian News & Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 29527 - Posted: 10.26.2024

By Katarina Zimmer Adriana Weisleder knows well the benefits of being bilingual: being able to communicate with one’s community, cultivating connection with one’s heritage culture, contributing to the richness and diversity of society, and opening up professional opportunities. Research also suggests some cognitive benefits of bilingualism — such as improved multitasking — although those are more debated, says Weisleder, a developmental psychologist and language scientist of Costa Rican heritage who directs the Child Language Lab at Northwestern University near Chicago. Nearly 22 percent of Americans speak a language other than English at home; many of them are English and Spanish speakers from immigrant families. Yet many children from immigrant families in the United States struggle to develop or maintain proficiency in two languages. Some may lose their heritage language in favor of English; others may fall behind in schools where their progress is evaluated only in English. In a 2020 article in the Annual Review of Developmental Psychology, Weisleder and educational psychologist Meredith Rowe explain how a person’s environment — at a family, community and societal level — affects language acquisition. In the US, for instance, language development in children from immigrant families is influenced by parental misconceptions about raising children bilingually, a general scarcity of support for bilinguals in schools, and anti-immigrant sentiment in society more broadly. In her research, Weisleder leads in-depth studies of bilingual toddlers in different social contexts to better understand how they comprehend and learn multiple languages. She hopes her insights will help to dispel misconceptions and fears around bilingualism and improve support for children learning multiple languages.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 29526 - Posted: 10.26.2024

By Christa Lesté-Lasserre Even if your cat hasn’t gotten your tongue, it’s most likely getting your words. Without any particular training, the animals—like human babies—appear to pick up basic human language skills just by listening to us talk. Indeed, cats learn to associate images with words even faster than babies do, according to a study published this month in Scientific Reports. That means that, despite all appearances to the contrary, our furtive feline friends may actually be listening to what we say. Cats have a long history with us—about 10,000 years at last count—notes Brittany Florkiewicz, an evolutionary psychologist at Lyon College who was not involved in the work. “So it makes sense that they can learn these types of associations.” Scientists have discovered a lot about how cats respond to human language in the past 5 years. In 2019, a team in Tokyo showed that cats “know” their names, responding to them by moving their heads and ears in a particular way. In 2022, some of the same researchers demonstrated that the animals can “match” photos of their human and feline family members to their respective names. “I was very surprised, because that meant cats were able to eavesdrop on human conversations and understand words without any special reward-based training,” says Saho Takagi, a comparative cognitive scientist at Azabu University and member of the 2022 study. She wondered: Are cats “hard-wired” to learn human language? To find out, Takagi and some of her former teammates gave 31 adult pet cats—including 23 that were up for adoption at cat cafés—a type of word test designed for human babies. The scientists propped each kitty in front of a laptop and showed the animals two 9-second animated cartoon images while broadcasting audio tracks of their caregivers saying a made-up word four times. The researchers played the nonsense word “keraru” while a growing and shrinking blue-and-white unicorn appeared on the screen, or “parumo” while a red-faced cartoon Sun grew and shrank. The cats watched and heard these sequences until they got bored—signaled by a 50% drop in eye contact with the screen.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29521 - Posted: 10.19.2024

By Carl Zimmer After analyzing decades-old videos of captive chimpanzees, scientists have concluded that the animals could utter a human word: “mama.” It’s not exactly the expansive dialogue in this year’s “Kingdom of the Planet of the Apes.” But the finding, published on Thursday in the journal Scientific Reports, may offer some important clues as to how speech evolved. The researchers argue that our common ancestors with chimpanzees had brains already equipped with some of the building blocks needed for talking. Adriano Lameira, an evolutionary psychologist at the University of Warwick in Britain and one of the authors of the study, said that the ability to speak is perhaps the most important feature that sets us apart from other animals. Talking to each other allowed early humans to cooperate and amass knowledge over generations. “It is the only trait that explains why we’ve been able to change the face of the earth,” Dr. Lameira said. “We would be an unremarkable ape without it.” Scientists have long wondered why we can speak and other apes cannot. Beginning in the early 1900s, that curiosity led to a series of odd — and cruel — experiments. A few researchers tried raising apes in their own homes to see if living with humans could lead the young animals to speak. In 1947, for example, the psychologist Keith Hayes and his wife, Catherine, adopted an infant chimpanzee. They named her Viki, and, when she was five months old, they started teaching her words. After two years of training, the couple later claimed, Viki could say “papa,” “mama,” “up” and “cup.” By the 1980s, many scientists had dismissed the experiences of Viki and other adopted apes. For one, separating babies from their mothers was likely traumatic. “It’s not the sort of thing you could fund anymore, and with good reason,” said Axel Ekstrom, a speech scientist at the KTH Royal Institute of Technology in Stockholm. © 2024 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29408 - Posted: 07.27.2024

By Cathleen O’Grady Human conversations are rapid-fire affairs, with mere milliseconds passing between one person’s utterance and their partner’s response. This speedy turn taking is universal across cultures—but now it turns out that chimpanzees do it, too. By analyzing thousands of gestures from chimpanzees in five different communities in East Africa, researchers found that the animals take turns while communicating, and do so as quickly as we do. The speedy gestural conversations are also seen across chimp communities, just like in humans, the authors report today in Current Biology. The finding is “very exciting” says Maël Leroux, an evolutionary biologist at the University of Rennes who was not involved with the work. “Language is the hallmark of our species … and a central feature of language is our ability to take turns.” Finding a similar behavior in our closest living relative, he says, suggests we may have inherited this ability from our shared common ancestor. When chimps gesture—such as reaching out an arm in a begging gesture—they are most often making a request, says Gal Badihi, an animal communication researcher at the University of St Andrews. This can include things such as “groom me,” “give me,” or “travel with me.” Most of the time, the chimp’s partner does the requested behavior. But sometimes, the second chimp will respond with its own gestures instead—for instance, one chimp requesting grooming, and the other indicating where they would like to be groomed, essentially saying “groom me first.” To figure out whether these interactions resemble human turn taking, Badihi and colleagues combed through hundreds of hours of footage from a massive database of chimpanzee gestural interactions recorded by multiple researchers across decades of fieldwork in East Africa. The scientists studied the footage, describing the precise movements each chimp made when gesturing, the response of other chimps, the duration of the gestures, and other details. © 2024 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29403 - Posted: 07.23.2024

By Sara Reardon By eavesdropping on the brains of living people, scientists have created the highest-resolution map yet of the neurons that encode the meanings of various words [1]. The results hint that, across individuals, the brain uses the same standard categories to classify words — helping us to turn sound into sense. The study is based on words only in English. But it’s a step along the way to working out how the brain stores words in its language library, says neurosurgeon Ziv Williams at the Massachusetts Institute of Technology in Cambridge. By mapping the overlapping sets of brain cells that respond to various words, he says, “we can try to start building a thesaurus of meaning”. The brain area called the auditory cortex processes the sound of a word as it enters the ear. But it is the brain’s prefrontal cortex, a region where higher-order brain activity takes place, that works out a word’s ‘semantic meaning’ — its essence or gist. Previous research [2] has studied this process by analysing images of blood flow in the brain, which is a proxy for brain activity. This method allowed researchers to map word meaning to small regions of the brain. But Williams and his colleagues found a unique opportunity to look at how individual neurons encode language in real time. His group recruited ten people about to undergo surgery for epilepsy, each of whom had had electrodes implanted in their brains to determine the source of their seizures. The electrodes allowed the researchers to record activity from around 300 neurons in each person’s prefrontal cortex. © 2024 Springer Nature Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29383 - Posted: 07.06.2024

By Carl Zimmer For thousands of years, philosophers have argued about the purpose of language. Plato believed it was essential for thinking. Thought “is a silent inner conversation of the soul with itself,” he wrote. Many modern scholars have advanced similar views. Starting in the 1960s, Noam Chomsky, a linguist at M.I.T., argued that we use language for reasoning and other forms of thought. “If there is a severe deficit of language, there will be severe deficit of thought,” he wrote. As an undergraduate, Evelina Fedorenko took Dr. Chomsky’s class and heard him describe his theory. “I really liked the idea,” she recalled. But she was puzzled by the lack of evidence. “A lot of things he was saying were just stated as if they were facts — the truth,” she said. Dr. Fedorenko went on to become a cognitive neuroscientist at M.I.T., using brain scanning to investigate how the brain produces language. And after 15 years, her research has led her to a startling conclusion: We don’t need language to think. “When you start evaluating it, you just don’t find support for this role of language in thinking,” she said. When Dr. Fedorenko began this work in 2009, studies had found that the same brain regions required for language were also active when people reasoned or carried out arithmetic. But Dr. Fedorenko and other researchers discovered that this overlap was a mirage. Part of the trouble with the early results was that the scanners were relatively crude. Scientists made the most of their fuzzy scans by combining the results from all their volunteers, creating an overall average of brain activity. © 2024 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 29376 - Posted: 07.03.2024

By Amanda Heidt For the first time, a brain implant has helped a bilingual person who is unable to articulate words to communicate in both of his languages. An artificial-intelligence (AI) system coupled to the brain implant decodes, in real time, what the individual is trying to say in either Spanish or English. The findings [1], published on 20 May in Nature Biomedical Engineering, provide insights into how our brains process language, and could one day lead to long-lasting devices capable of restoring multilingual speech to people who can’t communicate verbally. “This new study is an important contribution for the emerging field of speech-restoration neuroprostheses,” says Sergey Stavisky, a neuroscientist at the University of California, Davis, who was not involved in the study. Even though the study included only one participant and more work remains to be done, “there’s every reason to think that this strategy will work with higher accuracy in the future when combined with other recent advances”, Stavisky says. The person at the heart of the study, who goes by the nickname Pancho, had a stroke at age 20 that paralysed much of his body. As a result, he can moan and grunt but cannot speak clearly. In his thirties, Pancho partnered with Edward Chang, a neurosurgeon at the University of California, San Francisco, to investigate the stroke’s lasting effects on his brain. In a groundbreaking study published in 2021 [2], Chang’s team surgically implanted electrodes on Pancho’s cortex to record neural activity, which was translated into words on a screen. Pancho’s first sentence — ‘My family is outside’ — was interpreted in English. But Pancho is a native Spanish speaker who learnt English only after his stroke. It’s Spanish that still evokes in him feelings of familiarity and belonging. “What languages someone speaks are actually very linked to their identity,” Chang says. “And so our long-term goal has never been just about replacing words, but about restoring connection for people.” © 2024 Springer Nature Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29321 - Posted: 05.23.2024

By Emily Anthes Half a century ago, one of the hottest questions in science was whether humans could teach animals to talk. Scientists tried using sign language to converse with apes and trained parrots to deploy growing English vocabularies. The work quickly attracted media attention — and controversy. The research lacked rigor, critics argued, and what seemed like animal communication could simply have been wishful thinking, with researchers unconsciously cuing their animals to respond in certain ways. In the late 1970s and early 1980s, the research fell out of favor. “The whole field completely disintegrated,” said Irene Pepperberg, a comparative cognition researcher at Boston University, who became known for her work with an African gray parrot named Alex. Today, advances in technology and a growing appreciation for the sophistication of animal minds have renewed interest in finding ways to bridge the species divide. Pet owners are teaching their dogs to press “talking buttons” and zoos are training their apes to use touch screens. In a cautious new paper, a team of scientists outlines a framework for evaluating whether such tools might give animals new ways to express themselves. The research is designed “to rise above some of the things that have been controversial in the past,” said Jennifer Cunha, a visiting research associate at Indiana University. The paper, which is being presented at a science conference on Tuesday, focuses on Ms. Cunha’s parrot, an 11-year-old Goffin’s cockatoo named Ellie. Since 2019, Ms. Cunha has been teaching Ellie to use an interactive “speech board,” a tablet-based app that contains more than 200 illustrated icons, corresponding to words and phrases including “sunflower seeds,” “happy” and “I feel hot.” When Ellie presses on an icon with her tongue, a computerized voice speaks the word or phrase aloud. In the new study, Ms. Cunha and her colleagues did not set out to determine whether Ellie’s use of the speech board amounted to communication. Instead, they used quantitative, computational methods to analyze Ellie’s icon presses to learn more about whether the speech board had what they called “expressive and enrichment potential.” © 2024 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29306 - Posted: 05.14.2024

Ian Sample Science editor Dogs understand what certain words stand for, according to researchers who monitored the brain activity of willing pooches while they were shown balls, slippers, leashes and other highlights of the domestic canine world. The finding suggests that the dog brain can reach beyond commands such as “sit” and “fetch”, and the frenzy-inducing “walkies”, to grasp the essence of nouns, or at least those that refer to items the animals care about. “I think the capacity is there in all dogs,” said Marianna Boros, who helped arrange the experiments at Eötvös Loránd University in Hungary. “This changes our understanding of language evolution and our sense of what is uniquely human.” Scientists have long been fascinated by whether dogs can truly learn the meanings of words and have built up some evidence to back the suspicion. A survey in 2022 found that dog owners believed their furry companions responded to between 15 and 215 words. More direct evidence for canine cognitive prowess came in 2011 when psychologists in South Carolina reported that after three years of intensive training, a border collie called Chaser had learned the names of more than 1,000 objects, including 800 cloth toys, 116 balls and 26 Frisbees. However, studies have said little about what is happening in the canine brain when it processes words. To delve into the mystery, Boros and her colleagues invited 18 dog owners to bring their pets to the laboratory along with five objects the animals knew well. These included balls, slippers, Frisbees, rubber toys, leads and other items. At the lab, the owners were instructed to say words for objects before showing their dog either the correct item or a different one. For example, an owner might say “Look, here’s the ball”, but hold up a Frisbee instead. The experiments were repeated multiple times with matching and non-matching objects. © 2024 Guardian News & Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29214 - Posted: 03.26.2024

By Cathleen O’Grady Why do some children learn to talk earlier than others? Linguists have pointed to everything from socioeconomic status to gender to the number of languages their parents speak. But a new study finds a simpler explanation. An analysis of nearly 40,000 hours of audio recordings from children around the world suggests kids speak more when the adults around them are more talkative, which may also give them a larger vocabulary early in life. Factors such as social class appear to make no difference, researchers report this month in the Proceedings of the National Academy of Sciences. The paper is a “wonderful, impactful, and much needed contribution to the literature,” says Ece Demir-Lira, a developmental scientist at the University of Iowa who was not involved in the work. By looking at real-life language samples from six different continents, she says, the study provides a global view of language development sorely lacking from the literature. Most studies on language learning have focused on children in Western, industrialized nations. To build a more representative data set, Harvard University developmental psychologist Elika Bergelson and her collaborators scoured the literature for studies that had used LENA devices: small audio recorders that babies can wear—tucked into a pocket on a specially made vest—for days at a time. These devices function as a kind of “talk pedometer,” with an algorithm that estimates how much its wearer speaks, as well as how much language they hear in their environment—from parents, other adults, and even siblings. The team asked 18 research groups across 12 countries whether they would share their data from the devices, leaving them with a whopping 2865 days of recordings from 1001 children. Many of the kids, who ranged from 2 months to 4 years old, were from English-speaking families, but the data also included speakers of Dutch, Spanish, Vietnamese, and Finnish, as well as Yélî Dnye (Papua New Guinea), Wolof (Senegal), and Tsimané (Bolivia). Combining these smaller data sets gave the researchers a more powerful, diverse sample.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 29061 - Posted: 12.22.2023

By Sonia Shah Can a mouse learn a new song? Such a question might seem whimsical. Though humans have lived alongside mice for at least 15,000 years, few of us have ever heard mice sing, because they do so in frequencies beyond the range detectable by human hearing. As pups, their high-pitched songs alert their mothers to their whereabouts; as adults, they sing in ultrasound to woo one another. For decades, researchers considered mouse songs instinctual, the fixed tunes of a windup music box, rather than the mutable expressions of individual minds. But no one had tested whether that was really true. In 2012, a team of neurobiologists at Duke University, led by Erich Jarvis, a neuroscientist who studies vocal learning, designed an experiment to find out. The team surgically deafened five mice and recorded their songs in a mouse-size sound studio, tricked out with infrared cameras and microphones. They then compared sonograms of the songs of deafened mice with those of hearing mice. If the mouse songs were innate, as long presumed, the surgical alteration would make no difference at all. Jarvis and his researchers slowed down the tempo and shifted the pitch of the recordings, so that they could hear the songs with their own ears. Those of the intact mice sounded “remarkably similar to some bird songs,” Jarvis wrote in a 2013 paper that described the experiment, with whistlelike syllables similar to those in the songs of canaries and the trills of dolphins. Not so the songs of the deafened mice: Deprived of auditory feedback, their songs became degraded, rendering them nearly unrecognizable. They sounded, the scientists noted, like “squawks and screams.” Not only did the tunes of a mouse depend on its ability to hear itself and others, but also, as the team found in another experiment, a male mouse could alter the pitch of its song to compete with other male mice for female attention. Inside these murine skills lay clues to a puzzle many have called “the hardest problem in science”: the origins of language. In humans, “vocal learning” is understood as a skill critical to spoken language. Researchers had already discovered the capacity for vocal learning in species other than humans, including in songbirds, hummingbirds, parrots, cetaceans such as dolphins and whales, pinnipeds such as seals, elephants and bats. But given the centuries-old idea that a deep chasm separated human language from animal communications, most scientists understood the vocal learning abilities of other species as unrelated to our own — as evolutionarily divergent as the wing of a bat is to that of a bee. The apparent absence of intermediate forms of language — say, a talking animal — left the question of how language evolved resistant to empirical inquiry. © 2023 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28921 - Posted: 09.21.2023

By R. Douglas Fields One day, while threading a needle to sew a button, I noticed that my tongue was sticking out. The same thing happened later, as I carefully cut out a photograph. Then another day, as I perched precariously on a ladder painting the window frame of my house, there it was again! What’s going on here? I’m not deliberately protruding my tongue when I do these things, so why does it keep making appearances? After all, it’s not as if that versatile lingual muscle has anything to do with controlling my hands. Right? Yet as I would learn, our tongue and hand movements are intimately interrelated at an unconscious level. This peculiar interaction’s deep evolutionary roots even help explain how our brain can function without conscious effort. A common explanation for why we stick out our tongue when we perform precision hand movements is something called motor overflow. In theory, it can take so much cognitive effort to thread a needle (or perform other demanding fine motor skills) that our brain circuits get swamped and impinge on adjacent circuits, activating them inappropriately. It’s certainly true that motor overflow can happen after neural injury or in early childhood when we are learning to control our bodies. But I have too much respect for our brains to buy that “limited brain bandwidth” explanation. How, then, does this peculiar hand-mouth cross-talk really occur? Tracing the neural anatomy of tongue and hand control to pinpoint where a short circuit might happen, we find first of all that the two are controlled by completely different nerves. This makes sense: A person who suffers a spinal cord injury that paralyzes their hands does not lose their ability to speak. That’s because the tongue is controlled by a cranial nerve, but the hands are controlled by spinal nerves. Simons Foundation

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 5: The Sensorimotor System
Link ID: 28894 - Posted: 08.30.2023

By McKenzie Prillaman When speaking to young kids, humans often use squeaky, high-pitched baby talk. It turns out that some dolphins do, too. Bottlenose dolphin moms modify their individually distinctive whistles when their babies are nearby, researchers report June 26 in the Proceedings of the National Academy of Sciences. This “parentese” might enhance attention, bonding and vocal learning in calves, as it seems to do in humans. During the first few months of life, each common bottlenose dolphin (Tursiops truncatus) develops a unique tune, or signature whistle, akin to a name (SN: 7/22/13). The dolphins shout out their own “names” in the water “likely as a way to keep track of each other,” says marine biologist Laela Sayigh of the Woods Hole Oceanographic Institution. But dolphin moms seem to tweak that tune in the presence of their calves, which tend to stick by mom’s side for three to six years. It’s a change that Sayigh first noticed in a 2009 study published by her student. But “it was just one little piece of this much larger study,” she says. To follow up on that observation, Sayigh and colleagues analyzed signature whistles from 19 female dolphins both with and without their babies close by. Audio recordings were captured from a wild population that lives near Sarasota Bay, Fla., during catch-and-release health assessments that occurred from 1984 to 2018. The researchers examined 40 instances of each dolphin’s signature whistle, verified by the unique way each vocalization’s frequencies change over time. Half of each dolphin’s whistles were voiced in the presence of her baby. When youngsters were around, the moms’ whistles contained, on average, a higher maximum and slightly lower minimum pitch compared with those uttered in the absence of calves, contributing to an overall widened pitch range. © Society for Science & the Public 2000–2023.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28835 - Posted: 06.28.2023

By Natalia Mesa Most people will learn one or two languages in their lives. But Vaughn Smith, a 47-year-old carpet cleaner from Washington, D.C., speaks 24. Smith is a hyperpolyglot—a rare individual who speaks more than 10 languages. In a new brain imaging study, researchers peered inside the minds of polyglots like Smith to tease out how language-specific regions in their brains respond to hearing different languages. Familiar languages elicited a stronger reaction than unfamiliar ones, they found, with one important exception: native languages, which provoked relatively little brain activity. This, the authors note, suggests there’s something special about the languages we learn early in life. This study “contributes to our understanding of how our brain learns new things,” says Augusto Buchweitz, a cognitive neuroscientist at the University of Connecticut, Storrs, who was not involved in the work. “The earlier you learn something, the more your brain [adapts] and probably uses less resources.” Scientists have largely ignored what’s going on inside the brains of polyglots—people who speak more than five languages—says Ev Fedorenko, a cognitive neuroscientist at the Massachusetts Institute of Technology who led the new study. “There’s oodles of work on individuals whose language systems are not functioning properly,” she says, but almost none on people with advanced language skills. That’s partly because they account for only 1% of people globally, making it difficult to find enough participants for research. But studying this group can help linguists understand the human “language network,” a set of specialized brain areas located in the left frontal and temporal lobes. These areas help humans with the most basic aspect of understanding language: connecting sounds with meaning, Fedorenko says.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28654 - Posted: 02.04.2023