Links for Keyword: Language



Links 1 - 20 of 712

David Farrier

Charles Darwin suggested that humans learned to speak by mimicking birdsong: our ancestors’ first words may have been a kind of interspecies exchange. Perhaps it won’t be long before we join the conversation once again. The race to translate what animals are saying is heating up, with riches as well as a place in history at stake. The Jeremy Coller Foundation has promised $10m to whichever researchers can crack the code. This is a race fuelled by generative AI; large language models can sort through millions of recorded animal vocalisations to find their hidden grammars.

Most projects focus on cetaceans because, like us, they learn through vocal imitation and, also like us, they communicate via complex arrangements of sound that appear to have structure and hierarchy. Sperm whales communicate in codas – rapid sequences of clicks, each as brief as one-thousandth of a second. Project Ceti (the Cetacean Translation Initiative) is using AI to analyse codas in order to reveal the mysteries of sperm whale speech. There is evidence the animals take turns, use specific clicks to refer to one another, and even have distinct dialects. Ceti has already isolated a click that may be a form of punctuation, and they hope to speak whaleish as soon as 2026.

The linguistic barrier between species is already looking porous. Last month, Google released DolphinGemma, an AI program to translate dolphins, trained on 40 years of data. In 2013, scientists using an AI algorithm to sort dolphin communication identified a new click in the animals’ interactions with one another, which they recognised as a sound they had previously trained the pod to associate with sargassum seaweed – the first recorded instance of a word passing from one species into another’s native vocabulary.

The prospect of speaking dolphin or whale is irresistible. And it seems that they are just as enthusiastic. In November last year, scientists in Alaska recorded an acoustic “conversation” with a humpback whale called Twain, in which they exchanged a call-and-response form known as “whup/throp” with the animal over a 20-minute period. In Florida, a dolphin named Zeus was found to have learned to mimic the vowel sounds A, E, O and U. © 2025 Guardian News & Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29821 - Posted: 06.04.2025

Sofia Marie Haley

I approach a flock of mountain chickadees feasting on pine nuts. A cacophony of sounds, coming from the many different bird species that rely on the Sierra Nevada’s diverse pine cone crop, fills the crisp mountain air. The strong “chick-a-dee” call sticks out among the bird vocalizations. The chickadees are communicating to each other about food sources – and my approach.

Mountain chickadees are members of the family Paridae, which is known for its complex vocal communication systems and cognitive abilities. Along with my advisers, behavioral ecologists Vladimir Pravosudov and Carrie Branch, I’m studying mountain chickadees at our study site in Sagehen Experimental Forest, outside of Truckee, California, for my doctoral research. I am focusing on how these birds convey a variety of information with their calls.

The chilly autumn air on top of the mountain reminds me that it will soon be winter. It is time for the mountain chickadees to leave the socially monogamous partnerships they had while raising their chicks to form larger flocks. Forming social groups is not always simple; young chickadees are joining new flocks, and social dynamics need to be established before the winter storms arrive. I can hear them working this out vocally. There’s an unusual variety of complex calls, with melodic “gargle calls” at the forefront, coming from individuals announcing their dominance over other flock members.

Examining and decoding bird calls is becoming an increasingly popular field of study, as scientists like me are discovering that many birds – including mountain chickadees – follow systematic rules to share important information, stringing together syllables like words in a sentence. © 2010–2025, The Conversation US, Inc.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29807 - Posted: 05.28.2025

By Erin Wayman

Barbara J. King remembers the first time she met Kanzi the bonobo. It was the late 1990s, and the ape was living in a research center in Georgia. King walked in and told Kanzi she had a present. A small, round object created a visible outline in the front pocket of her jeans. Kanzi picked up a board checkered with colorful symbols and pointed to the one meaning “egg” and then to “question.” An egg? No, not an egg. A ball. But “he asked an on-point question, and even an extremely simple conversation was just amazing,” says King, a biological anthropologist at William & Mary in Williamsburg, Va.

Born in 1980, Kanzi began learning to communicate with symbols as an infant. He ultimately mastered more than 300 symbols, combined them in novel ways and understood spoken English. Kanzi was arguably the most accomplished among a cohort of “talking” apes that scientists intensely studied to understand the origins of language and to probe the ape mind. He was also the last of his kind. In March, Kanzi died. “It’s not just Kanzi that is gone; it’s this whole field of inquiry,” says comparative psychologist Heidi Lyn of the University of South Alabama in Mobile. Lyn had worked with Kanzi on and off for 30 years. Kanzi’s death offers an opportunity to reflect on what decades of ape-language experiments taught us — and at what cost.

A history of ape-language experiments

Language — communication marked by using symbols, grammar and syntax — has long been considered among the abilities that make humans unique. And when it comes to delineating the exact boundary separating us from other animals, scientists often turn to our closest living relatives, the great apes. © Society for Science & the Public 2000–2025.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29797 - Posted: 05.21.2025

By Mikael Angelo Francisco

A comic explains the highs and lows of birdsong.

Mikael Angelo Francisco is a science journalist and illustrator from the Philippines who enjoys writing about paleontology, biodiversity, environmental conservation, and science in pop culture. He has written and edited books about media literacy, Filipino scientists, and science trivia. © 2025 NautilusNext Inc.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29794 - Posted: 05.21.2025

By Christa Lesté-Lasserre

Can a robot arm wave hello to a cuttlefish—and get a hello back? Could a dolphin’s whistle actually mean “Where are you?” And are monkeys quietly naming each other while we fail to notice? These are just a few of the questions tackled by the finalists for this year’s Dolittle prize, a $100,000 award recognizing early breakthroughs in artificial intelligence (AI)-powered interspecies communication.

The winning project—announced today—explores how dolphins use shared, learned whistles that may carry specific meanings—possibly even warning each other about danger, or just expressing confusion. The other contending teams—working with marmosets, cuttlefish, and nightingales—are also pushing the boundaries of what human-animal communication might look like. The prize marks an important milestone in the Coller Dolittle Challenge, a 5-year competition offering up to $10 million to the first team that can achieve genuine two-way communication with animals.

“Part of how this initiative was born came from my skepticism,” says Yossi Yovel, a neuroecologist at Tel Aviv University and one of the prize’s organizers. “But we really have much better tools now. So this is the time to revisit a lot of our previous assumptions about two-way communication within the animal’s own world.”

Science caught up with the four finalists to hear how close we really are to cracking the animal code. This interview has been edited for clarity and length. Cuttlefish (Sepia officinalis and S. bandensis) lack ears and voices, but they apparently make up for this with a kind of sign language. When shown videos of comrades waving their arms, they wave back.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29788 - Posted: 05.17.2025

By Jake Buehler

Grunts, barks, screams and pants ring through Taï National Park in Côte d’Ivoire. Chimpanzees there combine these different calls like linguistic Legos to relay complex meanings when communicating, researchers report May 9 in Science Advances. Chimps can combine and flexibly rearrange pairs of sounds to convey different ideas or meanings, an ability that investigators have not documented in other nonhuman animals. This system may represent a key evolutionary transition between vocal communication strategies of other animals and the syntax rules that structure human languages.

“The difference between human language and how other animals communicate is really about how we combine sounds to form words, and how we combine words to form sentences,” says Cédric Girard-Buttoz, an evolutionary biologist at CNRS in Lyon, France. Chimpanzees (Pan troglodytes) were known to have a particularly complicated vocal repertoire, with about a dozen single sounds that they can combine into hundreds of sequences. But it was unclear if the apes used multiple approaches when combining sounds to make new meanings, as in human language.

In 2019 and 2020, Girard-Buttoz and his colleagues recorded 53 different adult chimpanzees living in the Taï forest. In all, the team analyzed over 4,300 sounds and described 16 different “bigrams” — short sequences of two sounds, like a grunt followed by a bark, or a panted hoo followed by a scream. The team then used statistical analyses to map those bigrams to behaviors to reveal some of the bigrams’ meanings.

The result? Chimpanzees don’t combine sounds in a single, consistent way. They have at least four different methods — a first seen outside of humans. © Society for Science & the Public 2000–2025
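The mapping step described above, pairing each two-call combination with the behavior seen when it was produced, can be illustrated with a small sketch. Everything below is invented for illustration (the call labels, behaviors and counts); it is not the study's data or its statistical pipeline, just the general shape of a bigram-by-context tally.

```python
from collections import Counter, defaultdict

def extract_bigrams(calls):
    """Return adjacent two-call combinations ("bigrams") from one vocal sequence."""
    return list(zip(calls, calls[1:]))

# Toy recordings: each pairs a call sequence with the behavior observed at the time.
recordings = [
    (["grunt", "bark"], "feeding"),
    (["grunt", "bark"], "feeding"),
    (["grunt", "bark"], "travel"),
    (["panted_hoo", "scream"], "aggression"),
    (["panted_hoo", "scream"], "aggression"),
]

bigram_contexts = defaultdict(Counter)
for calls, behavior in recordings:
    for bigram in extract_bigrams(calls):
        bigram_contexts[bigram][behavior] += 1

# Crude "meaning" estimate: the behavior that most often accompanies each bigram.
for bigram, contexts in bigram_contexts.items():
    behavior, count = contexts.most_common(1)[0]
    total = sum(contexts.values())
    print(f"{' + '.join(bigram)}: most likely context {behavior} ({count}/{total})")
```

The real analysis, of course, must also rule out chance co-occurrence statistically and distinguish the four different ways the chimps combine calls; the tally above only shows the basic bookkeeping.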

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29781 - Posted: 05.11.2025

By Rachel Lehmann-Haupt

On a brisk January evening this year, I was speeding down I–295 in northeast Florida, under a full moon, to visit my dad’s brain. As I drove past shadowy cypress swamps, sinewy river estuaries, and gaudy-hued billboards of condominiums with waterslides and red umbrellas boasting, “Best place to live in Florida,” I was aware of the strangeness of my visit. Most people pay respects to their loved ones at memorials and grave sites, but I was intensely driven to check in on the last remaining physical part of my dad, immortalized in what seemed like the world’s most macabre library.

Michael DeTure, a professor of neuroscience, stepped out of a golf cart to meet me. “Welcome to the bunker. Just 8,000 of your quietest friends in here,” he said in a melodic southern drawl, grinning in a way that told me he’s made this joke before. The bunker is a nondescript warehouse, part of the Mayo Clinic’s Jacksonville, Florida campus that houses its brain bank. DeTure opened the warehouse door, and I was met with a blast of cold air. In the back of the warehouse sat rows of buzzing white freezers. DeTure pointed to the freezer where my dad’s brain sat in a drawer in a plastic bag with his name written on it in black Sharpie pen. I welled up with tears and a feeling of intense fear. The room suddenly felt too cold, too sterile, too bright, and my head started to spin. I wanted to run away from this place.

And then my brain escaped for me. I saw my dad on a beach on Cape Cod in 1977. He was in a bathing suit, shirtless, lying on a towel. I was 7 years old and snuggled up to him to protect myself from the wind. He was reading aloud to my mom and me from Evelyn Waugh’s novel, A Handful of Dust, whose title is from T.S. Eliot’s poem, “The Waste Land”: “I will show you fear in a handful of dust.” He was reading the part about Tony Last, an English gentleman, being imprisoned by an eccentric recluse who forces him to read Dickens endlessly. © 2025 NautilusNext Inc.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29776 - Posted: 05.07.2025

By Michael Erard

In many Western societies, parents eagerly await their children’s first words, then celebrate their arrival. There’s also vast scientific and popular attention to early child language. Yet there is (and was) surprisingly little hullabaloo sparked by the first words and hand signs displayed by great apes.

WHAT I LEFT OUT is a recurring feature in which book authors are invited to share anecdotes and narratives that, for whatever reason, did not make it into their final manuscripts. In this installment, author and linguist Michael Erard shares a story that didn’t make it into his recent book “Bye Bye I Love You: The Story of Our First and Last Words” (MIT Press, 344 pages).

As far back as 1916, scientists have been exploring the linguistic abilities of humans’ closest relatives by raising them in language-rich environments. But the first moments in which these animals did cross a communication threshold created relatively little fuss in both the scientific literature and the media. Why?

Consider, for example, the first sign by Washoe, a young chimpanzee that was captured in the wild and transported in 1966 to a laboratory at the University of Nevada, where she was studied by two researchers, Allen Gardner and Beatrice Gardner. Washoe was taught American Sign Language in family-like settings that would be conducive to communicative situations. “Her human companions,” wrote the Gardners in 1969, “were to be friends and playmates as well as providers and protectors, and they were to introduce a great many games and activities that would be likely to result in maximum interaction.”

When the Gardners wrote about the experiments, they did note her first uses of specific signs, such as “toothbrush,” that didn’t seem to echo a sign a human had just used. These moments weren’t ignored, yet you have to pay very close attention to their writings to find the slightest awe or enthusiasm. Fireworks it is not.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29753 - Posted: 04.23.2025

By Carl Zimmer

After listening to hundreds of hours of ape calls, a team of scientists say they have detected a hallmark of human language: the ability to put together strings of sounds to create new meanings. The provocative finding, published Thursday in the journal Science, drew praise from some scholars and skepticism from others.

Federica Amici, a primatologist at the University of Leipzig in Germany, said that the study helped place the roots of language even further back in time, to millions of years before the emergence of our species. “Differences between humans and other primates, including in communication, are far less distinct and well-defined than we have long assumed,” Dr. Amici said. But other researchers said that the study, which had been conducted on bonobos, close relatives of chimpanzees, had little to reveal about how we use words. “The present findings don’t tell us anything about the evolution of language,” said Johan Bolhuis, a neurobiologist at Utrecht University in the Netherlands.

Many species can communicate with sounds. But when an animal makes a sound, it typically means just one thing. Monkeys, for instance, can make one warning call in reference to a leopard and a different one for an incoming eagle. In contrast, we humans can string words together in ways that combine their individual meanings into something new. Suppose I say, “I am a bad dancer.” When I combine the words “bad” and “dancer,” I no longer mean them independently; I’m not saying, “I am a bad person who also happens to dance.” Instead, I mean that I don’t dance well.

Linguists call this compositionality, and have long considered it an essential ingredient of language. “It’s the force behind language’s creativity and productivity,” said Simon Townsend, a comparative psychologist at the University of Zurich in Switzerland. “Theoretically, you can come up with any phrase that has never been uttered before.” © 2025 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29730 - Posted: 04.05.2025

Miryam Naddaf

A brain-reading implant that translates neural signals into audible speech has allowed a woman with paralysis to hear what she intends to say nearly instantly. Researchers enhanced the device — known as a brain–computer interface (BCI) — with artificial intelligence (AI) algorithms that decoded sentences as the woman thought of them, and then spoke them out loud using a synthetic voice. Unlike previous efforts, which could produce sounds only after users finished an entire sentence, the current approach can simultaneously detect words and turn them into speech within 3 seconds. The findings, published in Nature Neuroscience on 31 March, represent a big step towards BCIs that are of practical use.

Older speech-generating BCIs are similar to “a WhatsApp conversation”, says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands, who was not involved with the work. “I write a sentence, you write a sentence and you need some time to write a sentence again,” he says. “It just doesn’t flow like a normal conversation.” BCIs that stream speech in real time are “the next level” in research because they allow users to convey the tone and emphasis that are characteristic of natural speech, he adds.

The study participant, Ann, lost her ability to speak after a stroke in her brainstem in 2005. Some 18 years later, she underwent surgery to place a paper-thin rectangle containing 253 electrodes on the surface of her brain cortex. The implant can record the combined activity of thousands of neurons at the same time. Researchers personalized the synthetic voice to sound like Ann’s own voice from before her injury, by training AI algorithms on recordings from her wedding video.

During the latest study, Ann silently mouthed 100 sentences from a set of 1,024 words and 50 phrases that appeared on a screen. The BCI device captured her neural signals every 80 milliseconds, starting 500 milliseconds before Ann started to silently say the sentences. It produced between 47 and 90 words per minute (natural conversation happens at around 160 words per minute).
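What "streaming" means here is that decoding happens window by window rather than only after a full sentence. The sketch below is a schematic of that idea and not the authors' system: the 80-millisecond window size comes from the article, while the decoder itself is a placeholder and all function names are made up.

```python
import random

WINDOW_MS = 80  # the article reports neural activity is sampled every 80 milliseconds

def decode_window(neural_window):
    """Placeholder for the real decoder, which maps one window of electrode
    features to speech output; here it just returns a canned word or nothing."""
    return random.choice([None, None, "hello", "there", "friend"])

def streaming_decode(neural_stream):
    """Emit words as soon as each window is decoded, rather than waiting for the
    whole sentence, which is the key difference from earlier sentence-level BCIs."""
    for window in neural_stream:
        word = decode_window(window)
        if word is not None:
            yield word  # in the real device, passed straight on to the voice synthesizer

# Illustrative run over 20 simulated windows (roughly 1.6 seconds of recording).
print(" ".join(streaming_decode(range(20))))
```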

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29726 - Posted: 04.02.2025

Nell Greenfieldboyce

Putting the uniquely human version of a certain gene into mice changed the way that those animals vocalized to each other, suggesting that this gene may play a role in speech and language. Mice make a lot of calls in the ultrasonic range that humans can't hear, and the high-frequency vocalizations made by the genetically altered mice were more complex and showed more variation than those made by normal mice, according to a new study in the journal Nature Communications.

The fact that the genetic change produced differences in vocal behavior was "really exciting," says Erich Jarvis, a scientist at Rockefeller University in New York who worked on this research. Still, he cautioned, "I don't think that one gene is going to be responsible — poof! — and you've got spoken language."

For years, scientists have been trying to find the different genes that may have been involved in the evolution of speech, as language is one of the key features that sets humans apart from the rest of the animal kingdom. "There are other genes implicated in language that have not been human-specific," says Robert Darnell, a neuroscientist and physician at Rockefeller University, noting that one gene called FOXP2 has been linked to speech disorders. He was interested in a different gene called NOVA1, which he has studied for over two decades.

NOVA1 is active in the brain, where it produces a protein that can affect the activity of other genes. NOVA1 is found in living creatures from mammals to birds, but humans have a unique variant. Yoko Tajima, a postdoctoral associate in Darnell's lab, led an effort to put this variant into mice, to see what effect it would have. © 2025 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29678 - Posted: 02.19.2025

By Emily Anthes

The English language is full of wonderful words, from “anemone” and “aurora” to “zenith” and “zodiac.” But these are special occasion words, sprinkled sparingly into writing and conversation. The words in heaviest rotation are short and mundane. And they follow a remarkable statistical rule, which is universal across human languages: The most common word, which in English is “the,” is used about twice as frequently as the second most common word (“of,” in English), three times as frequently as the third most common word (“and”), continuing in that pattern.

Now, an international, interdisciplinary team of scientists has found that the intricate songs of humpback whales, which can spread rapidly from one population to another, follow the same rule, which is known as Zipf’s law. The scientists are careful to note that whale song is not equivalent to human language. But the findings, they argue, suggest that forms of vocal communication that are complex and culturally transmitted may have shared structural properties. “We expect them to evolve to be easy to learn,” said Simon Kirby, an expert on language evolution at the University of Edinburgh and an author of the new study. The results were published on Thursday in the journal Science.

“We think of language as this culturally evolving system that has to essentially be passed on by its hosts, which are humans,” Dr. Kirby added. “What’s so gratifying for me is to see that same logic seems to also potentially apply to whale song.”

Zipf’s law, which was named for the linguist George Kingsley Zipf, holds that in any given language the frequency of a word is inversely proportional to its rank. There is still considerable debate over why this pattern exists and how meaningful it is. But some research suggests that this kind of skewed word distribution can make language easier to learn. © 2025 The New York Times Company
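The inverse-proportionality claim is easy to make concrete: count word frequencies, rank them, and compare each observed count with the top count divided by the rank. The snippet below is a minimal sketch of that check; the toy sentence and the function name are illustrative, and a meaningful test needs a large corpus rather than a repeated phrase.

```python
from collections import Counter

def zipf_check(text, top_n=5):
    """Rank words by frequency and compare each count with the Zipf
    prediction: freq(rank r) is roughly freq(rank 1) / r."""
    counts = Counter(text.lower().split()).most_common(top_n)
    top_freq = counts[0][1]
    for rank, (word, freq) in enumerate(counts, start=1):
        predicted = top_freq / rank  # frequency inversely proportional to rank
        print(f"rank {rank}: {word!r} observed {freq}, Zipf-predicted {predicted:.1f}")

# Toy text; on real corpora "the", "of" and "and" show the 1, 1/2, 1/3 pattern.
zipf_check("the cat sat on the mat and the dog sat by the door " * 4)
```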

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29662 - Posted: 02.08.2025

By Avery Schuyler Nunn

Migratory songbirds may talk to one another more than we thought as they wing through the night. Each fall, hundreds of millions of birds from dozens of species co-migrate, some of them making dangerous journeys across continents. Come spring, they return home. Scientists have long believed that these songbirds rely on instinct and experience alone to make the trek. But new research from a team of ornithologists at the University of Illinois suggests they may help one another out—even across species—through their nocturnal calls. “They broadcast vocal pings into the sky, potentially sharing information about who they are and what lies ahead,” says ornithologist Benjamin Van Doren of the University of Illinois, Urbana-Champaign and a co-author of the study, published in Current Biology.

Using ground-based microphones across 26 sites in eastern North America, Van Doren and his team recorded over 18,300 hours of nocturnal flight calls from 27 different species of birds—brief, high-pitched vocalizations that some warblers, thrushes, and sparrows emit while flying. To process the enormous dataset of calls, they used machine-learning tools, including a customized version of Merlin, the Cornell Lab of Ornithology’s bird-call identification app.

The analysis revealed that birds of different species were flying in close proximity and calling to one another in repeated patterns that suggested a kind of code. Flight proximity was closest between migrating songbird species that made similar calls in pitch and rhythm, traveled at similar speeds, and had similar wing shapes. © 2025 NautilusNext Inc.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29661 - Posted: 02.08.2025

By Janna Levin

It’s fair to say that enjoyment of a podcast would be severely limited without the human capacity to create and understand speech. That capacity has often been cited as a defining characteristic of our species, and one that sets us apart in the long history of life on Earth. Yet we know that other species communicate in complex ways. Studies of the neurological foundations of language suggest that birdsong, or communication among bats or elephants, originates with brain structures similar to our own. So why do some species vocalize while others don’t? In this episode, Erich Jarvis, who studies behavior and neurogenetics at the Rockefeller University, chats with Janna Levin about the surprising connections between human speech, birdsong and dance.

JANNA LEVIN: All animals exhibit some form of communication, from the primitive hiss of a lizard to the complex gestures natural to chimps, or the songs shared by whales. But human language does seem exceptional, a vast and discrete cognitive leap. Yet recent research is finding surprising neurological connections between our expressive speech and the types of communication innate to other animals, giving us new ideas about the biological and developmental origins of language. Erich is a professor at the Rockefeller University and a Howard Hughes Medical Institute investigator. At Rockefeller, he directs the Field Research Center of Ethology and Ecology. He also directs the Neurogenetics Lab of Language and codirects the Vertebrate Genome Lab, where he studies song-learning birds and other species to gain insight into the mechanisms underlying language and vocal learning.

ERICH JARVIS: So, the first part: Language is built-in genetically in us humans. We’re born with the capacity to learn how to produce and how to understand language, and pass it on culturally from one generation to the next. The actual detail is learned, but the actual plan in the brain is there. Second part of your question: Is it, you know, special or unique to humans? It is specialized in humans, but certainly many components of what gives rise to language are not unique to humans. There’s a spectrum of abilities out there in other species that we share some aspects of with other species. © 2024 Simons Foundation

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29572 - Posted: 11.23.2024

Nicola Davis, Science correspondent

Whether it is news headlines or WhatsApp messages, modern humans are inundated with short pieces of text. Now researchers say they have unpicked how we get their gist in a single glance.

Prof Liina Pylkkanen, co-author of the study from New York University, said most theories of language processing assume words are understood one by one, in sequence, before being combined to yield the meaning of the whole sentence. “From this perspective, at-a-glance language processing really shouldn’t work since there’s just not enough time for all the sequential processing of words and their combination into a larger representation,” she said.

However, the research offers fresh insights, revealing we can detect certain sentence structures in as little as 125 milliseconds (ms) – a timeframe similar to the blink of an eye. Pylkkanen said: “We don’t yet know exactly how this ultrafast structure detection is possible, but the general hypothesis is that when something you perceive fits really well with what you know about – in this case, we’re talking about knowledge of the grammar – this top-down knowledge can help you identify the stimulus really fast.

“So just like your own car is quickly identifiable in a parking lot, certain language structures are quickly identifiable and can then give rise to a rapid effect of syntax in the brain.”

The team say the findings suggest parallels with the way in which we perceive visual scenes, with Pylkkanen noting the results could have practical uses for the designers of digital media, as well as advertisers and designers of road signs. Writing in the journal Science Advances, Pylkkanen and colleagues report how they used a non-invasive scanning device to measure the brain activity of 36 participants. © 2024 Guardian News & Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 29527 - Posted: 10.26.2024

By Katarina Zimmer

Adriana Weisleder knows well the benefits of being bilingual: being able to communicate with one’s community, cultivating connection with one’s heritage culture, contributing to the richness and diversity of society, and opening up professional opportunities. Research also suggests some cognitive benefits of bilingualism — such as improved multitasking — although those are more debated, says Weisleder, a developmental psychologist and language scientist of Costa Rican heritage who directs the Child Language Lab at Northwestern University near Chicago.

Nearly 22 percent of Americans speak a language other than English at home; many of them are English and Spanish speakers from immigrant families. Yet many children from immigrant families in the United States struggle to develop or maintain proficiency in two languages. Some may lose their heritage language in favor of English; others may fall behind in schools where their progress is evaluated only in English.

In a 2020 article in the Annual Review of Developmental Psychology, Weisleder and educational psychologist Meredith Rowe explain how a person’s environment — at a family, community and societal level — affects language acquisition. In the US, for instance, language development in children from immigrant families is influenced by parental misconceptions about raising children bilingually, a general scarcity of support for bilinguals in schools, and anti-immigrant sentiment in society more broadly.

In her research, Weisleder leads in-depth studies of bilingual toddlers in different social contexts to better understand how they comprehend and learn multiple languages. She hopes her insights will help to dispel misconceptions and fears around bilingualism and improve support for children learning multiple languages.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 29526 - Posted: 10.26.2024

By Christa Lesté-Lasserre

Even if your cat hasn’t gotten your tongue, it’s most likely getting your words. Without any particular training, the animals—like human babies—appear to pick up basic human language skills just by listening to us talk. Indeed, cats learn to associate images with words even faster than babies do, according to a study published this month in Scientific Reports. That means that, despite all appearances to the contrary, our furtive feline friends may actually be listening to what we say.

Cats have a long history with us—about 10,000 years at last count—notes Brittany Florkiewicz, an evolutionary psychologist at Lyon College who was not involved in the work. “So it makes sense that they can learn these types of associations.”

Scientists have discovered a lot about how cats respond to human language in the past 5 years. In 2019, a team in Tokyo showed that cats “know” their names, responding to them by moving their heads and ears in a particular way. In 2022, some of the same researchers demonstrated that the animals can “match” photos of their human and feline family members to their respective names. “I was very surprised, because that meant cats were able to eavesdrop on human conversations and understand words without any special reward-based training,” says Saho Takagi, a comparative cognitive scientist at Azabu University and member of the 2022 study. She wondered: Are cats “hard-wired” to learn human language?

To find out, Takagi and some of her former teammates gave 31 adult pet cats—including 23 that were up for adoption at cat cafés—a type of word test designed for human babies. The scientists propped each kitty in front of a laptop and showed the animals two 9-second animated cartoon images while broadcasting audio tracks of their caregivers saying a made-up word four times. The researchers played the nonsense word “keraru” while a growing and shrinking blue-and-white unicorn appeared on the screen, or “parumo” while a red-faced cartoon Sun grew and shrank. The cats watched and heard these sequences until they got bored—signaled by a 50% drop in eye contact with the screen.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29521 - Posted: 10.19.2024

By Carl Zimmer

After analyzing decades-old videos of captive chimpanzees, scientists have concluded that the animals could utter a human word: “mama.” It’s not exactly the expansive dialogue in this year’s “Kingdom of the Planet of the Apes.” But the finding, published on Thursday in the journal Scientific Reports, may offer some important clues as to how speech evolved. The researchers argue that our common ancestors with chimpanzees had brains already equipped with some of the building blocks needed for talking.

Adriano Lameira, an evolutionary psychologist at the University of Warwick in Britain and one of the authors of the study, said that the ability to speak is perhaps the most important feature that sets us apart from other animals. Talking to each other allowed early humans to cooperate and amass knowledge over generations. “It is the only trait that explains why we’ve been able to change the face of the earth,” Dr. Lameira said. “We would be an unremarkable ape without it.”

Scientists have long wondered why we can speak and other apes cannot. Beginning in the early 1900s, that curiosity led to a series of odd — and cruel — experiments. A few researchers tried raising apes in their own homes to see if living with humans could lead the young animals to speak. In 1947, for example, the psychologist Keith Hayes and his wife, Catherine, adopted an infant chimpanzee. They named her Viki, and, when she was five months old, they started teaching her words. After two years of training, the couple later claimed, Viki could say “papa,” “mama,” “up” and “cup.”

By the 1980s, many scientists had dismissed the experiences of Viki and other adopted apes. For one, separating babies from their mothers was likely traumatic. “It’s not the sort of thing you could fund anymore, and with good reason,” said Axel Ekstrom, a speech scientist at the KTH Royal Institute of Technology in Stockholm. © 2024 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29408 - Posted: 07.27.2024

By Cathleen O’Grady

Human conversations are rapid-fire affairs, with mere milliseconds passing between one person’s utterance and their partner’s response. This speedy turn taking is universal across cultures—but now it turns out that chimpanzees do it, too. By analyzing thousands of gestures from chimpanzees in five different communities in East Africa, researchers found that the animals take turns while communicating, and do so as quickly as we do. The speedy gestural conversations are also seen across chimp communities, just like in humans, the authors report today in Current Biology.

The finding is “very exciting,” says Maël Leroux, an evolutionary biologist at the University of Rennes who was not involved with the work. “Language is the hallmark of our species … and a central feature of language is our ability to take turns.” Finding a similar behavior in our closest living relative, he says, suggests we may have inherited this ability from our shared common ancestor.

When chimps gesture—such as reaching out an arm in a begging gesture—they are most often making a request, says Gal Badihi, an animal communication researcher at the University of St Andrews. This can include things such as “groom me,” “give me,” or “travel with me.” Most of the time, the chimp’s partner does the requested behavior. But sometimes, the second chimp will respond with its own gestures instead—for instance, one chimp requesting grooming, and the other indicating where they would like to be groomed, essentially saying “groom me first.”

To figure out whether these interactions resemble human turn taking, Badihi and colleagues combed through hundreds of hours of footage from a massive database of chimpanzee gestural interactions recorded by multiple researchers across decades of fieldwork in East Africa. The scientists studied the footage, describing the precise movements each chimp made when gesturing, the response of other chimps, the duration of the gestures, and other details. © 2024 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29403 - Posted: 07.23.2024

By Sara Reardon

By eavesdropping on the brains of living people, scientists have created the highest-resolution map yet of the neurons that encode the meanings of various words. The results hint that, across individuals, the brain uses the same standard categories to classify words — helping us to turn sound into sense. The study is based on words only in English. But it’s a step along the way to working out how the brain stores words in its language library, says neurosurgeon Ziv Williams at the Massachusetts Institute of Technology in Cambridge. By mapping the overlapping sets of brain cells that respond to various words, he says, “we can try to start building a thesaurus of meaning”.

The brain area called the auditory cortex processes the sound of a word as it enters the ear. But it is the brain’s prefrontal cortex, a region where higher-order brain activity takes place, that works out a word’s ‘semantic meaning’ — its essence or gist. Previous research has studied this process by analysing images of blood flow in the brain, which is a proxy for brain activity. This method allowed researchers to map word meaning to small regions of the brain.

But Williams and his colleagues found a unique opportunity to look at how individual neurons encode language in real time. His group recruited ten people about to undergo surgery for epilepsy, each of whom had had electrodes implanted in their brains to determine the source of their seizures. The electrodes allowed the researchers to record activity from around 300 neurons in each person’s prefrontal cortex. © 2024 Springer Nature Limited

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29383 - Posted: 07.06.2024