Links for Keyword: Language
Follow us on Facebook or subscribe to our mailing list, to receive news updates. Learn more.
By John Pavlus Even in a world where large language models (LLMs) and AI chatbots are commonplace, it can be hard to fully accept that fluent writing can come from an unthinking machine. That’s because, to many of us, finding the right words is a crucial part of thought — not the outcome of some separate process. But what if our neurobiological reality includes a system that behaves something like an LLM? Long before the rise of ChatGPT, the cognitive neuroscientist Ev Fedorenko (opens a new tab) began studying how language works in the adult human brain. The specialized system she has described, which she calls “the language network,” maps the correspondences between words and their meanings. Her research suggests that, in some ways, we do carry around a biological version of an LLM — that is, a mindless language processor — inside our own brains. “You can think of the language network as a set of pointers,” Fedorenko said. “It’s like a map, and it tells you where in the brain you can find different kinds of meaning. It’s basically a glorified parser that helps us put the pieces together — and then all the thinking and interesting stuff happens outside of [its] boundaries.” Fedorenko has been gathering biological evidence of this language network for the past 15 years in her lab at the Massachusetts Institute of Technology. Unlike a large language model, the human language network doesn’t string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (such as speech, writing and sign language) and representations of meaning encoded in other parts of the brain (including episodic memory and social cognition, which LLMs don’t possess). Nor is the human language network particularly large: If all of its tissue were clumped together, it would be about the size of a strawberry (opens a new tab). But when it is damaged, the effect is profound. 
An injured language network can result in forms of aphasia (opens a new tab) in which sophisticated cognition remains intact but trapped within a brain unable to express it or distinguish incoming words from others. © 2025 Simons Foundation
Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 30043 - Posted: 12.06.2025
By Kathryn Hulick Dolphins whistle, humpback whales sing and sperm whales click. Now, a new analysis of sperm whale codas — a unique series of clicks — suggests a previously unrecognized acoustic pattern. The finding, reported November 12 in Open Mind, implies that the whales’ clicking communications might be more complex — and meaningful — than previously realized. But the study faces sharp criticism from marine biologists who argue that these patterns are more likely to be recording artifacts or by-products of alertness rather than language-like signals. For decades, biologists have known that both the number and timing of clicks in a coda matter and can even identify the clan of a sperm whale (Physeter macrocephalus). Sperm whales in the eastern Caribbean Sea off the coast of Dominica, for example, often use a series of two slow and three quick sounds: “click…click… click-click-click.” Relying on artificial intelligence and linguistics analysis, the new study finds that sometimes this series sounds more like “clack…clack… clack-clack-clack,” says Shane Gero, a marine biologist at Project CETI, a Dominica-based nonprofit studying sperm whale communication. Project CETI linguist Gašper Beguš wonders about the meanings a coda might convey. “It sounds really alien,” almost like Morse code, says Beguš, of the University of California, Berkeley. Based on his team’s result, he now speculates that sperm whales might use clicks or clacks “in a similar way as we use our vowels to transmit meaning.” Not everyone agrees with that assessment. The comparison to vowels is “completely nonsense,” says Luke Rendell, a marine biologist at the University of St. Andrews in Scotland who has studied sperm whales for more than 30 years. “There’s no evidence that the animals are responding in any way to this [new pattern].” © Society for Science & the Public 2000–2025
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 30013 - Posted: 11.15.2025
Katie Kavanagh Speaking multiple languages could slow down brain ageing and help to prevent cognitive decline, a study of more than 80,000 people has found. The work, published in Nature Aging on 10 November1, suggests that people who are multilingual are half as likely to show signs of accelerated biological ageing as are those who speak just one language. “We wanted to address one of the most persistent gaps in ageing research, which is if multilingualism can actually delay ageing,” says study co-author Agustín Ibáñez, a neuroscientist at the Adolfo Ibáñez University in Santiago, Chile. Previous research in this area has suggested that speaking multiple languages can improve cognitive functions such memory and attention2, which boosts brain health as we get older. But many of these studies rely on small sample sizes and use unreliable methods of measuring ageing, which leads to results that are inconsistent and not generalizable. “The effects of multilingualism on ageing have always been controversial, but I don’t think there has been a study of this scale before, which seems to demonstrate them quite decisively,” says Christos Pliatsikas, a cognitive neuroscientist at the University of Reading, UK. The paper’s results could “bring a step change to the field”, he adds. They might also “encourage people to go out and try to learn a second language, or keep that second language active”, says Susan Teubner-Rhodes, a cognitive psychologist at Auburn University in Alabama. © 2025 Springer Nature Limited
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 30005 - Posted: 11.12.2025
By Meghie Rodrigues Babies start processing language before they are born, a new study suggests. A research team in Montreal has found that newborns who had heard short stories in foreign languages while in the womb process those languages similarly to their native tongue. The study, published in August in Nature Communications Biology, is the first to use brain imaging to show what neuroscientists and psychologists had long suspected. Previous research had shown that fetuses and newborns can recognize familiar voices and rhythms and even that they prefer their native language soon after birth. But these findings come mostly from behavioral cues—sucking patterns, head turns or heart rate changes—rather than direct evidence from the brain. “We cannot say babies ‘learn’ a language prenatally,” says Anne Gallagher, a neuropsychologist at the University of Montreal and senior author of the study. What we can say, she adds, is that neonates develop familiarity with one or more languages during gestation, which shapes their brain networks at birth. The research team recruited 60 people for the experiment, all of them about 35 weeks into their pregnancy. Of those, 39 exposed their fetuses to 10 minutes of prerecorded stories in French (their native language) and another 10 minutes of the same stories in either Hebrew or German at least once every other day until birth. These languages were chosen because their acoustic and phonological properties are very distinctfrom French and from each other, explains co-lead author Andréanne René, a Ph.D. candidate in clinical neuropsychology at the University of Montreal. The other 21 participants were part of the control group; their fetuses were exposed to French in their natural environments, with no special input. © 2025 SCIENTIFIC AMERICAN
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 29959 - Posted: 10.08.2025
By Carl Zimmer For decades, neuroengineers have dreamed of helping people who have been cut off from the world of language. A disease like amyotrophic lateral sclerosis, or A.L.S., weakens the muscles in the airway. A stroke can kill neurons that normally relay commands for speaking. Perhaps, by implanting electrodes, scientists could instead record the brain’s electric activity and translate that into spoken words. Now a team of researchers has made an important advance toward that goal. Previously they succeeded in decoding the signals produced when people tried to speak. In the new study, published on Thursday in the journal Cell, their computer often made correct guesses when the subjects simply imagined saying words. Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the research, said the result went beyond the merely technological and shed light on the mystery of language. “It’s a fantastic advance,” Dr. Herff said. The new study is the latest result in a long-running clinical trial, called BrainGate2, that has already seen some remarkable successes. One participant, Casey Harrell, now uses his brain-machine interface to hold conversations with his family and friends. In 2023, after A.L.S. had made his voice unintelligible, Mr. Harrell agreed to have electrodes implanted in his brain. Surgeons placed four arrays of tiny needles on the left side, in a patch of tissue called the motor cortex. The region becomes active when the brain creates commands for muscles to produce speech. A computer recorded the electrical activity from the implants as Mr. Harrell attempted to say different words. Over time, with the help of artificial intelligence, the computer accurately predicted almost 6,000 words, with an accuracy of 97.5 percent. It could then synthesize those words using Mr. Harrell’s voice, based on recordings made before he developed A.L.S. © 2025 The New York Times Company
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 15: Language and Lateralization; Chapter 5: The Sensorimotor System
Link ID: 29892 - Posted: 08.16.2025
David Farrier Charles Darwin suggested that humans learned to speak by mimicking birdsong: our ancestors’ first words may have been a kind of interspecies exchange. Perhaps it won’t be long before we join the conversation once again. The race to translate what animals are saying is heating up, with riches as well as a place in history at stake. The Jeremy Coller Foundation has promised $10m to whichever researchers can crack the code. This is a race fuelled by generative AI; large language models can sort through millions of recorded animal vocalisations to find their hidden grammars. Most projects focus on cetaceans because, like us, they learn through vocal imitation and, also like us, they communicate via complex arrangements of sound that appear to have structure and hierarchy. Sperm whales communicate in codas – rapid sequences of clicks, each as brief as 1,000th of a second. Project Ceti (the Cetacean Translation Initiative) is using AI to analyse codas in order to reveal the mysteries of sperm whale speech. There is evidence the animals take turns, use specific clicks to refer to one another, and even have distinct dialects. Ceti has already isolated a click that may be a form of punctuation, and they hope to speak whaleish as soon as 2026. The linguistic barrier between species is already looking porous. Last month, Google released DolphinGemma, an AI program to translate dolphins, trained on 40 years of data. In 2013, scientists using an AI algorithm to sort dolphin communication identified a new click in the animals’ interactions with one another, which they recognised as a sound they had previously trained the pod to associate with sargassum seaweed – the first recorded instance of a word passing from one species into another’s native vocabulary. The prospect of speaking dolphin or whale is irresistible. And it seems that they are just as enthusiastic. 
In November last year, scientists in Alaska recorded an acoustic “conversation” with a humpback whale called Twain, in which they exchanged a call-and-response form known as “whup/throp” with the animal over a 20-minute period. In Florida, a dolphin named Zeus was found to have learned to mimic the vowel sounds, A, E, O, and U. © 2025 Guardian News & Media Limited
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 29821 - Posted: 06.04.2025
Sofia Marie Haley I approach a flock of mountain chickadees feasting on pine nuts. A cacophony of sounds, coming from the many different bird species that rely on the Sierra Nevada’s diverse pine cone crop, fill the crisp mountain air. The strong “chick-a-dee” call sticks out among the bird vocalizations. The chickadees are communicating to each other about food sources – and my approach. Mountain chickadees are a member of the family Paridae, which is known for its complex vocal communication systems and cognitive abilities. Along with my advisers, behavioral ecologists Vladimir Pravosudov and Carrie Branch, I’m studying mountain chickadees at our study site in Sagehen Experimental Forest, outside of Truckee, California, for my doctoral research. I am focusing on how these birds convey a variety of information with their calls. The chilly autumn air on top of the mountain reminds me that it will soon be winter. It is time for the mountain chickadees to leave the socially monogamous partnerships they had while raising their chicks to form larger flocks. Forming social groups is not always simple; young chickadees are joining new flocks, and social dynamics need to be established before the winter storms arrive. I can hear them working this out vocally. There’s an unusual variety of complex calls, with melodic “gargle calls” at the forefront, coming from individuals announcing their dominance over other flock members. Examining and decoding bird calls is becoming an increasingly popular field of study, as scientists like me are discovering that many birds – including mountain chickadees – follow systematic rules to share important information, stringing together syllables like words in a sentence. © 2010–2025, The Conversation US, Inc.
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 29807 - Posted: 05.28.2025
By Erin Wayman Barbara J. King remembers the first time she met Kanzi the bonobo. It was the late 1990s, and the ape was living in a research center in Georgia. King walked in and told Kanzi she had a present. A small, round object created a visible outline in the front pocket of her jeans. Kanzi picked up a board checkered with colorful symbols and pointed to the one meaning “egg” and then to “question.” An egg? No, not an egg. A ball. But “he asked an on-point question, and even an extremely simple conversation was just amazing,” says King, a biological anthropologist at William & Mary in Williamsburg, Va. Born in 1980, Kanzi began learning to communicate with symbols as an infant. He ultimately mastered more than 300 symbols, combined them in novel ways and understood spoken English. Kanzi was arguably the most accomplished among a cohort of “talking” apes that scientists intensely studied to understand the origins of language and to probe the ape mind. He was also the last of his kind. In March, Kanzi died. “It’s not just Kanzi that is gone; it’s this whole field of inquiry,” says comparative psychologist Heidi Lyn of the University of South Alabama in Mobile. Lyn had worked with Kanzi on and off for 30 years. Kanzi’s death offers an opportunity to reflect on what decades of ape-language experiments taught us — and at what cost. A history of ape-language experiments Language — communication marked by using symbols, grammar and syntax — has long been considered among the abilities that make humans unique. And when it comes to delineating the exact boundary separating us from other animals, scientists often turn to our closest living relatives, the great apes. © Society for Science & the Public 2000–2025.
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 29797 - Posted: 05.21.2025
By Mikael Angelo Francisco A comic explains the highs and lows of birdsong Mikael Angelo Francisco is a science journalist and illustrator from the Philippines who enjoys writing about paleontology, biodiversity, environment conservation, and science in pop culture. He has written and edited books about media literacy, Filipino scientists, and science trivia. © 2025 NautilusNext Inc.
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 29794 - Posted: 05.21.2025
By Christa Lesté-Lasserre Can a robot arm wave hello to a cuttlefish—and get a hello back? Could a dolphin’s whistle actually mean “Where are you?” And are monkeys quietly naming each other while we fail to notice? These are just a few of the questions tackled by the finalists for this year’s Dolittle prize, a $100,000 award recognizing early breakthroughs in artificial intelligence (AI)-powered interspecies communication. The winning project—announced today—explores how dolphins use shared, learned whistles that may carry specific meanings—possibly even warning each other about danger, or just expressing confusion. The other contending teams—working with marmosets, cuttlefish, and nightingales—are also pushing the boundaries of what human-animal communication might look like. The prize marks an important milestone in the Coller Dolittle Challenge, a 5-year competition offering up to $10 million to the first team that can achieve genuine two-way communication with animals. “Part of how this initiative was born came from my skepticism,” says Yossi Yovel, a neuroecologist at Tel Aviv University and one of the prize’s organizers. “But we really have much better tools now. So this is the time to revisit a lot of our previous assumptions about two-way communication within the animal’s own world.” Science caught up with the four finalists to hear how close we really are to cracking the animal code. This interview has been edited for clarity and length. Cuttlefish (Sepia officinalis and S. bandensis) lack ears and voices, but they apparently make up for this with a kind of sign language. When shown videos of comrades waving their arms, they wave back.
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 29788 - Posted: 05.17.2025
By Jake Buehler Grunts, barks, screams and pants ring through Taï National Park in Cȏte d’Ivoire. Chimpanzees there combine these different calls like linguistic Legos to relay complex meanings when communicating, researchers report May 9 in Science Advances. Chimps can combine and flexibly rearrange pairs of sounds to convey different ideas or meanings, an ability that investigators have not documented in other nonhuman animals. This system may represent a key evolutionary transition between vocal communication strategies of other animals and the syntax rules that structure human languages. “The difference between human language and how other animals communicate is really about how we combine sounds to form words, and how we combine words to form sentences,” says Cédric Girard-Buttoz, an evolutionary biologist at CNRS in Lyon, France. Chimpanzees (Pan troglodytes) were known to have a particularly complicated vocal repertoire, with about a dozen single sounds that they can combine into hundreds of sequences. But it was unclear if the apes used multiple approaches when combining sounds to make new meanings, like in human language. In 2019 and 2020, Girard-Buttoz and his colleagues recorded 53 different adult chimpanzees living in the Taï forest. In all, the team analyzed over 4,300 sounds and described 16 different “bigrams” — short sequences of two sounds, like a grunt followed by a bark, or a panted hoo followed by a scream. The team then used statistical analyses to map those bigrams to behaviors to reveal some of the bigrams’ meanings. The result? Chimpanzees don’t combine sounds in a single, consistent way. They have at least four different methods — a first seen outside of humans. © Society for Science & the Public 2000–2025
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 29781 - Posted: 05.11.2025
By Rachel Lehmann-Haupt On a brisk January evening this year, I was speeding down I–295 in northeast Florida, under a full moon, to visit my dad’s brain. As I drove past shadowy cypress swamps, sinewy river estuaries, and gaudy-hued billboards of condominiums with waterslides and red umbrellas boasting, “Best place to live in Florida,” I was aware of the strangeness of my visit. Most people pay respects to their loved ones at memorials and grave sites, but I was intensely driven to check in on the last remaining physical part of my dad, immortalized in what seemed like the world’s most macabre library. Michael DeTure, a professor of neuroscience, stepped out of a golf cart to meet me. “Welcome to the bunker. Just 8,000 of your quietest friends in here,” he said in a melodic southern drawl, grinning in a way that told me he’s made this joke before. The bunker is an indiscriminate warehouse, part of the Mayo Clinic’s Jacksonville, Florida campus that houses its brain bank. DeTure opened the warehouse door, and I was met with a blast of cold air. In the back of the warehouse sat rows of buzzing white freezers. DeTure pointed to the freezer where my dad’s brain sat in a drawer in a plastic bag with his name written on it in black Sharpie pen. I welled up with tears and a feeling of intense fear. The room suddenly felt too cold, too sterile, too bright, and my head started to spin. I wanted to run away from this place. And then my brain escaped for me. I saw my dad on a beach on Cape Cod in 1977. He was in a bathing suit, shirtless, lying on a towel. I was 7 years old and snuggled up to him to protect myself from the wind. He was reading aloud to my mom and me from Evelyn Waugh’s novel, A Handful of Dust, whose title is from T.S. Eliot’s poem, “The Wasteland”: “I will show you fear in a handful of dust.” He was reading the part about Tony Last, an English gentleman, being imprisoned by an eccentric recluse who forces him to read Dickens endlessly. 
© 2025 NautilusNext Inc.,
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM:Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29776 - Posted: 05.07.2025
By Michael Erard In many Western societies, parents eagerly await their children’s first words, then celebrate their arrival. There’s also a vast scientific and popular attention to early child language. Yet there is (and was) surprisingly little hullabaloo sparked by the first words and hand signs displayed by great apes. WHAT I LEFT OUT is a recurring feature in which book authors are invited to share anecdotes and narratives that, for whatever reason, did not make it into their final manuscripts. In this installment, author and linguist Michael Erard shares a story that didn’t make it into his recent book “Bye Bye I Love You: The Story of Our First and Last Words” (MIT Press, 344 pages.) As far back as 1916, scientists have been exploring the linguistic abilities of humans’ closest relatives by raising them in language-rich environments. But the first moments in which these animals did cross a communication threshold created relatively little fuss in both the scientific literature and the media. Why? Consider, for example, the first sign by Washoe, a young chimpanzee that was captured in the wild and transported in 1966 to a laboratory at the University of Nevada, where she was studied by two researchers, Allen Gardner and Beatrice Gardner. Washoe was taught American Sign Language in family-like settings that would be conducive to communicative situations. “Her human companions,” wrote the Gardners in 1969, “were to be friends and playmates as well as providers and protectors, and they were to introduce a great many games and activities that would be likely to result in maximum interaction.” When the Gardners wrote about the experiments, they did note her first uses of specific signs, such as “toothbrush,” that didn’t seem to echo a sign a human had just used. These moments weren’t ignored, yet you have to pay very close attention to their writings to find the slightest awe or enthusiasm. Fireworks it is not.
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 29753 - Posted: 04.23.2025
By Carl Zimmer After listening to hundreds of hours of ape calls, a team of scientists say they have detected a hallmark of human language: the ability to put together strings of sounds to create new meanings. The provocative finding, published Thursday in the journal Science, drew praise from some scholars and skepticism from others. Federica Amici, a primatologist at the University of Leipzig in Germany, said that the study helped place the roots of language even further back in time, to millions of years before the emergence of our species. “Differences between humans and other primates, including in communication, are far less distinct and well-defined than we have long assumed,” Dr. Amici said. But other researchers said that the study, which had been conducted on bonobos, close relatives of chimpanzees, had little to reveal about how we use words. “The present findings don’t tell us anything about the evolution of language,” said Johan Bolhuis, a neurobiologist at Utrecht University in the Netherlands. Many species can communicate with sounds. But when an animal makes a sound, it typically means just one thing. Monkeys, for instance, can make one warning call in reference to a leopard and a different one for an incoming eagle flying. In contrast, we humans can string words together in ways that combine their individual meanings into something new. Suppose I say, “I am a bad dancer.” When I combine the words “bad” and “dancer,” I no longer mean them independently; I’m not saying, “I am a bad person who also happens to dance.” Instead, I mean that I don’t dance well. Linguists call this compositionality, and have long considered it an essential ingredient of language. “It’s the force behind language’s creativity and productivity,” said Simon Townsend, a comparative psychologist at the University of Zurich in Switzerland. “Theoretically, you can come up with any phrase that has never been uttered before.” © 2025 The New York Times Company
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Lateralization
Link ID: 29730 - Posted: 04.05.2025
Miryam Naddaf A brain-reading implant that translates neural signals into audible speech has allowed a woman with paralysis to hear what she intends to say nearly instantly. Researchers enhanced the device — known as a brain–computer interface (BCI) — with artificial intelligence (AI) algorithms that decoded sentences as the woman thought of them, and then spoke them out loud using a synthetic voice. Unlike previous efforts, which could produce sounds only after users finished an entire sentence, the current approach can simultaneously detect words and turn them into speech within 3 seconds. The findings, published in Nature Neuroscience on 31 March1, represent a big step towards BCIs that are of practical use. Older speech-generating BCIs are similar to “a WhatsApp conversation”, says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands, who was not involved with the work. “I write a sentence, you write a sentence and you need some time to write a sentence again,” he says. “It just doesn’t flow like a normal conversation.” BCIs that stream speech in real time are “the next level” in research because they allow users to convey the tone and emphasis that are characteristic of natural speech, he adds. The study participant, Ann, lost her ability to speak after a stroke in her brainstem in 2005. Some 18 years later, she underwent a surgery to place a paper-thin rectangle containing 253 electrodes on the surface of her brain cortex. The implant can record the combined activity of thousands of neurons at the same time. Researchers personalized the synthetic voice to sound like Ann’s own voice from before her injury, by training AI algorithms on recordings from her wedding video. During the latest study, Ann silently mouthed 100 sentences from a set of 1,024 words and 50 phrases that appeared on a screen. 
The BCI device captured her neural signals every 80 milliseconds, starting 500 milliseconds before Ann started to silently say the sentences. It produced between 47 and 90 words per minute (natural conversation happens at around 160 words per minute).
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM:Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29726 - Posted: 04.02.2025
Nell Greenfieldboyce Putting the uniquely human version of a certain gene into mice changed the way that those animals vocalized to each other, suggesting that this gene may play a role in speech and language. Mice make a lot of calls in the ultrasonic range that humans can't hear, and the high-frequency vocalizations made by the genetically altered mice were more complex and showed more variation than those made by normal mice, according to a new study in the journal Nature Communications. The fact that the genetic change produced differences in vocal behavior was "really exciting," says Erich Jarvis, a scientist at Rockefeller University in New York who worked on this research. Still, he cautioned, "I don't think that one gene is going to be responsible — poof! — and you've got spoken language." For years, scientists have been trying to find the different genes that may have been involved in the evolution of speech, as language is one of the key features that sets humans apart from the rest of the animal kingdom. "There are other genes implicated in language that have not been human-specific," says Robert Darnell, a neuroscientist and physician at Rockefeller University, noting that one gene called FOXP2 has been linked to speech disorders. He was interested in a different gene called NOVA1, which he has studied for over two decades. NOVA1 is active in the brain, where it produces a protein that can affect the activity of other genes. NOVA1 is found in living creatures from mammals to birds, but humans have a unique variant. Yoko Tajima, a postdoctoral associate in Darnell's lab, led an effort to put this variant into mice, to see what effect it would have. © 2025 npr
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29678 - Posted: 02.19.2025
By Emily Anthes The English language is full of wonderful words, from “anemone” and “aurora” to “zenith” and “zodiac.” But these are special occasion words, sprinkled sparingly into writing and conversation. The words in heaviest rotation are short and mundane. And they follow a remarkable statistical rule, which is universal across human languages: The most common word, which in English is “the,” is used about twice as frequently as the second most common word (“of,” in English), three times as frequently as the third most common word (“and”), continuing in that pattern. Now, an international, interdisciplinary team of scientists has found that the intricate songs of humpback whales, which can spread rapidly from one population to another, follow the same rule, which is known as Zipf’s law. The scientists are careful to note that whale song is not equivalent to human language. But the findings, they argue, suggest that forms of vocal communication that are complex and culturally transmitted may have shared structural properties. “We expect them to evolve to be easy to learn,” said Simon Kirby, an expert on language evolution at the University of Edinburgh and an author of the new study. The results were published on Thursday in the journal Science. “We think of language as this culturally evolving system that has to essentially be passed on by its hosts, which are humans,” Dr. Kirby added. “What’s so gratifying for me is to see that same logic seems to also potentially apply to whale song.” Zipf’s law, which was named for the linguist George Kingsley Zipf, holds that in any given language the frequency of a word is inversely proportional to its rank. There is still considerable debate over why this pattern exists and how meaningful it is. But some research suggests that this kind of skewed word distribution can make language easier to learn. © 2025 The New York Times Company
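The rank-frequency relation described above can be sketched in a few lines of Python. The toy corpus below is purely illustrative (constructed to follow the law exactly, unlike real text, which only approximates it):

```python
from collections import Counter

def rank_frequencies(words):
    """Count word frequencies and return them ordered by rank, most common first."""
    counts = Counter(words)
    return [freq for _, freq in counts.most_common()]

# Under Zipf's law, freq(rank) is roughly freq(1) / rank: the top word appears
# about twice as often as the second, three times as often as the third, etc.
corpus = ["the"] * 60 + ["of"] * 30 + ["and"] * 20 + ["to"] * 15
freqs = rank_frequencies(corpus)
for rank, freq in enumerate(freqs, start=1):
    print(rank, freq, freqs[0] / rank)
```

On real corpora (or whale-song units), the fit is checked statistically rather than exactly, typically by regressing log-frequency against log-rank.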
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29662 - Posted: 02.08.2025
By Avery Schuyler Nunn Migratory songbirds may talk to one another more than we thought as they wing through the night. Each fall, hundreds of millions of birds from dozens of species co-migrate, some of them making dangerous journeys across continents. Come spring, they return home. Scientists have long believed that these songbirds rely on instinct and experience alone to make the trek. But new research from a team of ornithologists at the University of Illinois suggests they may help one another out—even across species—through their nocturnal calls. “They broadcast vocal pings into the sky, potentially sharing information about who they are and what lies ahead,” says ornithologist Benjamin Van Doren of the University of Illinois, Urbana-Champaign and a co-author of the study, published in Current Biology. Using ground-based microphones across 26 sites in eastern North America, Van Doren and his team recorded over 18,300 hours of nocturnal flight calls from 27 different species of birds—brief, high-pitched vocalizations that some warblers, thrushes, and sparrows emit while flying. To process the enormous dataset of calls, they used machine-learning tools, including a customized version of Merlin, the Cornell Lab of Ornithology’s bird-call identification app. The analysis revealed that birds of different species were flying in close proximity and calling to one another in repeated patterns that suggested a kind of code. Flight proximity was closest between migrating songbird species that made similar calls in pitch and rhythm, traveled at similar speeds, and had similar wing shapes. © 2025 NautilusNext Inc.
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29661 - Posted: 02.08.2025
By Janna Levin It’s fair to say that enjoyment of a podcast would be severely limited without the human capacity to create and understand speech. That capacity has often been cited as a defining characteristic of our species, and one that sets us apart in the long history of life on Earth. Yet we know that other species communicate in complex ways. Studies of the neurological foundations of language suggest that birdsong, or communication among bats or elephants, originates with brain structures similar to our own. So why do some species vocalize while others don’t? In this episode, Erich Jarvis, who studies behavior and neurogenetics at the Rockefeller University, chats with Janna Levin about the surprising connections between human speech, birdsong and dance. JANNA LEVIN: All animals exhibit some form of communication, from the primitive hiss of a lizard to the complex gestures natural to chimps, or the songs shared by whales. But human language does seem exceptional, a vast and discrete cognitive leap. Yet recent research is finding surprising neurological connections between our expressive speech and the types of communication innate to other animals, giving us new ideas about the biological and developmental origins of language. Erich is a professor at the Rockefeller University and a Howard Hughes Medical Institute investigator. At Rockefeller, he directs the Field Research Center of Ethology and Ecology. He also directs the Neurogenetics Lab of Language and codirects the Vertebrate Genome Lab, where he studies song-learning birds and other species to gain insight into the mechanisms underlying language and vocal learning. ERICH JARVIS: So, the first part: Language is built-in genetically in us humans. We’re born with the capacity to learn how to produce and how to understand language, and pass it on culturally from one generation to the next. The actual detail is learned, but the actual plan in the brain is there. 
Second part of your question: Is it, you know, special or unique to humans? It is specialized in humans, but certainly many components of what gives rise to language are not unique to humans. There’s a spectrum of abilities out there in other species, and we share some aspects of that spectrum with them. © 2024 Simons Foundation
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 29572 - Posted: 11.23.2024
Nicola Davis Science correspondent Whether it is news headlines or WhatsApp messages, modern humans are inundated with short pieces of text. Now researchers say they have unpicked how we get their gist in a single glance. Prof Liina Pylkkanen, co-author of the study from New York University, said most theories of language processing assume words are understood one by one, in sequence, before being combined to yield the meaning of the whole sentence. “From this perspective, at-a-glance language processing really shouldn’t work since there’s just not enough time for all the sequential processing of words and their combination into a larger representation,” she said. However, the research offers fresh insights, revealing we can detect certain sentence structures in as little as 125 milliseconds (ms) – a timeframe similar to the blink of an eye. Pylkkanen said: “We don’t yet know exactly how this ultrafast structure detection is possible, but the general hypothesis is that when something you perceive fits really well with what you know about – in this case, we’re talking about knowledge of the grammar – this top-down knowledge can help you identify the stimulus really fast. “So just like your own car is quickly identifiable in a parking lot, certain language structures are quickly identifiable and can then give rise to a rapid effect of syntax in the brain.” The team say the findings suggest parallels with the way in which we perceive visual scenes, with Pylkkanen noting the results could have practical uses for the designers of digital media, as well as advertisers and designers of road signs. Writing in the journal Science Advances, Pylkkanen and colleagues report how they used a non-invasive scanning device to measure the brain activity of 36 participants. © 2024 Guardian News & Media Limited
Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 29527 - Posted: 10.26.2024



