Chapter 15. Language and Lateralization




By Natalia Mesa A region of the cerebellum shows language specificity akin to that of cortical language regions, indicating that it might be part of the broader language network, according to a new brain-imaging study. “This is the first time we see an area outside of the core left-hemisphere language areas that behaves so similarly to those core areas,” says study investigator Ev Fedorenko, associate professor of brain and cognitive sciences at the Massachusetts Institute of Technology. Initially thought to coordinate only movement, the cerebellum also contributes to cognitive processes, such as social reward, abstract reasoning and working memory, according to studies from the past decade. But despite the fact that people with cerebellar lesions have subtle language struggles, the region’s contributions to that skill have been ignored until recently, Fedorenko says. With this new work, “I think it becomes harder to dismiss language responses as somehow artifactual.” Fedorenko and her team analyzed nearly 1,700 whole-brain functional MRI experiments conducted over the course of 15 years. They originally collected and analyzed those scans to identify language-selective regions of the neocortex, but they reanalyzed many of them to determine the cerebellum’s role in linguistic processing. Four cerebellar regions activated robustly when participants performed language-related tasks, such as reading passages of text or listening to someone else reading the passages aloud, in line with previous work. But only one region responded exclusively to these language-related tasks; it did not activate during a variety of nonlinguistic tasks—including movement, arithmetic tasks and a spatial working memory task—or when participants listened to music or watched videos of faces and bodies. The findings were published last month in Neuron. © 2026 Simons Foundation

Keyword: Language
Link ID: 30110 - Posted: 02.07.2026

By Laura Sanders The brain’s “little brain” may hold big promise for people with language trouble. Tucked into the base of the brain, the fist-sized cerebellum is best known for its role in movement, posture and coordination. A new study maps the language system in this out-of-the-way place. These results, published January 22 in Neuron, uncover a spot in the cerebellum that shows strong and selective activity for language. The new study is “excellent,” says neurologist and cerebellum researcher Jeremy Schmahmann of Massachusetts General Hospital and Harvard Medical School in Boston. His work and that of others have shown that the cerebellum contributes to language and to thinking more generally. The new research scrutinized the cerebellum in detail, “confirming and extending previous observations and contributing to our understanding” of the cerebellum’s activity, he says. Neuroscientist Colton Casto combed through about 15 years of brain-scanning data collected by study coauthor Evelina Fedorenko, a cognitive neuroscientist at MIT, and her colleagues. Put together, the scans of 846 people showed brain activity in four spots on the right side of the cerebellum as people read or listened to a story. Three of these spots were also active when people did other things, such as working out a math problem, listening to music or watching a movie without words. But one spot was more discerning, says Casto, of MIT and Harvard University. This region didn’t respond to nonverbal movies or math. It also ignored orchestral or jazz music, which, like language, relies on syntax and patterns of sound. Instead, this spot is attuned specifically to words. “You have to be reading or listening to language to fully recruit this region,” Casto says. © Society for Science & the Public 2000–2026.
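The selectivity logic in the two cerebellum studies above can be sketched as a simple contrast: a region counts as language-selective only if its response to language tasks clearly exceeds its response to everything else. The activation values, units and threshold below are invented for illustration; this is not the studies' actual analysis pipeline.

```python
# Toy sketch of a selectivity contrast: a region is "language-selective"
# if its mean response to language tasks greatly exceeds its mean
# response to non-language tasks. All values are made up.

def selectivity_index(language_resp, other_resp):
    """Difference of mean responses, in arbitrary activation units."""
    mean_lang = sum(language_resp) / len(language_resp)
    mean_other = sum(other_resp) / len(other_resp)
    return mean_lang - mean_other

# Hypothetical mean activations per task type (arbitrary units)
region_a = selectivity_index([1.2, 1.1, 1.3], [0.9, 1.0, 1.1])  # responds to everything
region_b = selectivity_index([1.2, 1.1, 1.3], [0.1, 0.0, 0.2])  # responds only to language

THRESHOLD = 0.5  # arbitrary cutoff for this toy example
print(region_a > THRESHOLD)  # False: active during language, but not selective
print(region_b > THRESHOLD)  # True: language-selective
```

Under this toy criterion, three of the four cerebellar spots would behave like `region_a`, and the one word-specific spot like `region_b`.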

Keyword: Language
Link ID: 30091 - Posted: 01.24.2026

Nell Greenfieldboyce If you've ever had to spell out words like W-A-L-K or T-R-E-A-T around a dog, you know that some dogs listen in to humans' chitchat and can pick out certain key words. Well, it turns out that some genius dogs can learn a brand new word, like the name of an unfamiliar toy, by just overhearing brief interactions between two people. What's more, these "gifted" dogs can learn the name of a new toy even if they first hear this word when the toy is out of sight — as long as their favorite human is looking at the spot where the toy is hidden. That's according to a new study in the journal Science. "What we found in this study is that the dogs are using social communication. They're using these social cues to understand what the owners are talking about," says cognitive scientist Shany Dror of Eötvös Loránd University and the University of Veterinary Medicine, Vienna. "This tells us that the ability to use social information is actually something that humans probably had before they had language," she says, "and language was kind of hitchhiking on these social abilities." Fetch the ball — or the frisbee? © 2026 npr

Keyword: Language; Evolution
Link ID: 30075 - Posted: 01.10.2026

Luiz Pessoa When thousands of starlings swoop and swirl in the evening sky, creating patterns called murmurations, no single bird is choreographing this aerial ballet. Each bird follows simple rules of interaction with its closest neighbours, yet out of these local interactions emerges a complex, coordinated dance that can respond swiftly to predators and environmental changes. This same principle of emergence – where sophisticated behaviours arise not from central control but from the interactions themselves – appears across nature and human society. Consider how market prices emerge from countless individual trading decisions, none of which alone contains the ‘right’ price. Each trader acts on partial information and personal strategies, yet their collective interaction produces a dynamic system that integrates information from across the globe. Human language evolves through a similar process of emergence. No individual or committee decides that ‘LOL’ should enter common usage or that the meaning of ‘cool’ should expand beyond temperature (even in French-speaking countries). Instead, these changes result from millions of daily linguistic interactions, with new patterns of speech bubbling up from the collective behaviour of speakers. These examples highlight a key characteristic of highly interconnected systems: the rich interplay of constituent parts generates properties that defy reductive analysis. This principle of emergence, evident across seemingly unrelated fields, provides a powerful lens for examining one of our era’s most elusive mysteries: how the brain works. The core idea of emergence inspired me to develop the concept I call the entangled brain: the need to understand the brain as an interactionally complex system where functions emerge from distributed, overlapping networks of regions rather than being localised to specific areas. 
Though the framework described here is still a minority view in neuroscience, we’re witnessing a gradual paradigm transition (rather than a revolution), with increasing numbers of researchers acknowledging the limitations of more traditional ways of thinking. © Aeon Media Group Ltd. 2012-2026.

Keyword: Consciousness; Learning & Memory
Link ID: 30066 - Posted: 01.03.2026

By John Pavlus Even in a world where large language models (LLMs) and AI chatbots are commonplace, it can be hard to fully accept that fluent writing can come from an unthinking machine. That’s because, to many of us, finding the right words is a crucial part of thought — not the outcome of some separate process. But what if our neurobiological reality includes a system that behaves something like an LLM? Long before the rise of ChatGPT, the cognitive neuroscientist Ev Fedorenko began studying how language works in the adult human brain. The specialized system she has described, which she calls “the language network,” maps the correspondences between words and their meanings. Her research suggests that, in some ways, we do carry around a biological version of an LLM — that is, a mindless language processor — inside our own brains. “You can think of the language network as a set of pointers,” Fedorenko said. “It’s like a map, and it tells you where in the brain you can find different kinds of meaning. It’s basically a glorified parser that helps us put the pieces together — and then all the thinking and interesting stuff happens outside of [its] boundaries.” Fedorenko has been gathering biological evidence of this language network for the past 15 years in her lab at the Massachusetts Institute of Technology. Unlike a large language model, the human language network doesn’t string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (such as speech, writing and sign language) and representations of meaning encoded in other parts of the brain (including episodic memory and social cognition, which LLMs don’t possess). Nor is the human language network particularly large: If all of its tissue were clumped together, it would be about the size of a strawberry. But when it is damaged, the effect is profound. 
An injured language network can result in forms of aphasia in which sophisticated cognition remains intact but trapped within a brain that can neither express it nor decode the words coming in from others. © 2025 Simons Foundation

Keyword: Language
Link ID: 30043 - Posted: 12.06.2025

Liam Drew Paradromics, a neurotechnology developer, announced today that the US Food and Drug Administration (FDA) has approved a first long-term clinical trial of its brain–computer interface (BCI). Early next year, the company — one of the closest rivals to Elon Musk’s neurotechnology firm Neuralink — will implant its device in two volunteers who were left unable to speak owing to neurological diseases and injuries. The trial has two goals: to ensure the device is safe, and to restore a person’s ability to communicate with real-time speech. “We’re very excited about bringing this new hardware into a trial,” says Matt Angle, chief executive of Paradromics, which is based in Austin, Texas. Paradromics’ BCI is an array, roughly 7.5 millimetres in diameter, of thin, stiff platinum–iridium electrodes that penetrate the surface of the cerebral cortex to record from individual neurons around 1.5 mm deep. The array is connected by wire to a power source and wireless transceiver implanted in the individual’s chest. Initially, the two volunteers will each have one electrode array implanted in the area of the motor cortex that controls the lips, tongue and larynx, Angle says. Neural activity will then be recorded from this region as the study participants imagine speaking sentences that are presented to them. Following previous work by researchers who are now collaborating with Paradromics1, the system learns which patterns of neural activity correspond to each intended speech sound. When participants imagine speaking, these neural patterns will be converted into text on a screen for participants to approve, or into a real-time voice output based on old recordings of the participants’ own voices. This is the first BCI clinical trial to formally target synthetic-voice generation. “Arguably, the greatest quality of life change you can deliver right now with BCI is communication,” Angle says. © 2025 Springer Nature Limited

Keyword: Robotics
Link ID: 30019 - Posted: 11.22.2025

By Kate Graham-Shaw A long time ago in a galaxy far, far away, R2-D2 beeped and booped—and now birds that copy the Star Wars character are giving scientists fresh insight into how different species imitate complex sounds. A study, published recently in Scientific Reports, analyzed the sounds of nine species of parrots, including budgies, as well as European starlings to see how accurately each bird mimicked R2-D2’s robotic whirring. Researchers ran acoustic analyses on samples, already available online, of birds imitating the plucky droid to compare how statistically similar each bird’s noises were to a model of R2-D2’s sounds. The starlings, a type of songbird, emerged as star vocalists: their ability to produce “multiphonic” noises—in their case, two different notes or tones expressed simultaneously—allowed them to replicate R2-D2’s complex chirps more accurately. The parrots, including the budgies, which produce only “monophonic” (or single-tone) noises, imitated the droid’s sounds with less accuracy and musicality. The differing abilities stem from physical variations in the birds’ “syrinx”—a unique vocal organ that sits at the base of the avian windpipe. “Starlings can produce two sounds at once because they control both sides of the syrinx independently,” says study co-author Nick Dam, an evolutionary biologist at Leiden University in the Netherlands. “Parrots are physically incapable of producing two tones simultaneously.” It isn’t known exactly why different species developed differing control over their syrinx. “Likely, some ancestor of songbirds happened to evolve the ability to control the muscles on both sides of the syrinx, and this helped them in some way,” says University of Northern Colorado biologist Lauryn Benedict, who wasn’t involved in the study but sometimes works with its authors. One of the leading explanations involves mating: the better a male songbird is at singing, the more females he attracts. © 2025 SCIENTIFIC AMERICAN

Keyword: Animal Communication; Language
Link ID: 30017 - Posted: 11.19.2025

By Kathryn Hulick Dolphins whistle, humpback whales sing and sperm whales click. Now, a new analysis of sperm whale codas — a unique series of clicks — suggests a previously unrecognized acoustic pattern. The finding, reported November 12 in Open Mind, implies that the whales’ clicking communications might be more complex — and meaningful — than previously realized. But the study faces sharp criticism from marine biologists who argue that these patterns are more likely to be recording artifacts or by-products of alertness rather than language-like signals. For decades, biologists have known that both the number and timing of clicks in a coda matter and can even identify the clan of a sperm whale (Physeter macrocephalus). Sperm whales in the eastern Caribbean Sea off the coast of Dominica, for example, often use a series of two slow and three quick sounds: “click…click… click-click-click.” Relying on artificial intelligence and linguistic analysis, the new study finds that sometimes this series sounds more like “clack…clack… clack-clack-clack,” says Shane Gero, a marine biologist at Project CETI, a Dominica-based nonprofit studying sperm whale communication. Project CETI linguist Gašper Beguš wonders about the meanings a coda might convey. “It sounds really alien,” almost like Morse code, says Beguš, of the University of California, Berkeley. Based on his team’s result, he now speculates that sperm whales might use clicks or clacks “in a similar way as we use our vowels to transmit meaning.” Not everyone agrees with that assessment. The comparison to vowels is “completely nonsense,” says Luke Rendell, a marine biologist at the University of St. Andrews in Scotland who has studied sperm whales for more than 30 years. “There’s no evidence that the animals are responding in any way to this [new pattern].” © Society for Science & the Public 2000–2025
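Because a coda like the one quoted above is defined by the timing between clicks, it can be represented as a sequence of inter-click intervals. The click times and the slow/quick cutoff below are invented for illustration; they are not measurements from the study.

```python
# Toy sketch of a sperm-whale coda as inter-click intervals (ICIs).
# A "2 slow + 3 quick" coda has long gaps after the first two clicks
# and short gaps between the last three. Numbers are invented.

def inter_click_intervals(click_times):
    """Seconds between consecutive clicks."""
    return [b - a for a, b in zip(click_times, click_times[1:])]

def rhythm(click_times, slow_cutoff=0.3):
    """Label each gap 'slow' or 'quick' relative to an arbitrary cutoff."""
    return ["slow" if ici >= slow_cutoff else "quick"
            for ici in inter_click_intervals(click_times)]

# Hypothetical coda: click...click... click-click-click (5 clicks, 4 gaps)
coda = [0.0, 0.5, 1.0, 1.1, 1.2]
print(rhythm(coda))  # ['slow', 'slow', 'quick', 'quick']
```

The click/clack distinction reported in the study is a property of the sound of each click itself, on top of this timing pattern, so a fuller representation would pair each click with an acoustic label as well as an interval.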

Keyword: Language; Animal Communication
Link ID: 30013 - Posted: 11.15.2025

Katie Kavanagh Speaking multiple languages could slow down brain ageing and help to prevent cognitive decline, a study of more than 80,000 people has found. The work, published in Nature Aging on 10 November1, suggests that people who are multilingual are half as likely to show signs of accelerated biological ageing as are those who speak just one language. “We wanted to address one of the most persistent gaps in ageing research, which is if multilingualism can actually delay ageing,” says study co-author Agustín Ibáñez, a neuroscientist at the Adolfo Ibáñez University in Santiago, Chile. Previous research in this area has suggested that speaking multiple languages can improve cognitive functions such as memory and attention2, which boosts brain health as we get older. But many of these studies rely on small sample sizes and use unreliable methods of measuring ageing, which leads to results that are inconsistent and not generalizable. “The effects of multilingualism on ageing have always been controversial, but I don’t think there has been a study of this scale before, which seems to demonstrate them quite decisively,” says Christos Pliatsikas, a cognitive neuroscientist at the University of Reading, UK. The paper’s results could “bring a step change to the field”, he adds. They might also “encourage people to go out and try to learn a second language, or keep that second language active”, says Susan Teubner-Rhodes, a cognitive psychologist at Auburn University in Alabama. © 2025 Springer Nature Limited

Keyword: Language; Alzheimers
Link ID: 30005 - Posted: 11.12.2025

By Meghie Rodrigues Babies start processing language before they are born, a new study suggests. A research team in Montreal has found that newborns who had heard short stories in foreign languages while in the womb process those languages similarly to their native tongue. The study, published in August in Nature Communications Biology, is the first to use brain imaging to show what neuroscientists and psychologists had long suspected. Previous research had shown that fetuses and newborns can recognize familiar voices and rhythms and even that they prefer their native language soon after birth. But these findings come mostly from behavioral cues—sucking patterns, head turns or heart rate changes—rather than direct evidence from the brain. “We cannot say babies ‘learn’ a language prenatally,” says Anne Gallagher, a neuropsychologist at the University of Montreal and senior author of the study. What we can say, she adds, is that neonates develop familiarity with one or more languages during gestation, which shapes their brain networks at birth. The research team recruited 60 people for the experiment, all of them about 35 weeks into their pregnancy. Of those, 39 exposed their fetuses to 10 minutes of prerecorded stories in French (their native language) and another 10 minutes of the same stories in either Hebrew or German at least once every other day until birth. These languages were chosen because their acoustic and phonological properties are very distinct from French and from each other, explains co-lead author Andréanne René, a Ph.D. candidate in clinical neuropsychology at the University of Montreal. The other 21 participants were part of the control group; their fetuses were exposed to French in their natural environments, with no special input. © 2025 SCIENTIFIC AMERICAN

Keyword: Language; Development of the Brain
Link ID: 29959 - Posted: 10.08.2025

By Keith Schneider Jane Goodall, one of the world’s most revered conservationists, who earned scientific stature and global celebrity by chronicling the distinctive behavior of wild chimpanzees in East Africa — primates that made and used tools, ate meat, held rain dances and engaged in organized warfare — died on Wednesday in Los Angeles. She was 91. Her death, while on a speaking tour, was confirmed by the Jane Goodall Institute, whose U.S. headquarters are in Washington, D.C. When not traveling widely, she lived in Bournemouth, on the south coast of England, in her childhood home. Dr. Goodall was 29 in the summer of 1963 when National Geographic magazine published her 7,500-word, 37-page account of the lives of primates she had observed in the Gombe Stream Chimpanzee Reserve in what is now Tanzania. The National Geographic Society had been financially supporting her field studies there. The article, with photographs by Hugo van Lawick, a Dutch wildlife photographer whom she later married, also described Dr. Goodall’s struggles to overcome disease, predators and frustration as she tried to get close to the chimps, working from a primitive research station along the eastern shore of Lake Tanganyika. On the scientific merits alone, her discoveries about how wild chimpanzees raised their young, established leadership, socialized and communicated broke new ground and attracted immense attention and respect among researchers. Stephen Jay Gould, the evolutionary biologist and science historian, said her work with chimpanzees “represents one of the Western world’s great scientific achievements.” On learning of Dr. Goodall’s documented evidence that humans were not the only creatures capable of making and using tools, Louis Leakey, the paleoanthropologist and Dr. Goodall’s mentor, famously remarked, “Now we must redefine ‘tool,’ redefine ‘man,’ or accept chimpanzees as humans.” © 2025 The New York Times Company

Keyword: Evolution; Animal Communication
Link ID: 29953 - Posted: 10.04.2025

By Catherine Offord As the National Football League’s (NFL’s) latest season gets underway, so, too, does the conversation about the risk of serious brain damage to its athletes. Multiple well-publicized studies in recent years have linked repetitive head impacts typical in football and other contact sports to an increased likelihood of chronic traumatic encephalopathy (CTE), a neurodegenerative condition characterized by a buildup of misfolded proteins in the brain. Now, a leading CTE research group reports evidence that regular sports-related impacts could cause brain damage before the condition’s hallmark features appear. An analysis of postmortem brain tissue from athletes and nonathletes who died before their early 50s, published today in Nature, identifies multiple cellular differences between the groups, regardless of whether CTE was present. The findings support the idea that contact sports are associated with specific cellular changes in the brain. The study also “helps us understand, or at least ask new questions about, the mechanisms that bridge that acute exposure to later neurodegeneration,” says Gil Rabinovici, a neurologist and researcher at the University of California San Francisco who was not involved in the work. But not many brains were examined—fewer than 30 for most analyses. And the study doesn’t show that the neuron loss and other brain changes affect a person’s cognitive or mental health, cautions Colin Smith, a neuropathologist at the University of Edinburgh. “What does this mean clinically? … That is still the big question hanging here.” CTE recently hit the headlines again after a shooter killed four people and himself in the New York City building housing NFL’s headquarters this summer. In a note found by police, the former high school football player reportedly said he thought he had CTE, and asked that his brain be studied.

Keyword: Brain Injury/Concussion
Link ID: 29935 - Posted: 09.20.2025

Chris Simms A wearable device could make saying ‘Alexa, what time is it?’ aloud a thing of the past. An artificial intelligence (AI) neural interface called AlterEgo promises to allow users to silently communicate just by internally articulating words. Sitting over the ear, the device facilitates daily life through live communication with the Internet. “It gives you the power of telepathy but only for the thoughts you want to share,” says AlterEgo’s chief executive Arnav Kapur, based in Cambridge, Massachusetts. Kapur unveiled the device on 8 September. The device does not read brain activity, but predicts what a wearer wants to say from signals in muscles used to speak, then sends audio information back into their ear. The researchers say that their non-invasive technology could help people with motor neuron disease (amyotrophic lateral sclerosis; ALS) and multiple sclerosis (MS) who have trouble speaking, but they also want to make the devices commercially available for general use. In a promotional video on the AlterEgo website, Kapur says that “it’s a revolutionary breakthrough with the potential to change the way we interact with our technology, with one another and with the world around us”. “The big question about this is ‘how likely is that potential to be realized?’,” says Howard Chizeck, an electrical and computer engineer at the University of Washington in Seattle. Chizeck says that the technology seems workable and is less of a privacy risk than listening devices such as Amazon’s Alexa are, but isn’t convinced that the device will catch on for commercial use. © 2025 Springer Nature Limited

Keyword: Robotics; Language
Link ID: 29934 - Posted: 09.20.2025

Rachel Fieldhouse Deep in the rainforests of the Democratic Republic of the Congo, Mélissa Berthet found bonobos doing something thought to be uniquely human. During the six months that Berthet observed the primates, they combined calls in several ways to make complex phrases1. In one example, bonobos (Pan paniscus) that were building nests together added a yelp, meaning ‘let’s do this’, to a grunt that says ‘look at me’. “It’s really a way to say: ‘Look at what I’m doing, and let’s do this all together’,” says Berthet, who studies primates and linguistics at the University of Rennes, France. In another case, a peep that means ‘I would like to do this’ was followed by a whistle signalling ‘let’s stay together’. The bonobos combine the two calls in sensitive social contexts, says Berthet. “I think it’s to bring peace.” The study, reported in April, is one of several examples from the past few years that highlight just how sophisticated vocal communication in non-human animals can be. In some species of primate, whale2 and bird, researchers have identified features and patterns of vocalization that have long been considered defining characteristics of human language. These results challenge ideas about what makes human language special — and even how ‘language’ should be defined. Perhaps unsurprisingly, many scientists turn to artificial intelligence (AI) tools to speed up the detection and interpretation of animal sounds, and to probe aspects of communication that human listeners might miss. “It’s doing something that just wasn’t possible through traditional means,” says David Robinson, an AI researcher at the Earth Species Project, a non-profit organization based in Berkeley, California, that is developing AI systems to decode communication across the animal kingdom. As the research advances, there is increasing interest in using AI tools not only to listen in on animal speech, but also to potentially talk back. © 2025 Springer Nature Limited

Keyword: Animal Communication; Language
Link ID: 29931 - Posted: 09.17.2025

By Jake Buehler All eight arms of an octopus can be used for whatever their cephalopod owner wishes, but some arms are favored for certain tasks. A new, detailed analysis of how octopuses wield their famously flexible appendages suggests that all eight arms share a skill set, but the front four spend more time on exploration and the back four on movement. The findings, published September 11 in Scientific Reports, provide a comprehensive accounting of how subtle arm movements coordinate the clever invertebrates’ repertoire of behaviors. Octopuses live their lives through their sucker-lined arms, which make up the bulk of their body mass and contain most of their nervous system. Marine biologist Chelsea Bennice wanted to understand how octopuses use the extreme flexibility of their boneless limbs to move, hunt and investigate their environment. Her colleagues had examined some of these behaviors in laboratory settings, but not in the wild. Bennice and her colleagues watched 25 videos, filmed from 2007 to 2015, of multiple species of wild octopuses in Spain and the Caribbean, cataloging their behaviors and arm movements. In all, the researchers logged nearly 4,000 arm actions, which could be broken down into 12 types, including raising, reaching and grasping. The arms could deform in four distinct ways: elongating, shortening, bending and twisting. The team found that the octopuses were exceptionally ambidextrous. “Octopuses are ultimate multitaskers,” says Bennice, of Florida Atlantic University in Boca Raton. “All arms are capable of all arm behaviors and all arm deformations. They can even use multiple arm actions on a single arm and on several arms at the same time.” © Society for Science & the Public 2000–2025.

Keyword: Laterality; Evolution
Link ID: 29926 - Posted: 09.13.2025

By Rachel E. Gross The first thing Debra McVean did when she woke up at the hospital in March 2024 was try to get to the bathroom. But her left arm wouldn’t move; neither would her left leg. She was paralyzed all along her left side. She had suffered a stroke, her doctor soon explained. A few nights before, a blood clot had lodged in an artery in her neck, choking off oxygen to her brain cells. Now an M.R.I. showed a dark spot in her brain, an eerie absence directly behind her right eye. What that meant for her prognosis, however, the doctor couldn’t say. “Something’s missing there, but you don’t know what,” Ms. McVean’s husband, Ian, recalled recently. “And you don’t know how that will affect her recovery. It’s that uncertainty, it eats away at you.” With a brain injury, unlike a broken bone, there is no clear road to recovery. Nor are there medical tools or therapies to help guide the brain toward healing. All doctors can do is encourage patients to work hard in rehab, and hope. That is why, for decades, the medical attitude toward survivors of brain injury has been largely one of neurological “nihilism,” said Dr. Fernando Testai, a neurologist at the University of Illinois, Chicago, and the editor in chief of the Journal of Stroke and Cerebrovascular Diseases. Stroke, he said, “was often seen as a disease of ‘diagnose and adios.’” That may be about to change. A few days after Ms. McVean woke up in the Foothills Medical Center in Calgary, she was told about a clinical trial for a pill that could help the brain recover from a stroke or traumatic injury, called Maraviroc. Given her level of physical disability, she was a good candidate for the study. She hesitated. The pills were large — horse pills, she called them. But she knew the study could help others, and there was a 50 percent chance that she would get a drug that could help her, too. © 2025 The New York Times Company

Keyword: Stroke; Regeneration
Link ID: 29921 - Posted: 09.06.2025

By Marta Hill Most people flinch when a rat scurries into their path, but not one New York City-based research team: These researchers actively seek out urban rats to study their day-to-day behaviors and interactions. The work is part of a growing trend of neuroscientists studying animals in their natural environments rather than in the lab. “It’s a classic neuroscience model organism, but we don’t really know that much about their natural ecology,” says team member Emily Mackevicius, senior research scientist at Basis Research Institute. The fact that urban rats are ubiquitous presents a convenient opportunity for naturalistic study, adds Ralph Peterson, a postdoctoral fellow at the institute, who is also part of the team. Last year, Peterson, Mackevicius and their colleagues held a series of rat behavior stakeouts around New York City—in the Union Square subway station, in a wooded area of Central Park and on a street corner in Harlem. The team used thermal cameras to track the animals as they foraged in the dark and ultrasonic audio recorders to eavesdrop on rat vocalizations. Rats in the wild vocalize differently than laboratory rats, the team found. For example, lab rats typically emit calls at 22 kilohertz in negative contexts, such as when they sense danger, according to a 2021 review article. By contrast, the city rats used that frequency across more varied scenarios, including while they were foraging. The team posted their results on bioRxiv last month. “This creature that we see out at night all the time, running around, is actually vocalizing all the while, and we can’t hear it,” Peterson says. © 2025 Simons Foundation

Keyword: Animal Communication; Evolution
Link ID: 29893 - Posted: 08.20.2025

By Carl Zimmer
For decades, neuroengineers have dreamed of helping people who have been cut off from the world of language. A disease like amyotrophic lateral sclerosis, or A.L.S., weakens the muscles in the airway. A stroke can kill neurons that normally relay commands for speaking. Perhaps, by implanting electrodes, scientists could instead record the brain’s electric activity and translate that into spoken words.
Now a team of researchers has made an important advance toward that goal. Previously they succeeded in decoding the signals produced when people tried to speak. In the new study, published on Thursday in the journal Cell, their computer often made correct guesses when the subjects simply imagined saying words.
Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the research, said the result went beyond the merely technological and shed light on the mystery of language. “It’s a fantastic advance,” Dr. Herff said.
The new study is the latest result in a long-running clinical trial, called BrainGate2, that has already seen some remarkable successes. One participant, Casey Harrell, now uses his brain-machine interface to hold conversations with his family and friends. In 2023, after A.L.S. had made his voice unintelligible, Mr. Harrell agreed to have electrodes implanted in his brain.
Surgeons placed four arrays of tiny needles on the left side, in a patch of tissue called the motor cortex. The region becomes active when the brain creates commands for muscles to produce speech. A computer recorded the electrical activity from the implants as Mr. Harrell attempted to say different words. Over time, with the help of artificial intelligence, the computer learned to predict almost 6,000 words with an accuracy of 97.5 percent. It could then synthesize those words using Mr. Harrell’s voice, based on recordings made before he developed A.L.S. © 2025 The New York Times Company

Keyword: Language; Robotics
Link ID: 29892 - Posted: 08.16.2025

James Doubek
Researchers have some new evidence about why birds make so much noise early in the morning, and it's not for some of the reasons they previously thought.
For decades, a dominant theory about why birds sing at dawn — called the "dawn chorus" — has been that they can be heard farther and more clearly at that time. Sound travels faster in humid air and it's more humid early in the morning. It's less windy, too, which is thought to lessen any distortion of their vocalizations.
But scientists from the Cornell Lab of Ornithology's K. Lisa Yang Center for Conservation Bioacoustics and Project Dhvani in India combed through audio recordings of birds in the rainforest. They say they didn't find evidence to back up this "acoustic transmission hypothesis." That was one of several hypotheses involving environmental factors; another is that birds spend their time singing at dawn because there's low light and it's a bad time to look for food.
"We basically didn't find much support for some of these environmental cues which have been purported in literature as hypotheses" for why birds sing more at dawn, says Vijay Ramesh, a postdoctoral research associate at Cornell and the study's lead author. The study, called "Why is the early bird early? An evaluation of hypotheses for avian dawn-biased vocal activity," was published this month in the peer-reviewed journal Philosophical Transactions of the Royal Society B.
The researchers didn't definitively point to one reason why the dawn chorus happens, but they found support for ideas that the early morning racket relates to birds marking their territory after being inactive at night, and communicating about finding food. © 2025 npr

Keyword: Animal Communication; Evolution
Link ID: 29839 - Posted: 06.21.2025

Associated Press
Prairie dogs bark to alert each other to the presence of predators, with different cries depending on whether the threat is airborne or approaching by land. But their warnings also seem to help a vulnerable grassland bird. Curlews have figured out that if they eavesdrop on alarms from US prairie dog colonies they may get a jump on predators coming for them, too, according to research published on Thursday in the journal Animal Behaviour.
“Prairie dogs are on the menu for just about every predator you can think of – golden eagles, red-tailed hawks, foxes, badgers, even large snakes,” said Andy Boyce, a research ecologist in Montana at the Smithsonian’s National Zoo and Conservation Biology Institute. Such animals also gladly snack on grassland nesting birds such as the long-billed curlew, so the birds have adapted.
Previous research has shown birds frequently eavesdrop on other bird species to glean information about food sources or danger, said Georgetown University ornithologist Emily Williams, who was not involved in the study. But, so far, scientists have documented only a few instances of birds eavesdropping on mammals. “That doesn’t necessarily mean it’s rare in the wild,” she said, “it just means we haven’t studied it yet.”
Prairie dogs, a type of ground squirrel, live in large colonies with a series of burrows that may stretch for miles underground, especially on the vast US plains. When they hear each other’s barks, they either stand alert watching or dive into their burrows. “Those little barks are very loud; they can carry quite a long way,” said research co-author Andrew Dreelin, who also works for the Smithsonian. © 2025 Guardian News & Media Limited

Keyword: Animal Communication; Language
Link ID: 29832 - Posted: 06.18.2025