Chapter 19. Language and Lateralization


Nicholas Humphrey In his novel Penguin Island (1908), Anatole France spins a wonderful tale about a blind old monk who sets off from Brittany on a mission to the Hebrides and lands on an island inhabited only by penguins. Though the birds speak a strange language, he assumes they must be human beings. So he proceeds to baptise them. When the news of this reaches heaven, it causes a major stir. God himself is embarrassed. He gathers an assembly of clerics and doctors, and asks them for an opinion on the delicate question of whether the birds must now be given souls. It is a matter of more than theoretical importance. ‘The Christian state,’ St Cornelius points out, ‘is not without serious inconveniences for a penguin … The habits of birds are, in many points, contrary to the commandments of the Church …’ After lengthy discussion, they settle on a compromise. The baptised penguins are indeed to be granted souls – but, on St Catherine’s recommendation, their souls are to be of small size. For the penguins, souls were an unexpected bonus. As René Descartes, the philosopher-scientist of the 17th century, had explained, nonhuman animals in general, in a state of nature, are mere soulless machines. Here’s a sketch of a Cartesian penguin, without even a smidgen of a soul. Descartes believed that humans too are machines of a kind. But he held that, with humans, thankfully, God has arranged the addition of a soul as standard practice. Early in infancy, the material substance of the human brain is put into communication via the pineal gland with the separate substance of the mind: res extensa (extended stuff) is joined by res cogitans (thinking stuff). The consciousness that results lays the foundation for the soul. © Aeon Media Group Ltd. 2012-2026

Keyword: Consciousness; Language
Link ID: 30204 - Posted: 04.18.2026

Oliver Milman We may appear to have little in common with sperm whales – enormous, ocean-dwelling animals that last shared a common ancestor with humans more than 90 million years ago. But the whales’ vocalized communications are remarkably similar to our own, researchers have discovered. Not only do sperm whales have a form of “alphabet” and form vowels within their vocalizations, but the structure of these vowels behaves in the same way as human speech, the new study has found. Sperm whales communicate in a series of short clicks called codas. Analysis of these clicks shows that the whales can differentiate vowels through the short or elongated clicks or through rising or falling tones, using patterns similar to languages such as Mandarin, Latin and Slovenian. The structure of the whales’ communication has “close parallels in the phonetics and phonology of human languages, suggesting independent evolution”, the paper, published in the journal Proceedings of the Royal Society B, states. Sperm whale coda vocalizations are “highly complex and represent one of the closest parallels to human phonology of any analyzed animal communication system”, it added. The findings are the latest discovery about the lives of sperm whales by Project Ceti (standing for Cetacean Translation Initiative), an organization that has studied whales off the coast of Dominica in an attempt to find out what they are saying. Last month, the project released video of a sperm whale giving birth while other whales supported it. © 2026 Guardian News & Media Limited
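Because codas are defined by the number and relative timing of clicks, a coda can be represented compactly as its inter-click intervals. The sketch below matches a normalized click rhythm against a small template library; the template names, values and matching rule are illustrative assumptions, not Project CETI's method.

```python
# Minimal sketch: classifying sperm-whale codas by inter-click rhythm.
# Not Project CETI's pipeline; templates and matching rule are illustrative.
import numpy as np

# Rhythm templates as normalized inter-click intervals (summing to 1).
# "2+3" mimics the Caribbean "click...click...click-click-click" coda.
TEMPLATES = {
    "2+3": np.array([0.35, 0.35, 0.15, 0.15]),
    "regular-5": np.array([0.25, 0.25, 0.25, 0.25]),
}

def classify_coda(click_times):
    """Match a coda (click times in seconds) to the nearest rhythm template."""
    intervals = np.diff(np.asarray(click_times, dtype=float))
    if intervals.sum() <= 0:
        return None
    rhythm = intervals / intervals.sum()        # tempo-invariant rhythm
    best, best_dist = None, np.inf
    for name, template in TEMPLATES.items():
        if template.size != rhythm.size:        # template must match coda length
            continue
        dist = np.abs(rhythm - template).sum()  # L1 distance between rhythms
        if dist < best_dist:
            best, best_dist = name, dist
    return best

# A coda with two slow gaps followed by three quick clicks:
print(classify_coda([0.0, 0.7, 1.4, 1.7, 2.0]))  # -> "2+3"
```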

Keyword: Animal Communication; Language
Link ID: 30201 - Posted: 04.15.2026

Lynne Peeples In 2021, dermatologist David Ozog was on holiday with his family in the Bahamas when his 18-year-old son had a massive stroke. The teenager was airlifted to Florida, and then to Chicago for surgery. As his son was lying partially paralysed in a hospital bed, Ozog got a call from a colleague who had an unconventional suggestion. The colleague, a dermatologist at Harvard Medical School in Boston, Massachusetts, told Ozog about research he was conducting with the US Department of Defense. Early results hinted that red and near-infrared light applied to the head might protect neural tissue after brain injury. He urged Ozog to consider trying it on his son. Ozog stayed up until 4 a.m. that night reading scientific papers and, ultimately, ordering several panels made of red and near-infrared light-emitting diodes (LEDs). “I started sneaking them into the hospital,” says Ozog, who works at Henry Ford Health in Detroit, Michigan. Today, his son is walking and back in university. Ozog cannot prove that light therapy made a difference, but he thinks that it helped. He has since become a convert to an idea that, at the time, was considered fringe. “I thought the same thing,” he says, “How could shining this thing on you possibly have any biologic effect?” But what was at the margins of medicine just a few years ago is now edging towards the mainstream. Red-light devices are increasingly appearing in dermatology offices, wellness centres, locker rooms and homes. According to some projections, the global market will surpass US$1 billion by 2030, propelled by a surge of companies promising benefits for everything from ageing skin to attention deficit hyperactivity disorder (ADHD) — claims echoed widely across social media. Experts warn that there is considerable hype about red-light therapy. But a growing body of legitimate science has been exploring the benefits for several conditions. Clinical studies have reported improvements in peripheral neuropathy1, retinal degeneration2 and certain neurological disorders3. For some indications, expert groups now recommend red-light regimens1. Researchers are also uncovering how red and near-infrared light might exert these effects. Mitochondria — the power plants of the cell — are emerging as a central piece of the puzzle. © 2026 Springer Nature Limited

Keyword: Stroke; Parkinsons
Link ID: 30182 - Posted: 03.28.2026

Max Kozlov For decades, scientists have struggled to understand exactly how years of taking hits to the head while playing sports can translate into severe memory loss and dementia later in life. Now, a study1 published today in Science Translational Medicine reveals that the protective shield known as the blood–brain barrier can be damaged and leaky decades after an athlete retires from sport. This persistent leakiness seems to trigger a long-lasting immune response that is closely tied to cognitive decline, the study finds. The work is a “very important study that finds the disruption of the blood–brain barrier many years after head trauma”, says Katerina Akassoglou, a neuroimmunologist at the Gladstone Institutes in San Francisco, California, who was not involved in the research. Part of the difficulty in studying the long-term effects of head trauma is that some neurodegenerative conditions, such as chronic traumatic encephalopathy (CTE), can be diagnosed only by examining neuronal tissue after death, says Matthew Campbell, a specialist in neurovascular genetics at Trinity College Dublin, who co-authored the paper. Campbell and his colleagues wanted to see whether they could spot warning signs in living athletes by looking at the blood–brain barrier, a dense layer of cells lining the blood vessels that supply the brain. This layer usually keeps harmful substances from leaking out of the blood and into brain tissue. To investigate, the researchers scanned the brains of 47 athletes who had retired from playing contact sports with a high risk of concussion and repetitive head impact, such as rugby and boxing. They also examined a control group of non-athletes and athletes who had played non-contact sports. © 2026 Springer Nature Limited

Keyword: Brain Injury/Concussion
Link ID: 30170 - Posted: 03.21.2026

Carlo Iacono Everyone is panicking about the death of reading. The statistics look damning: the share of Americans who read for pleasure on an average day has fallen by more than 40 per cent over the past 20 years, according to research published in iScience this year. The OECD calls the 2022 decline in educational outcomes ‘unprecedented’ across developed nations. In the OECD’s latest adult-skills survey, Denmark and Finland were the only participating countries where average literacy proficiency improved over the past decade. Your nephew speaks in TikTok references. Democracy itself apparently hangs by the thread of our collective attention span. This narrative has a seductive simplicity. Screens are destroying civilisation. Children can no longer think. We are witnessing the twilight of the literate mind. A recent Substack essay by James Marriott proclaimed the arrival of a ‘post-literate society’ and invited us to accept this as a fait accompli. (Marriott does also write for The Times.) The diagnosis is familiar: technology has fundamentally degraded our capacity for sustained thought, and there’s nothing to be done except write elegiac essays from a comfortable distance. I spend my working life in a university library, watching how people actually engage with information. What I observe doesn’t match this narrative. Not because the problems aren’t real, but because the diagnosis is wrong. The declinist position rests on a category error: treating ‘screen culture’ as a unified phenomenon with inherent cognitive properties. As if the same device that delivers algorithmically curated rage-bait and also the complete works of Shakespeare is itself the problem rather than how we decide to use it. © Aeon Media Group Ltd. 2012-2026.

Keyword: Attention; Language
Link ID: 30132 - Posted: 02.21.2026

By Meghan Bartels On March 7, 1949, researchers at the Woods Hole Oceanographic Institution (WHOI) were stationed on a boat called the R/V Atlantis that was sailing off the coast of Bermuda. They lowered a primitive underwater recording setup into the ocean, and a boxy machine more regularly found in offices began etching the sounds of the sea—a chorus of eerie howls and rustling waves—into a thin plastic disk. That disk made its way to WHOI’s archives in Massachusetts, where it sat, an overlooked relic of the earliest days of underwater acoustic recording. Fast-forward nearly eight decades, and experts at WHOI have rediscovered the recording and determined it’s probably the oldest whale recording still in existence. The likely vocalist? A humpback whale (Megaptera novaeangliae). The scientists who stumbled on the rare recording are eager to use it for science. “Data from this time period simply don’t exist in most cases,” said Laela Sayigh, a marine bioacoustician at WHOI, in a statement. “This recording can provide insight into how humpback whale sounds have changed over time, as well as serving as a baseline for measuring how human activity shapes the ocean soundscape.” The recording dates to a time when the North Atlantic Ocean’s humpback whales were struggling because of decades of commercial whaling. By 1955, the population had likely fallen below 1,000 animals, experts have since estimated. And although humpback whales are due for a thorough census, even outdated estimates suggest there are at least 20 to 25 times the number of these animals in the region today. © 2025 SCIENTIFIC AMERICAN,

Keyword: Animal Communication; Language
Link ID: 30131 - Posted: 02.21.2026

By Natalia Mesa A region of the cerebellum shows language specificity akin to that of cortical language regions, indicating that it might be part of the broader language network, according to a new brain-imaging study. “This is the first time we see an area outside of the core left-hemisphere language areas that behaves so similarly to those core areas,” says study investigator Ev Fedorenko, associate professor of brain and cognitive sciences at the Massachusetts Institute of Technology. Initially thought to coordinate only movement, the cerebellum also contributes to cognitive processes, such as social reward, abstract reasoning and working memory, according to studies from the past decade. But despite the fact that people with cerebellar lesions have subtle language struggles, the region’s contributions to that skill have been ignored until recently, Fedorenko says. With this new work, “I think it becomes harder to dismiss language responses as somehow artifactual.” Fedorenko and her team analyzed nearly 1,700 whole-brain functional MRI experiments conducted over the course of 15 years. They originally collected and analyzed those scans to identify language-selective regions of the neocortex, but they reanalyzed many of them to determine the cerebellum’s role in linguistic processing. Four cerebellar regions activated robustly when participants performed language-related tasks, such as reading passages of text or listening to someone else reading the passages aloud, in line with previous work. But only one region responded exclusively to these language-related tasks; it did not activate during a variety of nonlinguistic tasks—including movement, arithmetic tasks and a spatial working memory task—or when participants listened to music or watched videos of faces and bodies. The findings were published last month in Neuron. © 2026 Simons Foundation
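The selection logic in this study is easy to state: a region qualifies as language-selective only if it responds robustly to language tasks and stays near baseline for every nonlinguistic one. The toy sketch below illustrates that contrast; the condition names, thresholds and response values are illustrative assumptions, not numbers from the Neuron paper.

```python
# Toy illustration of a language-selectivity test: strong responses to all
# language conditions AND near-baseline responses to all other conditions.
# Conditions, thresholds and values are assumptions for demonstration only.
CONDITIONS = ["reading", "listening", "math", "music", "faces"]
LANGUAGE_TASKS = {"reading", "listening"}

def is_language_selective(mean_responses, min_lang=0.5, max_other=0.2):
    """mean_responses: per-condition response (e.g., % BOLD signal change),
    ordered as in CONDITIONS."""
    lang = [r for c, r in zip(CONDITIONS, mean_responses) if c in LANGUAGE_TASKS]
    other = [r for c, r in zip(CONDITIONS, mean_responses) if c not in LANGUAGE_TASKS]
    return min(lang) >= min_lang and max(other) <= max_other

# One region that responds to everything, one that responds only to language:
print(is_language_selective([1.2, 1.1, 0.9, 0.8, 0.7]))  # False
print(is_language_selective([1.0, 1.1, 0.1, 0.0, 0.1]))  # True
```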

Keyword: Language
Link ID: 30110 - Posted: 02.07.2026

By Laura Sanders The brain’s “little brain” may hold big promise for people with language trouble. Tucked into the base of the brain, the fist-sized cerebellum is most known for its role in movement, posture and coordination. A new study maps the language system in this out-of-the-way place. These results, published January 22 in Neuron, uncover a spot in the cerebellum that shows strong and selective activity for language. The new study is “excellent,” says neurologist and cerebellum researcher Jeremy Schmahmann of Massachusetts General Hospital and Harvard Medical School in Boston. His work and that of others have shown that the cerebellum contributes to language and thinking more generally. The new research scrutinized the cerebellum in detail, “confirming and extending previous observations and contributing to our understanding” of the cerebellum’s activity, he says. Neuroscientist Colton Casto combed through about 15 years of brain scanning data collected by study coauthor Evelina Fedorenko, a cognitive neuroscientist at MIT, and her colleagues. Putting the data all together, the scans of 846 people showed brain activity in four spots in the right side of the cerebellum as people read or listened to a story. Three of these spots were also active when people did other things, such as working out a math problem, or listening to music or watching a movie without words. But one spot was more discerning, says Casto, of MIT and Harvard University. This region didn’t respond to nonverbal movies or math. It also ignored orchestral or jazz music, which, like language, relies on syntax and patterns in sound. Instead, this spot is attuned specifically to words. “You have to be reading or listening to language to fully recruit this region,” Casto says. © Society for Science & the Public 2000–2026.

Keyword: Language
Link ID: 30091 - Posted: 01.24.2026

Nell Greenfieldboyce If you've ever had to spell out words like W-A-L-K or T-R-E-A-T around a dog, you know that some dogs listen in to humans' chitchat and can pick out certain key words. Well, it turns out that some genius dogs can learn a brand new word, like the name of an unfamiliar toy, by just overhearing brief interactions between two people. What's more, these "gifted" dogs can learn the name of a new toy even if they first hear this word when the toy is out of sight — as long as their favorite human is looking at the spot where the toy is hidden. That's according to a new study in the journal Science. "What we found in this study is that the dogs are using social communication. They're using these social cues to understand what the owners are talking about," says cognitive scientist Shany Dror of Eötvös Loránd University and the University of Veterinary Medicine, Vienna. "This tells us that the ability to use social information is actually something that humans probably had before they had language," she says, "and language was kind of hitchhiking on these social abilities." © 2026 npr

Keyword: Language; Evolution
Link ID: 30075 - Posted: 01.10.2026

Luiz Pessoa When thousands of starlings swoop and swirl in the evening sky, creating patterns called murmurations, no single bird is choreographing this aerial ballet. Each bird follows simple rules of interaction with its closest neighbours, yet out of these local interactions emerges a complex, coordinated dance that can respond swiftly to predators and environmental changes. This same principle of emergence – where sophisticated behaviours arise not from central control but from the interactions themselves – appears across nature and human society. Consider how market prices emerge from countless individual trading decisions, none of which alone contains the ‘right’ price. Each trader acts on partial information and personal strategies, yet their collective interaction produces a dynamic system that integrates information from across the globe. Human language evolves through a similar process of emergence. No individual or committee decides that ‘LOL’ should enter common usage or that the meaning of ‘cool’ should expand beyond temperature (even in French-speaking countries). Instead, these changes result from millions of daily linguistic interactions, with new patterns of speech bubbling up from the collective behaviour of speakers. These examples highlight a key characteristic of highly interconnected systems: the rich interplay of constituent parts generates properties that defy reductive analysis. This principle of emergence, evident across seemingly unrelated fields, provides a powerful lens for examining one of our era’s most elusive mysteries: how the brain works. The core idea of emergence inspired me to develop the concept I call the entangled brain: the need to understand the brain as an interactionally complex system where functions emerge from distributed, overlapping networks of regions rather than being localised to specific areas. Though the framework described here is still a minority view in neuroscience, we’re witnessing a gradual paradigm transition (rather than a revolution), with increasing numbers of researchers acknowledging the limitations of more traditional ways of thinking. © Aeon Media Group Ltd. 2012-2026.
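The murmuration example can be made concrete with the classic "boids" rules: each simulated bird adjusts its velocity using only its nearest neighbours, yet the flock's headings become coordinated. The sketch below is a minimal illustration under assumed parameters, not a model from any starling study.

```python
# Minimal "boids"-style sketch of emergence: each bird updates its velocity
# from nearby birds only (alignment, cohesion, separation). No bird sees the
# whole flock, yet coordinated motion emerges. Weights/radii are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
N = 100
pos = rng.uniform(0, 50, (N, 2))   # positions
vel = rng.normal(0, 1, (N, 2))     # velocities

def step(pos, vel, radius=5.0, w_align=0.05, w_cohere=0.01, w_sep=0.05):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d > 0) & (d < radius)
        if not nbrs.any():
            continue
        new_vel[i] += w_align * (vel[nbrs].mean(axis=0) - vel[i])     # match heading
        new_vel[i] += w_cohere * (pos[nbrs].mean(axis=0) - pos[i])    # drift toward group
        close = (d > 0) & (d < 1.0)
        if close.any():
            new_vel[i] += w_sep * (pos[i] - pos[close].mean(axis=0))  # avoid crowding
    return pos + new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)

# Polarization near 1 means aligned headings; near 0 means disorder. With
# these toy settings it typically ends well above the random starting value.
headings = vel / np.linalg.norm(vel, axis=1, keepdims=True)
print("polarization:", round(float(np.linalg.norm(headings.mean(axis=0))), 2))
```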

Keyword: Consciousness; Learning & Memory
Link ID: 30066 - Posted: 01.03.2026

By John Pavlus Even in a world where large language models (LLMs) and AI chatbots are commonplace, it can be hard to fully accept that fluent writing can come from an unthinking machine. That’s because, to many of us, finding the right words is a crucial part of thought — not the outcome of some separate process. But what if our neurobiological reality includes a system that behaves something like an LLM? Long before the rise of ChatGPT, the cognitive neuroscientist Ev Fedorenko began studying how language works in the adult human brain. The specialized system she has described, which she calls “the language network,” maps the correspondences between words and their meanings. Her research suggests that, in some ways, we do carry around a biological version of an LLM — that is, a mindless language processor — inside our own brains. “You can think of the language network as a set of pointers,” Fedorenko said. “It’s like a map, and it tells you where in the brain you can find different kinds of meaning. It’s basically a glorified parser that helps us put the pieces together — and then all the thinking and interesting stuff happens outside of [its] boundaries.” Fedorenko has been gathering biological evidence of this language network for the past 15 years in her lab at the Massachusetts Institute of Technology. Unlike a large language model, the human language network doesn’t string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (such as speech, writing and sign language) and representations of meaning encoded in other parts of the brain (including episodic memory and social cognition, which LLMs don’t possess). Nor is the human language network particularly large: If all of its tissue were clumped together, it would be about the size of a strawberry. But when it is damaged, the effect is profound. An injured language network can result in forms of aphasia in which sophisticated cognition remains intact but trapped within a brain unable to express it or to make sense of incoming words from others. © 2025 Simons Foundation

Keyword: Language
Link ID: 30043 - Posted: 12.06.2025

Liam Drew Paradromics, a neurotechnology developer, announced today that the US Food and Drug Administration (FDA) has approved a first long-term clinical trial of its brain–computer interface (BCI). Early next year, the company — one of the closest rivals to Elon Musk’s neurotechnology firm Neuralink — will implant its device in two volunteers who were left unable to speak owing to neurological diseases and injuries. It has two goals: to ensure the device is safe, and to restore a person’s ability to communicate with real-time speech. “We’re very excited about bringing this new hardware into a trial,” says Matt Angle, chief executive of Paradromics, which is based in Austin, Texas. Paradromics’ BCI has an active area of roughly 7.5 millimetres in diameter of thin, stiff, platinum-iridium electrodes that penetrate the surface of the cerebral cortex to record from individual neurons around 1.5 mm deep. This is then connected by wire to a power source and wireless transceiver implanted in an individual’s chest. Initially, the two volunteers will each have one electrode array implanted in the area of the motor cortex that controls the lips, tongue and larynx, Angle says. Neural activity will then be recorded from this region as the study participants imagine speaking sentences that are presented to them. Following previous work by researchers who are now collaborating with Paradromics1, the system learns what patterns of neural activity correspond to each intended speech sound. When participants imagine speaking, these neural patterns will be converted into text on a screen for participants to approve, or into a real-time voice output based on old recordings of participants’ own voices. This is the first BCI clinical trial to formally target synthetic-voice generation. “Arguably, the greatest quality of life change you can deliver right now with BCI is communication,” Angle says. © 2025 Springer Nature Limited
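The decoding step described here — learning which neural activity patterns correspond to which intended speech sounds — can be sketched with a toy classifier. The code below uses synthetic firing-rate vectors and a nearest-centroid rule; it is an illustrative stand-in, not Paradromics' actual decoder.

```python
# Toy sketch of speech-sound decoding: learn the average neural pattern per
# intended sound, then map new patterns to the nearest learned pattern.
# Synthetic data throughout; not Paradromics' algorithm.
import numpy as np

rng = np.random.default_rng(0)
PHONEMES = ["ah", "ee", "oo"]
N_CHANNELS = 64  # one firing-rate feature per electrode channel (assumed)

# Each phoneme gets a characteristic (synthetic) firing pattern.
prototypes = {p: rng.normal(0, 1, N_CHANNELS) for p in PHONEMES}

def record_trial(p):
    """Simulate one noisy recording of the pattern for phoneme p."""
    return prototypes[p] + rng.normal(0, 0.5, N_CHANNELS)

# "Training": average 50 trials per intended sound.
train = [(p, record_trial(p)) for p in PHONEMES for _ in range(50)]
centroids = {
    p: np.mean([x for q, x in train if q == p], axis=0) for p in PHONEMES
}

def decode(x):
    """Map a new neural pattern to the nearest learned speech sound."""
    return min(centroids, key=lambda p: np.linalg.norm(x - centroids[p]))

print(decode(record_trial("ee")))  # -> "ee" (with high probability)
```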

Keyword: Robotics
Link ID: 30019 - Posted: 11.22.2025

By Kate Graham-Shaw A long time ago in a galaxy far, far away, R2-D2 beeped and booped—and now birds that copy the Star Wars character are giving scientists fresh insight into how different species imitate complex sounds. A study, published recently in Scientific Reports, analyzed the sounds of nine species of parrots, including budgies, as well as European starlings, to see how accurately each bird mimicked R2-D2’s robotic whirring. Researchers ran acoustic analyses on recordings of birds imitating the plucky droid that were already available online, comparing how statistically similar each bird’s noises were to a model of R2-D2’s sounds. The starlings, a type of songbird, emerged as star vocalists: their ability to produce “multiphonic” noises—in their case, two different notes or tones expressed simultaneously—allowed them to replicate R2-D2’s complex chirps more accurately. Parrots and budgies, which only produce “monophonic” (or single-tone) noises, imitated the droid’s sounds with less accuracy and musicality. The differing abilities stem from physical variations in the birds’ “syrinx”—a unique vocal organ that sits at the base of the avian windpipe. “Starlings can produce two sounds at once because they control both sides of the syrinx independently,” says study co-author Nick Dam, an evolutionary biologist at Leiden University in the Netherlands. “Parrots are physically incapable of producing two tones simultaneously.” It isn’t exactly known why different species developed differing control over their syrinx. “Likely, some ancestor of songbirds happened to evolve the ability to control the muscles on both sides of the syrinx, and this helped them in some way,” says University of Northern Colorado biologist Lauryn Benedict, who wasn’t involved in the study but sometimes works with its authors. One of the leading explanations involves mating; the better at singing a male songbird is, the more females he attracts. © 2025 SCIENTIFIC AMERICAN,
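The monophonic/multiphonic contrast at the heart of the study can be illustrated with a simple spectral test: count how many strong frequency bands are present at the same time. The sketch below uses synthetic tones and an assumed threshold; it is not the authors' acoustic-analysis pipeline.

```python
# Sketch of the mono/multiphonic distinction: count simultaneous strong
# frequency bands. A two-tone (starling-like) signal yields two bands; a
# single whistle yields one. Signals and threshold are illustrative.
import numpy as np

SR = 22050                                  # assumed sample rate (Hz)
t = np.arange(0, 0.5, 1 / SR)

monophonic = np.sin(2 * np.pi * 1500 * t)   # one tone
multiphonic = np.sin(2 * np.pi * 1500 * t) + np.sin(2 * np.pi * 2300 * t)

def count_tones(signal, rel_threshold=0.5):
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    strong = spectrum > rel_threshold * spectrum.max()
    # Count contiguous runs of strong bins as separate tones.
    rising_edges = np.diff(strong.astype(int)) == 1
    return int(rising_edges.sum() + strong[0])

print(count_tones(monophonic))   # -> 1
print(count_tones(multiphonic))  # -> 2
```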

Keyword: Animal Communication; Language
Link ID: 30017 - Posted: 11.19.2025

By Kathryn Hulick Dolphins whistle, humpback whales sing and sperm whales click. Now, a new analysis of sperm whale codas — a unique series of clicks — suggests a previously unrecognized acoustic pattern. The finding, reported November 12 in Open Mind, implies that the whales’ clicking communications might be more complex — and meaningful — than previously realized. But the study faces sharp criticism from marine biologists who argue that these patterns are more likely to be recording artifacts or by-products of alertness rather than language-like signals. For decades, biologists have known that both the number and timing of clicks in a coda matter and can even identify the clan of a sperm whale (Physeter macrocephalus). Sperm whales in the eastern Caribbean Sea off the coast of Dominica, for example, often use a series of two slow and three quick sounds: “click…click… click-click-click.” Relying on artificial intelligence and linguistic analysis, the new study finds that sometimes this series sounds more like “clack…clack… clack-clack-clack,” says Shane Gero, a marine biologist at Project CETI, a Dominica-based nonprofit studying sperm whale communication. Project CETI linguist Gašper Beguš wonders about the meanings a coda might convey. “It sounds really alien,” almost like Morse code, says Beguš, of the University of California, Berkeley. Based on his team’s result, he now speculates that sperm whales might use clicks or clacks “in a similar way as we use our vowels to transmit meaning.” Not everyone agrees with that assessment. The comparison to vowels is “complete nonsense,” says Luke Rendell, a marine biologist at the University of St. Andrews in Scotland who has studied sperm whales for more than 30 years. “There’s no evidence that the animals are responding in any way to this [new pattern].” © Society for Science & the Public 2000–2025
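One way to make the click/clack idea concrete is to classify each pulse by a single acoustic feature. The sketch below uses spectral centroid on synthetic pulses; the feature choice and cutoff are assumptions for illustration, not how the Open Mind study defines the contrast.

```python
# Illustrative only: separating two pulse types by spectral centroid.
# The study's actual "click"/"clack" distinction is not defined by this
# feature or threshold; both are demonstration assumptions.
import numpy as np

SR = 48000  # assumed sample rate (Hz)

def spectral_centroid(pulse):
    """Amplitude-weighted mean frequency of a short pulse."""
    spec = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), 1 / SR)
    return (freqs * spec).sum() / spec.sum()

def label_pulse(pulse, cutoff_hz=8000.0):
    return "click" if spectral_centroid(pulse) >= cutoff_hz else "clack"

# Synthetic pulses: a bright (high-centroid) and a dull (low-centroid) burst.
t = np.arange(0, 0.005, 1 / SR)
bright = np.sin(2 * np.pi * 12000 * t) * np.exp(-t * 2000)
dull = np.sin(2 * np.pi * 3000 * t) * np.exp(-t * 2000)

print(label_pulse(bright))  # -> "click"
print(label_pulse(dull))    # -> "clack"
```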

Keyword: Language; Animal Communication
Link ID: 30013 - Posted: 11.15.2025

Katie Kavanagh Speaking multiple languages could slow down brain ageing and help to prevent cognitive decline, a study of more than 80,000 people has found. The work, published in Nature Aging on 10 November1, suggests that people who are multilingual are half as likely to show signs of accelerated biological ageing as are those who speak just one language. “We wanted to address one of the most persistent gaps in ageing research, which is if multilingualism can actually delay ageing,” says study co-author Agustín Ibáñez, a neuroscientist at the Adolfo Ibáñez University in Santiago, Chile. Previous research in this area has suggested that speaking multiple languages can improve cognitive functions such as memory and attention2, which boosts brain health as we get older. But many of these studies rely on small sample sizes and use unreliable methods of measuring ageing, which leads to results that are inconsistent and not generalizable. “The effects of multilingualism on ageing have always been controversial, but I don’t think there has been a study of this scale before, which seems to demonstrate them quite decisively,” says Christos Pliatsikas, a cognitive neuroscientist at the University of Reading, UK. The paper’s results could “bring a step change to the field”, he adds. They might also “encourage people to go out and try to learn a second language, or keep that second language active”, says Susan Teubner-Rhodes, a cognitive psychologist at Auburn University in Alabama. © 2025 Springer Nature Limited

Keyword: Language; Alzheimers
Link ID: 30005 - Posted: 11.12.2025

By Meghie Rodrigues Babies start processing language before they are born, a new study suggests. A research team in Montreal has found that newborns who had heard short stories in foreign languages while in the womb process those languages similarly to their native tongue. The study, published in August in Nature Communications Biology, is the first to use brain imaging to show what neuroscientists and psychologists had long suspected. Previous research had shown that fetuses and newborns can recognize familiar voices and rhythms and even that they prefer their native language soon after birth. But these findings come mostly from behavioral cues—sucking patterns, head turns or heart rate changes—rather than direct evidence from the brain. “We cannot say babies ‘learn’ a language prenatally,” says Anne Gallagher, a neuropsychologist at the University of Montreal and senior author of the study. What we can say, she adds, is that neonates develop familiarity with one or more languages during gestation, which shapes their brain networks at birth. The research team recruited 60 people for the experiment, all of them about 35 weeks into their pregnancy. Of those, 39 exposed their fetuses to 10 minutes of prerecorded stories in French (their native language) and another 10 minutes of the same stories in either Hebrew or German at least once every other day until birth. These languages were chosen because their acoustic and phonological properties are very distinct from French and from each other, explains co-lead author Andréanne René, a Ph.D. candidate in clinical neuropsychology at the University of Montreal. The other 21 participants were part of the control group; their fetuses were exposed to French in their natural environments, with no special input. © 2025 SCIENTIFIC AMERICAN

Keyword: Language; Development of the Brain
Link ID: 29959 - Posted: 10.08.2025

By Keith Schneider Jane Goodall, one of the world’s most revered conservationists, who earned scientific stature and global celebrity by chronicling the distinctive behavior of wild chimpanzees in East Africa — primates that made and used tools, ate meat, held rain dances and engaged in organized warfare — died on Wednesday in Los Angeles. She was 91. Her death, while on a speaking tour, was confirmed by the Jane Goodall Institute, whose U.S. headquarters are in Washington, D.C. When not traveling widely, she lived in Bournemouth, on the south coast of England, in her childhood home. Dr. Goodall was 29 in the summer of 1963 when National Geographic magazine published her 7,500-word, 37-page account of the lives of primates she had observed in the Gombe Stream Chimpanzee Reserve in what is now Tanzania. The National Geographic Society had been financially supporting her field studies there. The article, with photographs by Hugo van Lawick, a Dutch wildlife photographer whom she later married, also described Dr. Goodall’s struggles to overcome disease, predators and frustration as she tried to get close to the chimps, working from a primitive research station along the eastern shore of Lake Tanganyika. On the scientific merits alone, her discoveries about how wild chimpanzees raised their young, established leadership, socialized and communicated broke new ground and attracted immense attention and respect among researchers. Stephen Jay Gould, the evolutionary biologist and science historian, said her work with chimpanzees “represents one of the Western world’s great scientific achievements.” On learning of Dr. Goodall’s documented evidence that humans were not the only creatures capable of making and using tools, Louis Leakey, the paleoanthropologist and Dr. Goodall’s mentor, famously remarked, “Now we must redefine ‘tool,’ redefine ‘man,’ or accept chimpanzees as humans.” © 2025 The New York Times Company

Keyword: Evolution; Animal Communication
Link ID: 29953 - Posted: 10.04.2025

By Catherine Offord As the National Football League’s (NFL’s) latest season gets underway, so, too, does the conversation about the risk of serious brain damage to its athletes. Multiple well-publicized studies in recent years have linked repetitive head impacts typical in football and other contact sports to an increased likelihood of chronic traumatic encephalopathy (CTE), a neurodegenerative condition characterized by a buildup of misfolded proteins in the brain. Now, a leading CTE research group reports evidence that regular sports-related impacts could cause brain damage before the condition’s hallmark features appear. An analysis of postmortem brain tissue from athletes and nonathletes who died before their early 50s, published today in Nature, identifies multiple cellular differences between the groups, regardless of whether CTE was present. The findings support the idea that contact sports are associated with specific cellular changes in the brain. The study also “helps us understand, or at least ask new questions about, the mechanisms that bridge that acute exposure to later neurodegeneration,” says Gil Rabinovici, a neurologist and researcher at the University of California San Francisco who was not involved in the work. But not many brains were examined—fewer than 30 for most analyses. And the study doesn’t show that the neuron loss and other brain changes affect a person’s cognitive or mental health, cautions Colin Smith, a neuropathologist at the University of Edinburgh. “What does this mean clinically? … That is still the big question hanging here.” CTE recently hit the headlines again after a shooter killed four people and himself in the New York City building housing NFL’s headquarters this summer. In a note found by police, the former high school football player reportedly said he thought he had CTE, and asked that his brain be studied.

Keyword: Brain Injury/Concussion
Link ID: 29935 - Posted: 09.20.2025

Chris Simms A wearable device could make saying ‘Alexa, what time is it?’ aloud a thing of the past. An artificial intelligence (AI) neural interface called AlterEgo promises to allow users to silently communicate just by internally articulating words. Sitting over the ear, the device facilitates daily life through live communication with the Internet. “It gives you the power of telepathy but only for the thoughts you want to share,” says AlterEgo’s chief executive Arnav Kapur, based in Cambridge, Massachusetts. Kapur unveiled the device on 8 September. The device does not read brain activity, but predicts what a wearer wants to say from signals in muscles used to speak, then sends audio information back into their ear. The researchers say that their non-invasive technology could help people with motor neuron disease (amyotrophic lateral sclerosis; ALS) and multiple sclerosis (MS) who have trouble speaking, but also want to make the devices commercially available for general use. In a promotional video on the AlterEgo website, Kapur says that “it’s a revolutionary breakthrough with the potential to change the way we interact with our technology, with one another and with the world around us”. “The big question about this is ‘how likely is that potential to be realized?’,” says Howard Chizeck, an electrical and computer engineer at the University of Washington in Seattle. Chizeck says that the technology seems workable and is less of a privacy risk than listening devices such as Amazon’s Alexa are, but isn’t convinced that the device will catch on for commercial use. © 2025 Springer Nature Limited

Keyword: Robotics; Language
Link ID: 29934 - Posted: 09.20.2025

Rachel Fieldhouse Deep in the rainforests of the Democratic Republic of the Congo, Mélissa Berthet found bonobos doing something thought to be uniquely human. During the six months that Berthet observed the primates, they combined calls in several ways to make complex phrases1. In one example, bonobos (Pan paniscus) that were building nests together added a yelp, meaning ‘let’s do this’, to a grunt that says ‘look at me’. “It’s really a way to say: ‘Look at what I’m doing, and let’s do this all together’,” says Berthet, who studies primates and linguistics at the University of Rennes, France. In another case, a peep that means ‘I would like to do this’ was followed by a whistle signalling ‘let’s stay together’. The bonobos combine the two calls in sensitive social contexts, says Berthet. “I think it’s to bring peace.” The study, reported in April, is one of several examples from the past few years that highlight just how sophisticated vocal communication in non-human animals can be. In some species of primate, whale2 and bird, researchers have identified features and patterns of vocalization that have long been considered defining characteristics of human language. These results challenge ideas about what makes human language special — and even how ‘language’ should be defined. Perhaps unsurprisingly, many scientists turn to artificial intelligence (AI) tools to speed up the detection and interpretation of animal sounds, and to probe aspects of communication that human listeners might miss. “It’s doing something that just wasn’t possible through traditional means,” says David Robinson, an AI researcher at the Earth Species Project, a non-profit organization based in Berkeley, California, that is developing AI systems to decode communication across the animal kingdom. As the research advances, there is increasing interest in using AI tools not only to listen in on animal speech, but also to potentially talk back. © 2025 Springer Nature Limited
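A standard first pass at call combinations like these is to ask whether two calls co-occur more often than their individual frequencies predict. The sketch below computes pointwise mutual information over a toy corpus; the counts and the method are illustrative assumptions, not Berthet's actual analysis.

```python
# Hedged sketch: flag call pairs that co-occur above chance using pointwise
# mutual information (PMI). Toy counts only, not the study's data or stats.
import math
from collections import Counter

# Toy corpus: each utterance is a tuple of calls in order of production.
utterances = [
    ("grunt", "yelp"), ("grunt", "yelp"), ("grunt",), ("yelp",),
    ("peep", "whistle"), ("peep", "whistle"), ("whistle",), ("grunt", "yelp"),
]

unigrams, bigrams = Counter(), Counter()
for u in utterances:
    unigrams.update(u)
    bigrams.update(zip(u, u[1:]))  # adjacent call pairs

n_uni = sum(unigrams.values())
n_bi = sum(bigrams.values())

def pmi(a, b):
    """log p(a,b) / (p(a) * p(b)); > 0 means the pair co-occurs above chance."""
    p_ab = bigrams[(a, b)] / n_bi
    p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
    return math.log(p_ab / (p_a * p_b))

print(round(pmi("grunt", "yelp"), 2))    # positive: combined above chance
print(round(pmi("peep", "whistle"), 2))  # positive: combined above chance
```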

Keyword: Animal Communication; Language
Link ID: 29931 - Posted: 09.17.2025