Chapter 15. Language and Lateralization



By Cathleen O’Grady Human conversations are rapid-fire affairs, with mere milliseconds passing between one person’s utterance and their partner’s response. This speedy turn taking is universal across cultures—but now it turns out that chimpanzees do it, too. By analyzing thousands of gestures from chimpanzees in five different communities in East Africa, researchers found that the animals take turns while communicating, and do so as quickly as we do. The speedy gestural conversations are also seen across chimp communities, just like in humans, the authors report today in Current Biology. The finding is “very exciting,” says Maël Leroux, an evolutionary biologist at the University of Rennes who was not involved with the work. “Language is the hallmark of our species … and a central feature of language is our ability to take turns.” Finding a similar behavior in our closest living relative, he says, suggests we may have inherited this ability from our shared common ancestor. When chimps gesture—such as reaching out an arm in a begging gesture—they are most often making a request, says Gal Badihi, an animal communication researcher at the University of St Andrews. This can include things such as “groom me,” “give me,” or “travel with me.” Most of the time, the chimp’s partner does the requested behavior. But sometimes, the second chimp will respond with its own gestures instead—for instance, one chimp requesting grooming, and the other indicating where they would like to be groomed, essentially saying “groom me first.” To figure out whether these interactions resemble human turn taking, Badihi and colleagues combed through hundreds of hours of footage from a massive database of chimpanzee gestural interactions recorded by multiple researchers across decades of fieldwork in East Africa. The scientists studied the footage, describing the precise movements each chimp made when gesturing, the response of other chimps, the duration of the gestures, and other details.
© 2024 American Association for the Advancement of Science.

Keyword: Language; Evolution
Link ID: 29403 - Posted: 07.23.2024

By Sara Reardon By eavesdropping on the brains of living people, scientists have created the highest-resolution map yet of the neurons that encode the meanings of various words1. The results hint that, across individuals, the brain uses the same standard categories to classify words — helping us to turn sound into sense. The study is based on words only in English. But it’s a step along the way to working out how the brain stores words in its language library, says neurosurgeon Ziv Williams at the Massachusetts Institute of Technology in Cambridge. By mapping the overlapping sets of brain cells that respond to various words, he says, “we can try to start building a thesaurus of meaning”. The brain area called the auditory cortex processes the sound of a word as it enters the ear. But it is the brain’s prefrontal cortex, a region where higher-order brain activity takes place, that works out a word’s ‘semantic meaning’ — its essence or gist. Previous research2 has studied this process by analysing images of blood flow in the brain, which is a proxy for brain activity. This method allowed researchers to map word meaning to small regions of the brain. But Williams and his colleagues found a unique opportunity to look at how individual neurons encode language in real time. His group recruited ten people about to undergo surgery for epilepsy, each of whom had had electrodes implanted in their brains to determine the source of their seizures. The electrodes allowed the researchers to record activity from around 300 neurons in each person’s prefrontal cortex. © 2024 Springer Nature Limited

Keyword: Language; Brain imaging
Link ID: 29383 - Posted: 07.06.2024

By Dave Philipps David Metcalf’s last act in life was an attempt to send a message — that years as a Navy SEAL had left his brain so damaged that he could barely recognize himself. He died by suicide in his garage in North Carolina in 2019, after nearly 20 years in the Navy. But just before he died, he arranged a stack of books about brain injury by his side, and taped a note to the door that read, in part, “Gaps in memory, failing recognition, mood swings, headaches, impulsiveness, fatigue, anxiety, and paranoia were not who I was, but have become who I am. Each is worsening.” Then he shot himself in the heart, preserving his brain to be analyzed by a state-of-the-art Defense Department laboratory in Maryland. The lab found an unusual pattern of damage seen only in people exposed repeatedly to blast waves. The vast majority of blast exposure for Navy SEALs comes from firing their own weapons, not from enemy action. The damage pattern suggested that years of training intended to make SEALs exceptional was leaving some barely able to function. But the message Lieutenant Metcalf sent never got through to the Navy. No one at the lab told the SEAL leadership what the analysis had found, and the leadership never asked. It was not the first time, or the last. At least a dozen Navy SEALs have died by suicide in the last 10 years, either while in the military or shortly after leaving. A grass-roots effort by grieving families delivered eight of their brains to the lab, an investigation by The New York Times has found. And after careful analysis, researchers discovered blast damage in every single one. It is a stunning pattern with important implications for how SEALs train and fight. But privacy guidelines at the lab and poor communication in the military bureaucracy kept the test results hidden. Five years after Lieutenant Metcalf’s death, Navy leaders still did not know. 
Until The Times told the Navy of the lab’s findings about the SEALs who died by suicide, the Navy had not been informed, the service confirmed in a statement. © 2024 The New York Times Company

Keyword: Brain Injury/Concussion; Depression
Link ID: 29378 - Posted: 07.03.2024

By Carl Zimmer For thousands of years, philosophers have argued about the purpose of language. Plato believed it was essential for thinking. Thought “is a silent inner conversation of the soul with itself,” he wrote. Many modern scholars have advanced similar views. Starting in the 1960s, Noam Chomsky, a linguist at M.I.T., argued that we use language for reasoning and other forms of thought. “If there is a severe deficit of language, there will be severe deficit of thought,” he wrote. As an undergraduate, Evelina Fedorenko took Dr. Chomsky’s class and heard him describe his theory. “I really liked the idea,” she recalled. But she was puzzled by the lack of evidence. “A lot of things he was saying were just stated as if they were facts — the truth,” she said. Dr. Fedorenko went on to become a cognitive neuroscientist at M.I.T., using brain scanning to investigate how the brain produces language. And after 15 years, her research has led her to a startling conclusion: We don’t need language to think. “When you start evaluating it, you just don’t find support for this role of language in thinking,” she said. When Dr. Fedorenko began this work in 2009, studies had found that the same brain regions required for language were also active when people reasoned or carried out arithmetic. But Dr. Fedorenko and other researchers discovered that this overlap was a mirage. Part of the trouble with the early results was that the scanners were relatively crude. Scientists made the most of their fuzzy scans by combining the results from all their volunteers, creating an overall average of brain activity. © 2024 The New York Times Company

Keyword: Language; Consciousness
Link ID: 29376 - Posted: 07.03.2024

Elephants call out to each other using individual names that they invent for their fellow pachyderms, according to a new study. While dolphins and parrots have been observed addressing each other by mimicking the sound of others from their species, elephants are the first non-human animals known to use names that do not involve imitation, the researchers suggested. For the new study published on Monday, a team of international researchers used an artificial intelligence algorithm to analyse the calls of two wild herds of African savanna elephants in Kenya. The research “not only shows that elephants use specific vocalisations for each individual, but that they recognise and react to a call addressed to them while ignoring those addressed to others”, the lead study author, Michael Pardo, said. “This indicates that elephants can determine whether a call was intended for them just by hearing the call, even when out of its original context,” the behavioural ecologist at Colorado State University said in a statement. The researchers sifted through elephant “rumbles” recorded at Kenya’s Samburu national reserve and Amboseli national park between 1986 and 2022. Using a machine-learning algorithm, they identified 469 distinct calls, which included 101 elephants issuing a call and 117 receiving one. Elephants make a wide range of sounds, from loud trumpeting to rumbles so low they cannot be heard by the human ear. Names were not always used in the elephant calls. But when names were called out, it was often over a long distance, and when adults were addressing young elephants. Adults were also more likely to use names than calves, suggesting it could take years to learn this particular talent. The most common call was “a harmonically rich, low-frequency sound”, according to the study in the journal Nature Ecology & Evolution. © 2024 Guardian News & Media Limited
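The test described above is statistical: a model is trained on acoustic features of recorded rumbles and asked whether it predicts the intended receiver better than chance. The study's actual pipeline is not reproduced here; the following is a hypothetical nearest-centroid sketch with invented feature vectors (labels "A"/"B" and the pitch/duration numbers are made up), just to illustrate the shape of such an analysis:

```python
from collections import defaultdict

# Hypothetical acoustic features (pitch in Hz, duration in s) for calls
# addressed to two receivers, "A" and "B". All values are invented.
calls = [
    ("A", (220.0, 1.9)), ("A", (210.0, 2.1)), ("A", (215.0, 2.0)),
    ("B", (180.0, 1.2)), ("B", (175.0, 1.3)), ("B", (185.0, 1.1)),
]

def centroids(samples):
    """Mean feature vector per receiver label."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for label, (f1, f2) in samples:
        s = sums[label]
        s[0] += f1; s[1] += f2; s[2] += 1
    return {lab: (s[0] / s[2], s[1] / s[2]) for lab, s in sums.items()}

def predict(call, cents):
    """Label of the nearest centroid (squared Euclidean distance)."""
    f1, f2 = call
    return min(cents, key=lambda lab: (f1 - cents[lab][0]) ** 2 + (f2 - cents[lab][1]) ** 2)

cents = centroids(calls)
print(predict((212.0, 2.0), cents))  # → A
print(predict((178.0, 1.2), cents))  # → B
```

If held-out calls are assigned to the correct receiver more often than chance, the calls carry receiver-specific information — the logic behind the "names" claim, independent of any particular classifier.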

Keyword: Animal Communication; Language
Link ID: 29352 - Posted: 06.11.2024

Ian Sample Science editor Five children who were born deaf now have hearing in both ears after taking part in an “astounding” gene therapy trial that raises hopes for further treatments. The children were unable to hear because of inherited genetic mutations that disrupt the body’s ability to make a protein needed to ensure auditory signals pass seamlessly from the ear to the brain. Doctors at Fudan University in Shanghai treated the children, aged between one and 11, in both ears in the hope they would gain sufficient 3D hearing to take part in conversations and work out which direction sounds were coming from. Within weeks of receiving the therapy, the children had gained hearing, could locate the sources of sounds, and recognised speech in noisy environments. Two of the children were recorded dancing to music, the researchers reported in Nature Medicine. Dr Zheng-Yi Chen, a scientist at Massachusetts Eye and Ear, a Harvard teaching hospital in Boston that co-led the trial, said the results were “astounding”, adding that researchers continued to see the children’s hearing ability “dramatically progress”. The therapy uses an inactive virus to smuggle working copies of the affected gene, Otof, into the inner ear. Once inside, cells in the ear use the new genetic material as a template to churn out working copies of the crucial protein, otoferlin. Video footage of the patients shows a two-year-old boy responding to his name three weeks after the treatment and dancing to music after 13 weeks, having shown no response to either before receiving the injections. © 2024 Guardian News & Media Limited

Keyword: Hearing; Genes & Behavior
Link ID: 29347 - Posted: 06.06.2024

By Gemma Conroy Researchers have developed biodegradable, wireless sensors that can monitor changes in the brain following a head injury or cancer treatment, without invasive surgery. In rats and pigs, the soft sensors performed just as well as conventional wired sensors for up to a month after being injected under the skull. The gel-based sensors measure key health markers, including temperature, pH and pressure. “It is quite likely this technology will be useful for people in medical settings,” says study co-author Yueying Yang, a biomedical engineer at Huazhong University of Science and Technology (HUST) in Wuhan, China. The findings were published today in Nature1. “It’s a very comprehensive study,” says Christopher Reiche, who develops implantable microdevices at the University of Utah in Salt Lake City. For years, scientists have been developing brain sensors that can be implanted inside the skull. But many of these devices rely on wires to transmit data to clinicians. The wires are difficult to insert and remove, and create openings in the skin for viruses and bacteria to enter the body. Wireless sensors offer a solution to this problem, but are thwarted by their limited communication range and relatively large size. Developing sensors that can access and monitor the brain is “extremely difficult”, says Omid Kavehei, a biomedical engineer who specializes in neurotechnology at the University of Sydney in Australia. To overcome these challenges, Yang and her colleagues created a set of 2-millimetre cube-shaped sensors out of hydrogel, a soft, flexible material that’s often used in tissue regeneration and drug delivery. The gel sensors change shape under different temperatures, pressures and pH conditions, and respond to vibrations caused by variations in blood flow in the brain. 
When the sensors are implanted under the skull and scanned with an ultrasound probe — a tool that is already used to image the human brain in clinics — these changes are detectable in the form of ultrasonic waves that pass through the skull. The tiny gel-cubes completely dissolve in saline solution after around four months, and begin to break down in the brain after five weeks. © 2024 Springer Nature Limited

Keyword: Brain Injury/Concussion; Brain imaging
Link ID: 29346 - Posted: 06.06.2024

By George Musser Had you stumbled into a certain New York University auditorium in March 2023, you might have thought you were at a pure neuroscience conference. In fact, it was a workshop on artificial intelligence—but your confusion could have been readily forgiven. Speakers talked about “ablation,” a procedure of creating brain lesions, as commonly done in animal model experiments. They mentioned “probing,” like using electrodes to tap into the brain’s signals. They presented linguistic analyses and cited long-standing debates in psychology over nature versus nurture. Plenty of the hundred or so researchers in attendance probably hadn’t worked with natural brains since dissecting frogs in seventh grade. But their language choices reflected a new milestone for their field: The most advanced AI systems, such as ChatGPT, have come to rival natural brains in size and complexity, and AI researchers are studying them almost as if they were studying a brain in a skull. As part of that, they are drawing on disciplines that traditionally take humans as their sole object of study: psychology, linguistics, philosophy of mind. And in return, their own discoveries have started to carry over to those other fields. These various disciplines now have such closely aligned goals and methods that they could unite into one field, Grace Lindsay, assistant professor of psychology and data science at New York University, argued at the workshop. She proposed calling this merged science “neural systems understanding.” “Honestly, it’s neuroscience that would benefit the most, I think,” Lindsay told her colleagues, noting that neuroscience still lacks a general theory of the brain. “The field that I come from, in my opinion, is not delivering. Neuroscience has been around for over 100 years. I really thought that, when people developed artificial neural systems, they could come to us.” © 2024 Simons Foundation

Keyword: Consciousness; Language
Link ID: 29344 - Posted: 06.06.2024

By Amorina Kingdon Like most humans, I assumed that sound didn’t work well in water. After all, Jacques Cousteau himself called the ocean the “silent world.” I thought, beyond whales, aquatic animals must not use sound much. How wonderfully wrong I was. In water a sound wave travels four and a half times faster, and loses less energy, than in air. It moves farther and faster and carries information better. In the ocean, water exists in layers and swirling masses of slightly different densities, depending on depth, temperature, and saltiness. The physics-astute reader will know that the density of the medium in which sound travels influences its speed. So, as sound waves spread through the sea, their speed changes, causing complex reflection or refraction and bending of the sound waves into “ducts” and “channels.” Under the right circumstances, these ducts and channels can carry sound waves hundreds and even thousands of kilometers. What about other sensory phenomena? Touch and taste work about the same in water as in air. But the chemicals that tend to carry scent move slower in water than in air. And water absorbs light very easily, greatly diminishing visibility. Even away from murky coastal waters, in the clearest seas, light vanishes below several hundred meters and visibility below several dozen. So sound is often the best, if not only, way for ocean and freshwater creatures to signal friends, detect enemies, and monitor the world underwater. And there is much to monitor: Earthquakes, mudslides, and volcanic activity rumble through the oceans, beyond a human’s hearing range. Ice cracks, booms, and scrapes the seafloor. Waves hiss and roar. Raindrops plink. If you listen carefully, you can tell wind speed, rainfall, even drop size, by listening to the ocean as a storm passes. Even snowfall makes a sound. © 2024 NautilusNext Inc.,
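The "four and a half times faster" figure above is easy to check against textbook values (assumed here: roughly 343 m/s for sound in air at 20 °C and roughly 1,500 m/s in seawater; the exact seawater value varies with depth, temperature, and salinity, as the passage notes). A minimal sketch comparing travel times over a fixed distance:

```python
# Textbook speed-of-sound values (assumptions, not from the article):
# ~343 m/s in air at 20 °C, ~1500 m/s in seawater.
SPEED_AIR_M_S = 343.0
SPEED_SEAWATER_M_S = 1500.0

def travel_time_s(distance_m: float, speed_m_s: float) -> float:
    """Seconds for a sound wave to cover distance_m at speed_m_s."""
    return distance_m / speed_m_s

distance_m = 10_000.0  # 10 km
ratio = SPEED_SEAWATER_M_S / SPEED_AIR_M_S
print(f"speed ratio (water/air): {ratio:.1f}x")  # ~4.4x
print(f"10 km in air:   {travel_time_s(distance_m, SPEED_AIR_M_S):.1f} s")
print(f"10 km in water: {travel_time_s(distance_m, SPEED_SEAWATER_M_S):.1f} s")
```

The ratio comes out near 4.4, consistent with the article's "four and a half times"; the lower absorption of sound energy in water, and the ducting effects described above, are separate phenomena not captured by this simple arithmetic.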

Keyword: Animal Communication; Sexual Behavior
Link ID: 29341 - Posted: 06.04.2024

By Sumeet Kulkarni As spring turns to summer in the United States, warming conditions have started to summon enormous numbers of red-eyed periodical cicadas out of their holes in the soil across the east of the country. This year sees an exceptionally rare joint emergence of two cicada broods: one that surfaces every 13 years and another with a 17-year cycle. They last emerged together in 1803, when Thomas Jefferson was US president. This year, billions or even trillions of cicadas from these two broods — each including multiple species of the genus Magicicada — are expected to swarm forests, fields and urban neighbourhoods. To answer readers’ cicada questions, Nature sought help from three researchers. Katie Dana is an entomologist affiliated with the Illinois Natural History Survey at the University of Illinois at Urbana-Champaign. John Lill is an insect ecologist at George Washington University in Washington DC. Fatima Husain is a cognitive neuroscientist at the University of Illinois at Urbana-Champaign. Their answers have been edited for length and clarity. Why do periodical cicadas have red eyes? JL: We’re not really sure. We do know that cicadas’ eyes turn red in the winter before the insects come out. The whole coloration pattern in periodical cicadas is very bright: red eyes, black and orange wings. They’re quite different from the annual cicadas, which are green and black, and more camouflaged. It’s a bit of an enigma why the periodical ones are so brightly coloured, given that it just makes them more obvious to predators. There are no associated defences with being brightly coloured — it kind of flies in the face of what we know about bright coloration in a lot of other animals, where usually it’s some kind of signal for toxicity. There also exist mutants with brown, orange, golden or even blue eyes. People hunt for blue-eyed ones; it’s like trying to find a four-leaf clover. © 2024 Springer Nature Limited

Keyword: Animal Communication; Sexual Behavior
Link ID: 29339 - Posted: 06.04.2024

Sacha Pfeiffer A few weeks ago, at about 6:45 in the morning, I was at home, waiting to talk live on the air with Morning Edition host Michel Martin about a story I'd done, when I suddenly heard a loud metallic hammering. It sounded like a machine was vibrating my house. It happened again about 15 seconds later. And again after that. This rhythmic clatter seemed to be coming from my basement utility closet. Was my furnace breaking? Or my water heater? I worried that it might happen while I was on the air. Luckily, the noise stopped while I spoke with Michel, but restarted later. This time I heard another sound, a warbling or trilling, possibly inside my chimney. Was there an animal in there? I ran outside, looked up at my roof — and saw a woodpecker drilling away at my metal chimney cap. I've seen and heard plenty of woodpeckers hammer on trees. But never on metal. So to find out why the bird was doing this, I called an expert: Kevin McGowan, an ornithologist at the Cornell Lab of Ornithology who recently created a course called "The Wonderful World of Woodpeckers." McGowan said woodpeckers batter wood to find food, make a home, mark territory and attract a mate. But when they bash away at metal, "what the birds are trying to do is make as big a noise as possible," he said, "and a number of these guys have found that — you know what? If you hammer on metal, it's really loud!" Woodpeckers primarily do this during the springtime breeding season, and their metallic racket has two purposes, "basically summarized as: All other guys stay away, all the girls come to me," McGowan said. "And the bigger the noise, the better." © 2024 npr

Keyword: Sexual Behavior; Animal Communication
Link ID: 29333 - Posted: 06.02.2024

By Liqun Luo The brain is complex; in humans it consists of about 100 billion neurons, making on the order of 100 trillion connections. It is often compared with another complex system that has enormous problem-solving power: the digital computer. Both the brain and the computer contain a large number of elementary units—neurons and transistors, respectively—that are wired into complex circuits to process information conveyed by electrical signals. At a global level, the architectures of the brain and the computer resemble each other, consisting of largely separate circuits for input, output, central processing, and memory.1 Which has more problem-solving power—the brain or the computer? Given the rapid advances in computer technology in the past decades, you might think that the computer has the edge. Indeed, computers have been built and programmed to defeat human masters in complex games, such as chess in the 1990s and recently Go, as well as encyclopedic knowledge contests, such as the TV show Jeopardy! As of this writing, however, humans triumph over computers in numerous real-world tasks—ranging from identifying a bicycle or a particular pedestrian on a crowded city street to reaching for a cup of tea and moving it smoothly to one’s lips—let alone conceptualization and creativity. So why is the computer good at certain tasks whereas the brain is better at others? Comparing the computer and the brain has been instructive to both computer engineers and neuroscientists. This comparison started at the dawn of the modern computer era, in a small but profound book entitled The Computer and the Brain, by John von Neumann, a polymath who in the 1940s pioneered the design of a computer architecture that is still the basis of most modern computers today.2 Let’s look at some of these comparisons in numbers (Table 1). © 2024 NautilusNext Inc.,
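The scale quoted above can be put in perspective with one line of arithmetic, using only the figures given in the passage (~100 billion neurons, ~100 trillion connections):

```python
# Figures quoted in the passage.
NEURONS = 100e9        # ~100 billion neurons
CONNECTIONS = 100e12   # ~100 trillion synaptic connections

# Average fan-out implied by those two numbers.
connections_per_neuron = CONNECTIONS / NEURONS
print(f"average connections per neuron: {connections_per_neuron:,.0f}")  # 1,000
```

That thousand-fold fan-out per unit is one of the architectural differences from a digital computer, where a transistor typically connects to only a handful of others.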

Keyword: Stroke
Link ID: 29331 - Posted: 05.29.2024

By Amanda Heidt For the first time, a brain implant has helped a bilingual person who is unable to articulate words to communicate in both of his languages. An artificial-intelligence (AI) system coupled to the brain implant decodes, in real time, what the individual is trying to say in either Spanish or English. The findings1, published on 20 May in Nature Biomedical Engineering, provide insights into how our brains process language, and could one day lead to long-lasting devices capable of restoring multilingual speech to people who can’t communicate verbally. “This new study is an important contribution for the emerging field of speech-restoration neuroprostheses,” says Sergey Stavisky, a neuroscientist at the University of California, Davis, who was not involved in the study. Even though the study included only one participant and more work remains to be done, “there’s every reason to think that this strategy will work with higher accuracy in the future when combined with other recent advances”, Stavisky says. The person at the heart of the study, who goes by the nickname Pancho, had a stroke at age 20 that paralysed much of his body. As a result, he can moan and grunt but cannot speak clearly. In his thirties, Pancho partnered with Edward Chang, a neurosurgeon at the University of California, San Francisco, to investigate the stroke’s lasting effects on his brain. In a groundbreaking study published in 20212, Chang’s team surgically implanted electrodes on Pancho’s cortex to record neural activity, which was translated into words on a screen. Pancho’s first sentence — ‘My family is outside’ — was interpreted in English. But Pancho is a native Spanish speaker who learnt English only after his stroke. It’s Spanish that still evokes in him feelings of familiarity and belonging. “What languages someone speaks are actually very linked to their identity,” Chang says. 
“And so our long-term goal has never been just about replacing words, but about restoring connection for people.” © 2024 Springer Nature Limited

Keyword: Language; Robotics
Link ID: 29321 - Posted: 05.23.2024

By Emily Anthes Half a century ago, one of the hottest questions in science was whether humans could teach animals to talk. Scientists tried using sign language to converse with apes and trained parrots to deploy growing English vocabularies. The work quickly attracted media attention — and controversy. The research lacked rigor, critics argued, and what seemed like animal communication could simply have been wishful thinking, with researchers unconsciously cuing their animals to respond in certain ways. In the late 1970s and early 1980s, the research fell out of favor. “The whole field completely disintegrated,” said Irene Pepperberg, a comparative cognition researcher at Boston University, who became known for her work with an African gray parrot named Alex. Today, advances in technology and a growing appreciation for the sophistication of animal minds have renewed interest in finding ways to bridge the species divide. Pet owners are teaching their dogs to press “talking buttons” and zoos are training their apes to use touch screens. In a cautious new paper, a team of scientists outlines a framework for evaluating whether such tools might give animals new ways to express themselves. The research is designed “to rise above some of the things that have been controversial in the past,” said Jennifer Cunha, a visiting research associate at Indiana University. The paper, which is being presented at a science conference on Tuesday, focuses on Ms. Cunha’s parrot, an 11-year-old Goffin’s cockatoo named Ellie. Since 2019, Ms. Cunha has been teaching Ellie to use an interactive “speech board,” a tablet-based app that contains more than 200 illustrated icons, corresponding to words and phrases including “sunflower seeds,” “happy” and “I feel hot.” When Ellie presses on an icon with her tongue, a computerized voice speaks the word or phrase aloud. In the new study, Ms. 
Cunha and her colleagues did not set out to determine whether Ellie’s use of the speech board amounted to communication. Instead, they used quantitative, computational methods to analyze Ellie’s icon presses to learn more about whether the speech board had what they called “expressive and enrichment potential.” © 2024 The New York Times Company

Keyword: Language; Epilepsy
Link ID: 29306 - Posted: 05.14.2024

By Miryam Naddaf Scientists have developed brain implants that can decode internal speech — identifying words that two people spoke in their minds without moving their lips or making a sound. Although the technology is at an early stage — it was shown to work with only a handful of words, and not phrases or sentences — it could have clinical applications in future. Similar brain–computer interface (BCI) devices, which translate signals in the brain into text, have reached speeds of 62–78 words per minute for some people. But these technologies were trained to interpret speech that is at least partly vocalized or mimed. The latest study — published in Nature Human Behaviour on 13 May1 — is the first to decode words spoken entirely internally, by recording signals from individual neurons in the brain in real time. “It's probably the most advanced study so far on decoding imagined speech,” says Silvia Marchesotti, a neuroengineer at the University of Geneva, Switzerland. “This technology would be particularly useful for people that have no means of movement any more,” says study co-author Sarah Wandelt, a neural engineer who was at the California Institute of Technology in Pasadena at the time the research was done. “For instance, we can think about a condition like locked-in syndrome.” The researchers implanted arrays of tiny electrodes in the brains of two people with spinal-cord injuries. They placed the devices in the supramarginal gyrus (SMG), a region of the brain that had not been previously explored in speech-decoding BCIs. © 2024 Springer Nature Limited

Keyword: Brain imaging; Language
Link ID: 29302 - Posted: 05.14.2024

By Elizabeth Anne Brown The beluga whale wears its heart on its sleeve — or rather, its forehead. Researchers have created a visual encyclopedia of the different expressions that belugas (Delphinapterus leucas) in captivity seem to make with their highly mobile “melon,” a squishy deposit of fat on the forehead that helps direct sound waves for echolocation. Using muscles and connective tissue, belugas can extend the melon forward until it juts over their lips like the bill of a cap; mush it down until it’s flattened against their skull; lift it vertically to create an impressive fleshy top hat; and shake it with such force that it jiggles like Jell-O. “If that doesn’t scream ‘pay attention to me,’ I don’t know what does,” says animal behaviorist Justin Richard of the University of Rhode Island in Kingston. “It’s like watching a peacock spread their feathers.” Before Richard became a scientist, he spent a decade as a beluga trainer at the Mystic Aquarium in Connecticut, working closely with the enigmatic animals. “Even as a trainer, I knew the shapes meant something,” Richard says. “But nobody had been able to put together enough observations to make sense of it.” Over the course of a year, from 2014 to 2015, Richard and colleagues recorded interactions between four belugas at the Mystic Aquarium. Analyzing the footage revealed that the belugas make five distinct melon shapes the scientists dubbed flat, lift, press, push and shake. The belugas sported an average of nearly two shapes per minute during social interaction, the team reports March 2 in Animal Cognition. © Society for Science & the Public 2000–2024

Keyword: Animal Communication; Evolution
Link ID: 29291 - Posted: 05.03.2024

By Claire Cameron On Aug. 19, 2021, a humpback whale named Twain whupped back. Specifically, Twain made a series of humpback whale calls known as “whups” in response to playback recordings of whups from a boat of researchers off the coast of Alaska. The whale and the playback exchanged calls 36 times. On the boat was naturalist Fred Sharpe of the Alaska Whale Foundation, who has been studying humpbacks for over two decades, and animal behavior researcher Brenda McCowan, a professor at the University of California, Davis. The exchange was groundbreaking, Sharpe says, because it brought two linguistic beings—humans and humpback whales—together. “You start getting the sense that there’s this mutual sense of being heard.” In their 2023 published results, McCowan, Sharpe, and their coauthors are careful not to characterize their exchange with Twain as a conversation. They write, “Twain was actively engaged in a type of vocal coordination” with the playback recordings. To the paper’s authors, the interspecies exchange could be a model for perhaps something even more remarkable: an exchange with an extraterrestrial intelligence. Sharpe and McCowan are members of Whale SETI, a team of scientists at the SETI Institute, which has been scanning the skies for decades, listening for signals that may be indicative of extraterrestrial life. The Whale SETI team seeks to show that animal communication, and particularly, complex animal vocalizations like those of humpback whales, can provide scientists with a model to help detect and decipher a message from an extraterrestrial intelligence. And, while they’ve been trying to communicate with whales for years, this latest reported encounter was the first time the whales talked back. It all might sound far-fetched. But then again, Laurance Doyle, an astrophysicist who founded the Whale SETI team and has been part of the SETI Institute since 1987, is accustomed to being doubted by the mainstream science community.
© 2024 NautilusNext Inc.

Keyword: Animal Communication; Language
Link ID: 29276 - Posted: 04.30.2024

By Saima May Sidik In 2010, Theresa Chaklos was diagnosed with chronic lymphocytic leukaemia — the first in a series of ailments that she has had to deal with since. She’d always been an independent person, living alone and supporting herself as a family-law facilitator in the Washington DC court system. But after illness hit, her independence turned into loneliness. Loneliness, in turn, exacerbated Chaklos’s physical condition. “I dropped 15 pounds in less than a week because I wasn’t eating,” she says. “I was so miserable, I just would not get up.” Fortunately a co-worker convinced her to ask her friends to help out, and her mood began to lift. “It’s a great feeling” to know that other people are willing to show up, she says. Many people can’t break out of a bout of loneliness so easily. And when acute loneliness becomes chronic, the health effects can be far-reaching. Chronic loneliness can be as detrimental as obesity, physical inactivity and smoking, according to a report by Vivek Murthy, the US surgeon general. Depression, dementia, cardiovascular disease1 and even early death2 have all been linked to the condition. Worldwide, around one-quarter of adults feel very or fairly lonely, according to a 2023 poll conducted by the social-media firm Meta, the polling company Gallup and a group of academic advisers. That same year, the World Health Organization launched a campaign to address loneliness, which it called a “pressing health threat”. But why does feeling alone lead to poor health? Over the past few years, scientists have begun to reveal the neural mechanisms that cause the human body to unravel when social needs go unmet. The field “seems to be expanding quite significantly”, says cognitive neuroscientist Nathan Spreng at McGill University in Montreal, Canada. And although the picture is far from complete, early results suggest that loneliness might alter many aspects of the brain, from its volume to the connections between neurons.

Keyword: Stress
Link ID: 29245 - Posted: 04.06.2024

By Emily Makowski I spend my days surrounded by thousands of written words, and sometimes I feel as though there’s no escape. That may not seem particularly unusual. Plenty of people have similar feelings. But no, I’m not just talking about my job as a copy editor here at Scientific American, where I edit and fact-check an endless stream of science writing. This constant flow of text is all in my head. My brain automatically translates spoken words into written ones in my mind’s eye. I “see” subtitles that I can’t turn off whenever I talk or hear someone else talking. This same speech-to-text conversion even happens for the inner dialogue of my thoughts. This mental closed-captioning has accompanied me since late toddlerhood, almost as far back as my earliest childhood memories. And for a long time, I thought that everyone could “read” spoken words in their head the way I do. What I experience goes by the name of ticker-tape synesthesia. It is not a medical condition—it’s just a distinctive way of perceiving the surrounding world that relatively few people share. Not much is known about the neurophysiology or psychology of this phenomenon, sometimes called “ticker taping,” even though a reference to it first appeared in the scientific literature in the late 19th century. Ticker taping is considered a form of synesthesia, an experience in which the brain reroutes one kind of incoming sensory information so that it is processed as another. For example, sounds might be perceived as touch, allowing the affected person to “feel” them as tactile sensations. As synesthesia goes, ticker taping is relatively uncommon. “There are varieties of synesthesia which really have just been completely under the radar..., and ticker tape is really one of those,” says Mark Price, a cognitive psychologist at the University of Bergen in Norway. The name “ticker-tape synesthesia” itself evokes the concept’s late 19th-century origins. At that time, stock prices transmitted by telegraph were printed on long paper strips, which would be torn into tiny bits and thrown from building windows during parades. © 2024 SCIENTIFIC AMERICAN

Keyword: Attention; Language
Link ID: 29238 - Posted: 04.04.2024

By Ian Sample, Science editor Dogs understand what certain words stand for, according to researchers who monitored the brain activity of willing pooches while they were shown balls, slippers, leashes and other highlights of the domestic canine world. The finding suggests that the dog brain can reach beyond commands such as “sit” and “fetch”, and the frenzy-inducing “walkies”, to grasp the essence of nouns, or at least those that refer to items the animals care about. “I think the capacity is there in all dogs,” said Marianna Boros, who helped arrange the experiments at Eötvös Loránd University in Hungary. “This changes our understanding of language evolution and our sense of what is uniquely human.” Scientists have long been fascinated by whether dogs can truly learn the meanings of words and have built up some evidence to back the suspicion. A survey in 2022 found that dog owners believed their furry companions responded to between 15 and 215 words. More direct evidence for canine cognitive prowess came in 2011, when psychologists in South Carolina reported that after three years of intensive training, a border collie called Chaser had learned the names of more than 1,000 objects, including 800 cloth toys, 116 balls and 26 Frisbees. However, studies have said little about what is happening in the canine brain when it processes words. To delve into the mystery, Boros and her colleagues invited 18 dog owners to bring their pets to the laboratory along with five objects the animals knew well. These included balls, slippers, Frisbees, rubber toys, leads and other items. At the lab, the owners were instructed to say words for objects before showing their dog either the correct item or a different one. For example, an owner might say “Look, here’s the ball”, but hold up a Frisbee instead. The experiments were repeated multiple times with matching and non-matching objects. © 2024 Guardian News & Media Limited

Keyword: Language; Learning & Memory
Link ID: 29214 - Posted: 03.26.2024