Chapter 19. Language and Lateralization




By Jennifer Szalai “‘R’s’ are hard,” John Hendrickson writes in his new memoir, “Life on Delay: Making Peace With a Stutter,” committing to paper a string of words that would have caused him trouble had he tried to say them out loud. In November 2019, Hendrickson, an editor at The Atlantic, published an article about then-presidential candidate Joe Biden, who talked frequently about “beating” his childhood stutter — a bit of hyperbole that the article finally laid to rest. Biden insisted on his redemptive narrative, even though Hendrickson, who has stuttered since he was 4, could tell when Biden repeated (“I-I-I-I-I”) or blocked (“…”) on certain sounds. The article went viral, putting Hendrickson in the position of being invited to go on television — a “nightmare,” he said on MSNBC at the time, though it did lead to a flood of letters from fellow stutterers, a number of whom he interviewed for this book. “Life on Delay” traces an arc from frustration and isolation to acceptance and community, recounting a lifetime of bullying and well-meaning but ineffectual interventions and what Hendrickson calls “hundreds of awful first impressions.” When he depicts scenes from his childhood it’s often in a real-time present tense, putting us in the room with the boy he was, more than two decades before. Hendrickson also interviews people: experts, therapists, stutterers, his own parents. He calls up his kindergarten teacher, his childhood best friend and the actress Emily Blunt. 
He reaches out to others who have published personal accounts of stuttering, including The New Yorker’s Nathan Heller and Katharine Preston, the author of a memoir titled “Out With It.” We learn that it’s only been since the turn of the millennium or so that stuttering has been understood as a neurological disorder; that for 75 percent of children who stutter, “the issue won’t follow them to adulthood”; that there’s still disagreement over whether “disfluency” is a matter of language or motor control, because “the research is still a bit of a mess.” © 2023 The New York Times Company

Keyword: Language; Attention
Link ID: 28643 - Posted: 01.27.2023

By Darren Incorvaia The great apes do not have spoken language, but they share many gestures. Can humans like you understand those gestures too? Watch this short video and test your ability to read chimpanzee body language. What is this chimpanzee (the one scratching its arm) asking the other one to do? © 2023 The New York Times Company

Keyword: Animal Communication; Evolution
Link ID: 28640 - Posted: 01.25.2023

Holly Else An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December [1]. Researchers are divided over the implications for science. “I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds. The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use. Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint [2] and an editorial [3] written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them. The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts. © 2023 Springer Nature Limited

Keyword: Language; Intelligence
Link ID: 28629 - Posted: 01.14.2023

By Carolyn Wilke Mammals in the ocean swim through a world of sound. But in recent decades, humans have been cranking up the volume, blasting waters with noise from shipping, oil and gas exploration and military operations. New research suggests that such anthropogenic noise may make it harder for dolphins to communicate and work together. When dolphins cooperated on a task in a noisy environment, the animals were not so different from city dwellers on land trying to be heard over a din of jackhammers and ambulance sirens. They yelled, calling louder and longer, researchers reported Thursday in the journal Current Biology. “Even then, there’s a dramatic increase in how often they fail to coordinate,” said Shane Gero, a whale biologist at Carleton University in Ottawa who wasn’t part of the work. The effect of increasing noise was “remarkably clear.” Scientists worked with a dolphin duo, males named Delta and Reese, at an experimental lagoon at the Dolphin Research Center in the Florida Keys. The pair were trained to swim to different spots in their enclosure and push a button within one second of each other. “They’ve always been the most motivated animals. They were really excited about doing the task,” said Pernille Sørensen, a biologist and Ph.D. candidate at the University of Bristol in England. The dolphins talked to each other using whistles and often whistled right before pressing the button, she said. Ms. Sørensen’s team piped in sounds using underwater speakers. Tags, stuck behind the animals’ blowholes, captured what the dolphins heard and called to each other as well as their movements. Through 200 trials with five different sound environments, the team observed how the dolphins changed their behavior to compensate for loud noise. The cetaceans turned their bodies toward each other and paid greater attention to each other’s location. 
At times, they nearly doubled the length of their calls and amplified their whistles, in a sense shouting, to be heard above cacophonies of white noise or a recording of a pressure washer. © 2023 The New York Times Company

Keyword: Animal Communication; Hearing
Link ID: 28628 - Posted: 01.14.2023

Xiaofan Lei What comes to mind when you think of someone who stutters? Is that person male or female? Are they weak and nervous, or powerful and heroic? If you have a choice, would you like to marry them, introduce them to your friends or recommend them for a job? Your attitudes toward people who stutter may depend partly on what you think causes stuttering. If you think that stuttering is due to psychological causes, such as being nervous, research suggests that you are more likely to distance yourself from those who stutter and view them more negatively. I am a person who stutters and a doctoral candidate in speech, language and hearing sciences. Growing up, I tried my best to hide my stuttering and to pass as fluent. I avoided sounds and words that I might stutter on. I avoided ordering the dishes I wanted to eat at the school cafeteria to avoid stuttering. I asked my teacher to not call on me in class because I didn’t want to deal with the laughter from my classmates when they heard my stutter. Those experiences motivated me to investigate stuttering so that I can help people who stutter, including myself, to better cope with the condition. In writing about what the scientific field has to say about stuttering and its biological causes, I hope I can reduce the stigma and misunderstanding surrounding the disorder. The most recognizable characteristics of developmental stuttering are the repetitions, prolongations and blocks in people’s speech. People who stutter may also experience muscle tension during speech and exhibit secondary behaviors, such as tics and grimaces. © 2010–2023, The Conversation US, Inc.

Keyword: Language
Link ID: 28626 - Posted: 01.12.2023

By Ken Belson SHICKLEY, Neb. — Chris Eitzmann seemed to excel at everything until he didn’t. He parlayed a Harvard football captaincy into an invite in 2000 to Patriots training camp. After bouncing around the N.F.L., Eitzmann retired from pro football in 2002, got an M.B.A. from Dartmouth and worked at several big financial firms in Boston, where he and his wife, Mikaela, had four children. By 2015, however, Chris began a descent that has become familiar to former football players afflicted with C.T.E., or chronic traumatic encephalopathy, the degenerative brain disease associated with repeated blows to the head. Chris had loved mountain biking, running and lifting weights, but he quit exercising and drank to excess. After a move to Mikaela’s family farm back in their home state of Nebraska two years later, Chris’s behavior became more alarming. He would disappear for long stretches of the day and neglect his work. His drinking got worse, and she said he would sometimes drive drunk. In December 2021, Chris Eitzmann was found dead in his Boston apartment of alcohol poisoning at 44. Almost a year later, doctors at Boston University found that he had C.T.E., a disease that can still only be diagnosed posthumously. Mikaela said that knowing whether her husband had the disease while he was alive would have markedly changed the final years of his life. “If he had known that it really was something, and not just this endless vacuum of not knowing, if he had an idea that he could have grabbed on to, that clarity and understanding would have been so valuable,” she said. Without treatment options, a C.T.E. diagnosis could provide only clarity for former players such as Eitzmann who have reason to believe they may be affected. But it could eventually help current players make risk assessments about when to give up tackle football and help former players seek treatment. © 2022 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 28561 - Posted: 11.19.2022

By Laura Sanders SAN DIEGO — Scientists have devised ways to “read” words directly from brains. Brain implants can translate internal speech into external signals, permitting communication from people with paralysis or other diseases that steal their ability to talk or type. New results from two studies, presented November 13 at the annual meeting of the Society for Neuroscience, “provide additional evidence of the extraordinary potential” that brain implants have for restoring lost communication, says neuroscientist and neurocritical care physician Leigh Hochberg. Some people who need help communicating can currently use devices that require small movements, such as eye gaze changes. Those tasks aren’t possible for everyone. So the new studies targeted internal speech, which requires a person to do nothing more than think. “Our device predicts internal speech directly, allowing the patient to just focus on saying a word inside their head and transform it into text,” says Sarah Wandelt, a neuroscientist at Caltech. Internal speech “could be much simpler and more intuitive than requiring the patient to spell out words or mouth them.” Neural signals associated with words are detected by electrodes implanted in the brain. The signals can then be translated into text, which can be made audible by computer programs that generate speech. That approach is “really exciting, and reinforces the power of bringing together fundamental neuroscience, neuroengineering and machine learning approaches for the restoration of communication and mobility,” says Hochberg, of Massachusetts General Hospital and Harvard Medical School in Boston, and Brown University in Providence, R.I. © Society for Science & the Public 2000–2022.

Keyword: Brain imaging; Language
Link ID: 28556 - Posted: 11.16.2022

By Alejandro Portilla Navarro Dawn breaks in San Jose, the capital of Costa Rica. The city is still asleep, but the early risers are greeted by a beautiful symphony: Hummingbirds, corn-eaters, yigüirros (clay-colored thrushes), yellow-breasted grosbeaks, blue tanagers, house wrens, warblers and other birds announce that a new day has arrived. Soon the incessant noise of vehicles and their horns, construction, street vendors and more take over, shaping the soundscape of the frenetic routine of hundreds of thousands of people who travel and live in this city. Then, the birds’ songs will slip into the background. “The act of birdsong has two main functions in males: It is to attract females and also to defend their territory from other males,” says Luis Andrés Sandoval Vargas, an ornithologist at the University of Costa Rica. For females in the tropics, he adds, the primary role of their song is to defend territory. Thus, in order to communicate in cities, to keep their territory safe and find mates, birds must find ways to counteract the effects of anthropogenic noise — that is, the noise produced by humans. “The main effect of urban development on song is that many birds sing at higher frequencies,” says Sandoval Vargas. Studies over the past 15 years have found, for example, that blackbirds (Turdus merula), great tits (Parus major) and rufous-collared sparrows (Zonotrichia capensis) sing at higher pitches, with higher minimum frequencies, in urban environments than in rural ones. But the birds’ response to anthropogenic noise may be more complex than that, as Sandoval Vargas found when studying house wrens (Troglodytes aedon). House wrens are small, brown birds — about 10 centimeters tall and weighing 12 grams — that feed on insects and tend to live near humans. In Costa Rica, they are found almost everywhere, but are especially abundant in the cities. 
“Males sing almost year-round and sing for many hours during the day, and much of their behavior is mediated by vocalizations,” explains Sandoval Vargas. But what makes them ideal for studying adaptations to urban environments is that most of the components of their song are within the same frequency range as the noise that we humans produce. © 2022 Annual Reviews

Keyword: Animal Communication; Evolution
Link ID: 28553 - Posted: 11.16.2022

By Ken Belson AMSTERDAM — For the first time since 2016, one of the most influential groups guiding doctors, trainers and sports leagues on concussions met last month to decide, among other things, if it was time to recognize the causal relationship between repeated head hits and the degenerative brain disease known as C.T.E. Despite mounting evidence and a highly regarded U.S. government agency recently acknowledging the link, the group all but decided it was not. Leaders of the International Consensus Conference on Concussion in Sport, meeting in Amsterdam, signaled that the group would continue its long practice of casting doubt on the connection between the ravages of head trauma and sports. C.T.E., or chronic traumatic encephalopathy, was first identified in boxers in 1928 and burst into prominence in 2005, when scientists published their posthumous diagnosis of the disease in the N.F.L. Hall of Fame center Mike Webster, creating an existential crisis for sports such as football and rugby that involve players hitting their heads thousands of times a year. Scientists have spent the past decade analyzing hundreds of brains from athletes and military veterans, and the variable evident in nearly every case of C.T.E. has been their exposure to repeated head trauma. Researchers have also established what they call a dose response between the severity of the C.T.E. and the number of years playing collision sports. After playing down an association between head injuries and brain damage for years, the N.F.L. in 2016 acknowledged that there was a link between football and degenerative brain disorders such as C.T.E.
Just days before the conference in Amsterdam, the National Institutes of Health, the biggest funder of brain research in the United States, said that C.T.E. “is caused in part by repeated traumatic brain injuries.” But in one of the final sessions of the three-day conference, one of its leaders, a neuropsychologist who has received $1.5 million in research funding from the N.F.L., dismissed the work of scientists who have documented C.T.E. in hundreds of athletes and soldiers because he said their studies thus far did not account for other health variables, including heart disease, diabetes and substance abuse. © 2022 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 28543 - Posted: 11.09.2022

By David Grimm “Whooo’s a good boy?” “Whooo’s a pretty kitty?” When it comes to communicating with our pets, most of us can’t help but talk to them like babies. We pitch our voices high, extend our vowels, and ask short, repetitive questions. Dogs seem to like this. They’re far more likely to pay attention to us when we use this “caregiver speech,” research has shown. Now, scientists have found the same is true for cats, though only when their owner is talking. The work adds evidence that cats—like dogs—may bond with us in some of the same ways infants do. “It’s a fascinating study,” says Kristyn Vitale, an animal behaviorist and expert on cat cognition at Unity College, who was not involved with the work. “It further supports the idea that our cats are always listening to us.” Charlotte de Mouzon had a practical reason for getting into this line of research. An ethologist at Paris Nanterre University, she had previously been a cat behaviorist, consulting with owners on how to solve everything from litter box problems to aggressive behavior. “Sometimes people would ask me, ‘What’s the scientific evidence behind your approaches?’” she says. “I was frustrated that there were no studies being done on cat behavior in France.” So, she began a Ph.D. and was soon studying cat-human communication. As a first step, de Mouzon confirmed what most cat owners already know: We dip into “baby talk” when we address our feline friends–a habit de Mouzon is guilty of herself. “What’s up, my little ones?” she finds herself asking in a high-pitched voice when greeting her two kitties, Mila and Shere Khan. But do cats, like dogs, actually respond more to this “cat-directed speech”? To find out, de Mouzon recruited 16 cats and their owners—students at the Alfort National Veterinary School just outside of Paris. 
Because cats can be challenging to work with, de Mouzon studied them on feline-friendly turf, converting a common room in the students’ dormitory into a makeshift animal behavior lab filled with toys, a litter box, and places to hide.

Keyword: Animal Communication; Language
Link ID: 28525 - Posted: 10.26.2022

By Lisa Sanders, M.D. “What just happened?” The 16-year-old girl’s voice was flat and tired. “I think you had a seizure,” her mother answered. Her daughter had asked to be taken to the pediatrician’s office because she hadn’t felt right for the past several weeks — not since she had what looked like a seizure at school. And now she’d had another. “You’re OK now,” the mother continued. “It’s good news because it means that maybe we finally figured out what’s going on.” To most people, that might have been a stretch — to call having a seizure good news. But for the past several years, the young woman had been plagued by headaches, episodes of dizziness and odd bouts of profound fatigue, and her mother embraced the possibility of a treatable disorder. The specialists she had taken her daughter to see attributed her collection of symptoms to the lingering effect of the many concussions she suffered playing sports. She had at least one concussion every year since she was in the fourth grade. Because of her frequent head injuries, her parents made her drop all her sports. Even when not on the playing field, the young woman continued to fall and hit her head. The headaches and other symptoms persisted long after each injury. She saw several specialists who agreed that she had what was called persistent post-concussive syndrome — symptoms caused either by a severe brain injury or, in her case, repeated mild injuries. She should get better with time and patience, the girl and her mother were told. And yet her head pounded and she retreated to her darkened room several times a week. She did everything her doctors suggested: She got plenty of sleep, rested when she was tired and tried to be patient. But she still got headaches, still got dizzy. She found it harder and harder to pay attention. For the past couple of years, it had even started to affect her grades. © 2022 The New York Times Company

Keyword: Epilepsy; Attention
Link ID: 28514 - Posted: 10.15.2022

By Laura Sanders In a football game on September 25, Miami Dolphins quarterback Tua Tagovailoa got the pass off but he got knocked down. Fans watched him shake his head and stumble to the ground as he tried to jog it off. After a medical check, he went back into the game against the Buffalo Bills with what his coach later said was a back injury. Four days later, in a game against the Cincinnati Bengals, Tagovailoa, 24, got hit again. This time, he left the field on a stretcher with what was later diagnosed as a concussion. Many observers suspect that the first hit — given Tagovailoa’s subsequent headshaking and wobbliness — left the athlete with a concussion, also called a mild traumatic brain injury. If those were indeed signs of a head injury, that first hit may have lined him up for an even worse brain injury just days later. “The science tells us that yes, a person who is still recovering from a concussion is at an elevated risk for sustaining another concussion,” says Kristen Dams-O’Connor, a neuropsychologist and director of the Brain Injury Research Center at the Icahn School of Medicine at Mount Sinai in New York City. As one example, a concussion roughly doubled the chance of a second one among young Swedish men, researchers reported in 2013 in the British Medical Journal. “This, I think, was avoidable,” Dams-O’Connor says of Tagovailoa’s brain injury in the game against the Bengals. After a hit to the head, when the soft brain hits the unyielding skull, the injury kicks off a cascade of changes. Some nerve cells become overactive, inflammation sets in, and blood flow is altered. These downstream events in the brain — and how they relate to concussion symptoms — can happen over hours and days, and are not easy to quickly measure, Dams-O’Connor says. © Society for Science & the Public 2000–2022.

Keyword: Brain Injury/Concussion
Link ID: 28506 - Posted: 10.08.2022

By Dan Diamond A high-profile NFL injury has put the spotlight back on football’s persistent concussions, which are linked to head trauma and a variety of long-lasting symptoms, and can be worsened by rushing back to physical activity. Miami Dolphins quarterback Tua Tagovailoa, who appeared to suffer head trauma in a game Sunday afternoon that was later described as a back injury, was diagnosed with a concussion Thursday night following a tackle. After Tagovailoa’s head hit the turf, he remained on the ground and held his arms and fingers splayed in front of his face — which experts said evoked conditions known as “decorticate posturing” or “fencing response,” where brain damage triggers the involuntary reaction. “It’s a potentially life-threatening brain injury,” said Chris Nowinski, a neuroscientist and co-founder of the Concussion Legacy Foundation, a nonprofit group focused on concussion research and prevention, adding that he worried about Tagovailoa’s long-term prognosis, given that it can take months or years for an athlete to fully recover from repeated concussions. Nowinski said he was particularly concerned about situations where people suffer two concussions within a short period — a condition sometimes known as second impact syndrome — which can lead to brain swelling and other persistent problems. “That’s why we should at least be cautious with the easy stuff, like withholding players with a concussion from the game and letting their brain recover,” Nowinski said. The Dolphins said Tagovailoa had movement in all of his extremities and had been discharged Thursday night from University of Cincinnati Medical Center. The NFL’s top health official said in an interview on Friday that he was worried about Tagovailoa’s health, and pointed to a joint review the league and its players association was conducting into the Dolphins’ handling of the quarterback’s initial injury on Sunday.

Keyword: Brain Injury/Concussion
Link ID: 28501 - Posted: 10.05.2022

By Darren Incorvaia Songbirds get a lot of love for their dulcet tones, but drummers may start to steal some of that spotlight. Woodpeckers, which don’t sing but do drum on trees, have brain regions that are similar to those of songbirds, researchers report September 20 in PLOS Biology. The finding is surprising because songbirds use these regions to learn their songs at an early age, yet it’s not clear if woodpeckers learn their drum beats (SN: 9/16/21). Whether woodpeckers do or not, the result suggests a shared evolutionary origin for both singing and drumming. The ability to learn vocalizations by listening to them, just like humans do when learning to speak, is a rare trait in the animal kingdom. Vocal learners, such as songbirds, hummingbirds and parrots, have independently evolved certain clusters of nerve cells called nuclei in their forebrains that control the ability. Animals that don’t learn vocally are thought to lack these brain features. While it’s commonly assumed that other birds don’t have these nuclei, “there’s thousands of birds in the world,” says Matthew Fuxjager, a biologist at Brown University in Providence, R.I. “While we say these brain regions only exist in these small groups of species, nobody’s really looked in a lot of these other taxa.” Fuxjager and his colleagues examined the noggins of several birds that don’t learn vocally to check if they really did lack these brain nuclei. Using molecular probes, the team checked the bird brains for activity of a gene called parvalbumin, a known marker of the vocal learning nuclei. Many of the birds, including penguins and flamingos, came up short, but there was one exception — male and female woodpeckers, which had three spots in their brains with high parvalbumin activity. © Society for Science & the Public 2000–2022.

Keyword: Animal Communication; Language
Link ID: 28486 - Posted: 09.21.2022

By Jonathan Moens An artificial intelligence can decode words and sentences from brain activity with surprising — but still limited — accuracy. Using only a few seconds of brain activity data, the AI guesses what a person has heard. It lists the correct answer in its top 10 possibilities up to 73 percent of the time, researchers found in a preliminary study. The AI’s “performance was above what many people thought was possible at this stage,” says Giovanni Di Liberto, a computer scientist at Trinity College Dublin who was not involved in the research. Developed at the parent company of Facebook, Meta, the AI could eventually be used to help thousands of people around the world unable to communicate through speech, typing or gestures, researchers report August 25 at arXiv.org. That includes many patients in minimally conscious, locked-in or “vegetative states” — what’s now generally known as unresponsive wakefulness syndrome (SN: 2/8/19). Most existing technologies to help such patients communicate require risky brain surgeries to implant electrodes. This new approach “could provide a viable path to help patients with communication deficits … without the use of invasive methods,” says neuroscientist Jean-Rémi King, a Meta AI researcher currently at the École Normale Supérieure in Paris. King and his colleagues trained a computational tool to detect words and sentences on 56,000 hours of speech recordings from 53 languages. The tool, also known as a language model, learned how to recognize specific features of language both at a fine-grained level — think letters or syllables — and at a broader level, such as a word or sentence. © Society for Science & the Public 2000–2022.
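The “top 10 possibilities up to 73 percent of the time” figure is a top-k accuracy metric: the decoder scores every candidate speech segment against a snippet of brain activity, and a trial counts as a success if the true segment lands among the k highest-scoring candidates. As a rough, purely illustrative sketch (not Meta’s actual code; the function and scores below are hypothetical), the metric can be computed like this:

```python
import numpy as np

def top_k_accuracy(similarity, true_index, k=10):
    """Return True if the correct candidate is among the k highest-scoring ones.

    similarity: 1-D array of decoder scores, one per candidate segment.
    """
    ranked = np.argsort(similarity)[::-1]  # candidate indices, best score first
    return true_index in ranked[:k]

# Toy scores for four candidate segments; the true segment is index 3.
scores = np.array([0.2, 0.9, 0.4, 0.8])
print(top_k_accuracy(scores, true_index=3, k=2))  # True: index 3 ranks second
```

Averaging this success check over many trials yields the percentage the study reports (up to 73 percent for k = 10).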

Keyword: Language; Robotics
Link ID: 28470 - Posted: 09.10.2022

By Helen Santoro I barreled into the world — a precipitous birth, the doctors called it — at a New York City hospital in the dead of night. In my first few hours of life, after six bouts of halted breathing, the doctors rushed me to the neonatal intensive care unit. A medical intern stuck his pinky into my mouth to test the newborn reflex to suck. I didn’t suck hard enough. So they rolled my pink, 7-pound-11-ounce body into a brain scanner. Lo and behold, there was a huge hole on the left side, just above my ear. I was missing the left temporal lobe, a region of the brain involved in a wide variety of behaviors, from memory to the recognition of emotions, and considered especially crucial for language. My mother, exhausted from the labor, remembers waking up after sunrise to a neurologist, pediatrician and midwife standing at the foot of her bed. They explained that my brain had bled in her uterus, a condition called a perinatal stroke. They told her I would never speak and would need to be institutionalized. The neurologist brought her arms up to her chest and contorted her wrists to illustrate the physical disability I would be likely to develop. In those early days of my life, my parents wrung their hands wondering what my life, and theirs, would look like. Eager to find answers, they enrolled me in a research project at New York University tracking the developmental effects of perinatal strokes. But month after month, I surprised the experts, meeting all of the typical milestones of children my age. I enrolled in regular schools, excelled in sports and academics. The language skills the doctors were most worried about at my birth — speaking, reading and writing — turned out to be my professional passions. My case is highly unusual but not unique. Scientists estimate that thousands of people are, like me, living normal lives despite missing large chunks of our brains. Our myriad networks of neurons have managed to rewire themselves over time. But how? 
© 2022 The New York Times Company

Keyword: Development of the Brain; Language
Link ID: 28466 - Posted: 09.07.2022

By Emily Anthes My cat is a bona fide chatterbox. Momo will meow when she is hungry and when she is full, when she wants to be picked up and when she wants to be put down, when I leave the room or when I enter it, or sometimes for what appears to be no real reason at all. But because she is a cat, she is also uncooperative. So the moment I downloaded MeowTalk Cat Translator, a mobile app that promised to convert Momo’s meows into plain English, she clammed right up. For two days I tried, and failed, to solicit a sound. On Day 3, out of desperation, I decided to pick her up while she was wolfing down her dinner, an interruption guaranteed to elicit a howl of protest. Right on cue, Momo wailed. The app processed the sound, then played an advertisement for Sara Lee, then rendered a translation: “I’m happy!” I was dubious. But MeowTalk provided a more plausible translation about a week later, when I returned from a four-day trip. Upon seeing me, Momo meowed and then purred. “Nice to see you,” the app translated. Then: “Let me rest.” (The ads disappeared after I upgraded to a premium account.) The urge to converse with animals is age-old, long predating the time when smartphones became our best friends. Scientists have taught sign language to great apes, chatted with grey parrots and even tried to teach English to bottlenose dolphins. Pets — with which we share our homes but not a common language — are particularly tempting targets. My TikTok feed brims with videos of Bunny, a sheepadoodle who has learned to press sound buttons that play prerecorded phrases like “outside,” “scritches” and “love you.” MeowTalk is the product of a growing interest in enlisting additional intelligences — machine-learning algorithms — to decode animal communication. The idea is not as far-fetched as it may seem. 
For example, machine-learning systems, which are able to extract patterns from large data sets, can distinguish between the squeaks that rodents make when they are happy and those that they emit when they are in distress. Applying the same advances to our creature companions has obvious appeal. “We’re trying to understand what cats are saying and give them a voice,” Javier Sanchez, a founder of MeowTalk, said. “We want to use this to help people build better and stronger relationships with their cats,” he added. © 2022 The New York Times Company
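The pattern-extraction idea described above can be sketched with a toy classifier. Everything here is hypothetical: the three acoustic features (mean pitch, duration, loudness) and their values are invented stand-ins rather than real recordings, and the method is a simple nearest-centroid rule, not MeowTalk's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical summary features per call: [mean pitch (Hz), duration (s), loudness (dB)].
# Two synthetic classes of vocalizations, standing in for labeled training data.
happy = rng.normal([600.0, 0.4, 55.0], [40.0, 0.05, 3.0], size=(200, 3))
distress = rng.normal([900.0, 0.9, 70.0], [40.0, 0.05, 3.0], size=(200, 3))

# "Training" here is just computing one centroid per labeled class.
centroids = np.stack([happy.mean(axis=0), distress.mean(axis=0)])
labels = ["happy", "distress"]

def classify(call):
    """Assign a new call to the nearest class centroid (Euclidean distance)."""
    return labels[int(np.argmin(np.linalg.norm(centroids - call, axis=1)))]

print(classify(np.array([610.0, 0.42, 54.0])))  # a low-pitched, short call -> "happy"
```

A real system would extract features from raw audio, normalize them, and use a learned model rather than raw Euclidean distance, but the supervised pattern-matching principle is the same.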

Keyword: Animal Communication; Learning & Memory
Link ID: 28458 - Posted: 08.31.2022

By Carl Zimmer One of the most remarkable things about our species is how fast human culture can change. New words can spread from continent to continent, while technologies such as cellphones and drones change the way people live around the world. It turns out that humpback whales have their own long-range, high-speed cultural evolution, and they don’t need the internet or satellites to keep it running. In a study published on Tuesday, scientists found that humpback songs easily spread from one population to another across the Pacific Ocean. It can take just a couple of years for a song to move several thousand miles. Ellen Garland, a marine biologist at the University of St. Andrews in Scotland and an author of the study, said she was shocked to find whales in Australia passing their songs to others in French Polynesia, which in turn gave songs to whales in Ecuador. “Half the globe is now vocally connected for whales,” she said. “And that’s insane.” It’s even possible that the songs travel around the entire Southern Hemisphere. Preliminary studies by other scientists are revealing whales in the Atlantic Ocean picking up songs from whales in the eastern Pacific. Each population of humpback whales spends the winter in the same breeding grounds. The males there sing loud underwater songs that can last up to half an hour. Males in the same breeding ground sing a nearly identical tune. And from one year to the next, the population’s song gradually evolves into a new melody. Dr. Garland and other researchers have uncovered a complex, language-like structure in these songs. The whales combine short sounds, which scientists call units, into phrases. They then combine the phrases into themes. And each song is made of several themes. © 2022 The New York Times Company
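The unit-phrase-theme hierarchy described above maps naturally onto nested lists. The unit names below are invented placeholders for illustration, not actual humpback sound categories.

```python
# Units combine into phrases, phrases into themes, themes into a song.
phrase_a = ["whup", "whup", "moan"]       # phrase: a short sequence of units
phrase_b = ["cry", "moan"]
theme_1 = [phrase_a, phrase_a, phrase_b]  # theme: a repeated phrase pattern
theme_2 = [phrase_b, phrase_b]
song = [theme_1, theme_2]                 # song: several themes in order

# Flatten to the raw unit sequence a hydrophone would record.
unit_sequence = [unit for theme in song for phrase in theme for unit in phrase]
print(len(unit_sequence))  # 12 units in this toy song
```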

Keyword: Animal Communication; Language
Link ID: 28456 - Posted: 08.31.2022

Jason Bruck Bottlenose dolphins’ signature whistles just passed an important test in animal psychology. A new study by my colleagues and me has shown that these animals may use their whistles as namelike concepts. By presenting urine and the sounds of signature whistles to dolphins, my colleagues Vincent Janik, Sam Walmsley and I recently showed that these whistles act as representations of the individuals who own them, similar to human names. For behavioral biologists like us, this is an incredibly exciting result. It is the first time this type of representational naming has been found in any animal other than humans. When you hear your friend’s name, you probably picture their face. Likewise, when you smell a friend’s perfume, that can also elicit an image of the friend. This is because humans build mental pictures of each other using more than just one sense. All of the different information from your senses that is associated with a person converges to form a mental representation of that individual: a name with a face, a smell and many other sensory characteristics. Within the first few months of life, dolphins invent their own specific identity calls, known as signature whistles. Dolphins often announce their location to or greet other individuals in a pod by sending out their own signature whistles. But researchers have not known if, when a dolphin hears the signature whistle of a dolphin they are familiar with, they actively picture the calling individual. My colleagues and I were interested in determining if dolphin calls are representational in the same way human names invoke many thoughts of an individual. Because dolphins cannot smell, they rely principally on signature whistles to identify each other in the ocean. Dolphins can also copy another dolphin’s whistles as a way to address each other.
My previous research showed that dolphins have great memory for each other’s whistles, but scientists argued that a dolphin might hear a whistle, know it sounds familiar, but not remember who the whistle belongs to. My colleagues and I wanted to determine if dolphins could associate signature whistles with the specific owner of that whistle. This would address whether dolphins remember and hold representations of other dolphins in their minds. © 2010–2022, The Conversation US, Inc.

Keyword: Language; Evolution
Link ID: 28441 - Posted: 08.24.2022

A study funded by the National Institutes of Health found that biomarkers present in the blood on the day of a traumatic brain injury (TBI) can accurately predict a patient’s risk of death or severe disability six months later. Measuring these biomarkers may enable a more accurate assessment of patient prognosis following TBI, according to results published today in Lancet Neurology. Researchers with the Transforming Research and Clinical Knowledge in TBI (TRACK-TBI) study examined levels of glial fibrillary acidic protein (GFAP) and ubiquitin carboxy-terminal hydrolase L1 (UCH-L1), proteins found in glial cells and neurons, respectively, in nearly 1,700 patients with TBI. TRACK-TBI is an observational study aimed at improving understanding and diagnosis of TBIs to develop successful treatments. The study team measured the biomarkers in blood samples taken from patients with TBI on the day of their injury and then evaluated their recovery six months later. Participants were recruited from 18 high-level trauma centers across the United States. More than half (57%) had suffered TBI as the result of a road traffic accident. The study showed that GFAP and UCH-L1 levels on the day of injury were strong predictors of death and unfavorable outcomes, such as vegetative state or severe disability requiring daily assistance to function. Those with biomarker levels among the highest fifth were at greatest risk of death in the six months post-TBI, with most deaths occurring within the first month. GFAP and UCH-L1 are currently used to aid in the detection of TBI. Elevated levels in the blood on the day of the TBI are linked to brain injury visible with neuroimaging. In 2018, the U.S. Food and Drug Administration approved use of these biomarkers to help clinicians decide whether to order a head CT scan to examine the brain after mild TBI.

Keyword: Brain Injury/Concussion
Link ID: 28433 - Posted: 08.13.2022