Chapter 19. Language and Lateralization




By R. Douglas Fields One day, while threading a needle to sew a button, I noticed that my tongue was sticking out. The same thing happened later, as I carefully cut out a photograph. Then another day, as I perched precariously on a ladder painting the window frame of my house, there it was again! What’s going on here? I’m not deliberately protruding my tongue when I do these things, so why does it keep making appearances? After all, it’s not as if that versatile lingual muscle has anything to do with controlling my hands. Right? Yet as I would learn, our tongue and hand movements are intimately interrelated at an unconscious level. This peculiar interaction’s deep evolutionary roots even help explain how our brain can function without conscious effort. A common explanation for why we stick out our tongue when we perform precision hand movements is something called motor overflow. In theory, it can take so much cognitive effort to thread a needle (or perform other demanding fine motor skills) that our brain circuits get swamped and impinge on adjacent circuits, activating them inappropriately. It’s certainly true that motor overflow can happen after neural injury or in early childhood when we are learning to control our bodies. But I have too much respect for our brains to buy that “limited brain bandwidth” explanation. How, then, does this peculiar hand-mouth cross-talk really occur? Tracing the neural anatomy of tongue and hand control to pinpoint where a short circuit might happen, we find first of all that the two are controlled by completely different nerves. This makes sense: A person who suffers a spinal cord injury that paralyzes their hands does not lose their ability to speak. That’s because the tongue is controlled by a cranial nerve, but the hands are controlled by spinal nerves. Simons Foundation

Keyword: Language; Emotions
Link ID: 28894 - Posted: 08.30.2023

In a study of 152 deceased athletes less than 30 years old who were exposed to repeated head injury through contact sports, brain examination demonstrated that 63 (41%) had chronic traumatic encephalopathy (CTE), a degenerative brain disorder associated with exposure to head trauma. Neuropsychological symptoms were severe in both those with and without evidence of CTE. Suicide was the most common cause of death in both groups, followed by unintentional overdose. Among the brain donors found to have CTE, 71% had played contact sports at a non-professional level (youth, high school, or college competition). Common sports included American football, ice hockey, soccer, rugby, and wrestling. The study, published in JAMA Neurology, confirms that CTE can occur even in young athletes exposed to repetitive head impacts. The research was supported in part by the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health. Because CTE cannot be definitively diagnosed in individuals while living, it is unknown how commonly CTE occurs in such athletes. As in all brain bank studies, donors differ from the general population and no estimates of prevalence can be concluded from this research. Most of the study donors were white, male football players with cognitive, behavioral, and/or mood symptoms. Their families desired neuropathologic examination after their loved one’s early death and donated to the Understanding Neurologic Injury and Traumatic Encephalopathy (UNITE) Brain Bank. There were no differences in cause of death or clinical symptoms between those with CTE and those without.

Keyword: Brain Injury/Concussion
Link ID: 28889 - Posted: 08.30.2023

By Pam Belluck At Ann Johnson’s wedding reception 20 years ago, her gift for speech was vividly evident. In an ebullient 15-minute toast, she joked that she had run down the aisle, wondered if the ceremony program should have said “flutist” or “flautist” and acknowledged that she was “hogging the mic.” Just two years later, Mrs. Johnson — then a 30-year-old teacher, volleyball coach and mother of an infant — had a cataclysmic stroke that paralyzed her and left her unable to talk. On Wednesday, scientists reported a remarkable advance toward helping her, and other patients, speak again. In a milestone of neuroscience and artificial intelligence, implanted electrodes decoded Mrs. Johnson’s brain signals as she silently tried to say sentences. Technology converted her brain signals into written and vocalized language, and enabled an avatar on a computer screen to speak the words and display smiles, pursed lips and other expressions. The research, published in the journal Nature, demonstrates the first time spoken words and facial expressions have been directly synthesized from brain signals, experts say. Mrs. Johnson chose the avatar, a face resembling hers, and researchers used her wedding toast to develop the avatar’s voice. “We’re just trying to restore who people are,” said the team’s leader, Dr. Edward Chang, the chairman of neurological surgery at the University of California, San Francisco. “It let me feel like I was a whole person again,” Mrs. Johnson, now 48, wrote to me. The goal is to help people who cannot speak because of strokes or conditions like cerebral palsy and amyotrophic lateral sclerosis. To work, Mrs. Johnson’s implant must be connected by cable from her head to a computer, but her team and others are developing wireless versions. Eventually, researchers hope, people who have lost speech may converse in real time through computerized pictures of themselves that convey tone, inflection and emotions like joy and anger. “What’s quite exciting is that just from the surface of the brain, the investigators were able to get out pretty good information about these different features of communication,” said Dr. Parag Patil, a neurosurgeon and biomedical engineer at the University of Michigan, who was asked by Nature to review the study before publication. © 2023 The New York Times Company

Keyword: Stroke; Robotics
Link ID: 28882 - Posted: 08.24.2023

Jon Hamilton If you've ever had trouble finding your keys or remembering what you had for breakfast, you know that short-term memory is far from perfect. For people who've had a traumatic brain injury (TBI), though, recalling recent events or conversations can be a major struggle. "We have patients whose family cannot leave them alone at home because they will turn on the stove and forget to turn it off," says Dr. Ramon Diaz-Arrastia, who directs the Traumatic Brain Injury Clinical Research Center at the University of Pennsylvania. So Diaz-Arrastia and a team of scientists have been testing a potential treatment. It involves delivering a pulse of electricity to the brain at just the right time. And it worked in a study of eight people with moderate or severe TBIs, the team reports in the journal Brain Stimulation. A precisely timed pulse to a brain area just behind the ear improved recall by about 20 percent and reduced the person's memory deficit by about half. If the results pan out in a larger study, the approach might improve the lives of many young people who survive a serious TBI, says Diaz-Arrastia, an author of the study and a professor of neurology at Penn. "In many cases, the reason they're unable to rejoin and fully participate in society is because of their memory problems," he says. "And they often have this disability that goes on for many, many decades." But the treatment is not for the timid. It requires patients to have electrodes surgically implanted in their brain. And scientists are still refining the system that delivers the electrical pulses. More than 1.5 million people in the U.S. sustain a TBI each year. Common causes include falls, motor vehicle accidents, assaults, contact sports, and gunshots. © 2023 npr

Keyword: Brain Injury/Concussion
Link ID: 28870 - Posted: 08.09.2023

By Tanvi Dutta Gupta The Arctic Ocean is a noisy place. Creatures of the deep have learned to live with the cacophony of creaking ice sheets and breaking icebergs, but humanmade sources of noise from ships and oil and gas infrastructure are altering that natural submarine soundscape. Now, a research team has found that even subtle underwater noise pollution can cause narwhals to make shallower dives and cut their hunts short. The research, published today in Science Advances, uncovers “some really great information on a species we know very little about,” says Ari Friedlaender, an ocean ecologist at the University of California, Santa Cruz, not involved in the study. Knowing how the whales react to these noises could help conservationists “act proactively” to protect the animals in their Arctic home where warming waters already threaten their lifestyles. Narwhals—with their long, unicornlike horns extending from their faces—live in one of the most extreme environments in the world, explains Outi Tervo, an ecologist at the Greenland Institute of Natural Resources and the study’s first author. Each narwhal returns in summer to the same small fjord where it was born in order to feed on fish, squid, and shrimp. As humans increasingly encroach on Arctic waters, though, scientists, conservationists, and Inuit communities have worried about how development and ship traffic will affect the whales. Many of Greenland’s Inuit communities rely on the narwhals as a culturally important food source. When Greenland’s government started to auction new permits for offshore oil exploration in 2011, Tervo and colleagues decided to examine whether the noise pollution associated with such development affected narwhals. For instance, boats exploring the sea floor tow instruments called airguns, which blast air a few meters below the vessels to sonically suss out the presence of cavities that may contain oil and gas. Those pulses can be the “loudest sound put in the ocean by humans,” says study co-author Susanna Blackwell, a biologist with Greeneridge Sciences.

Keyword: Animal Communication; Hearing
Link ID: 28858 - Posted: 07.27.2023

By McKenzie Prillaman When speaking to young kids, humans often use squeaky, high-pitched baby talk. It turns out that some dolphins do, too. Bottlenose dolphin moms modify their individually distinctive whistles when their babies are nearby, researchers report June 26 in the Proceedings of the National Academy of Sciences. This "parentese" might enhance attention, bonding and vocal learning in calves, as it seems to do in humans. During the first few months of life, each common bottlenose dolphin (Tursiops truncatus) develops a unique tune, or signature whistle, akin to a name (SN: 7/22/13). The dolphins shout out their own "names" in the water "likely as a way to keep track of each other," says marine biologist Laela Sayigh of the Woods Hole Oceanographic Institution. But dolphin moms seem to tweak that tune in the presence of their calves, which tend to stick by mom's side for three to six years. It's a change that Sayigh first noticed in a 2009 study published by her student. But "it was just one little piece of this much larger study," she says. To follow up on that observation, Sayigh and colleagues analyzed signature whistles from 19 female dolphins both with and without their babies close by. Audio recordings were captured from a wild population that lives near Sarasota Bay, Fla., during catch-and-release health assessments that occurred from 1984 to 2018. The researchers examined 40 instances of each dolphin's signature whistle, verified by the unique way each vocalization's frequencies change over time. Half of each dolphin's whistles were voiced in the presence of her baby. When youngsters were around, the moms' whistles contained, on average, a higher maximum and slightly lower minimum pitch compared with those uttered in the absence of calves, contributing to an overall widened pitch range. © Society for Science & the Public 2000–2023.

Keyword: Language; Animal Communication
Link ID: 28835 - Posted: 06.28.2023

By Ken Belson and Benjamin Mueller When Jeffrey Vlk played running back in high school in the 1990s and then safety in college, he took and delivered countless tackles during full-contact football practices. Hitting was a mainstay, as were injuries, including concussions. When he became a coach at Buffalo Grove High School outside Chicago in 2005, Vlk did what he had been taught: He had his players hit and tackle in practices to “toughen them up.” By the time he became head coach in 2016, though, he saw that many of his players were so banged up from a week of hitting in practice that they missed games or were more susceptible to being injured in those games. So, starting in 2019, Vlk eliminated full-contact practices. Players wore shoulder pads once a week, on Wednesday, which he called contact day. That’s when they hit tackle bags and crash pads, and wrapped up teammates but did not throw them to the ground. Vlk said no starting player had been injured at his practices in four years. “Those types of injuries can stay with you for a long time,” he said, “and knowing that I’m keeping the kids safe, not just in our program, but beyond the program, is reason enough to go this route.” Vlk’s approach to limiting the number of hits players take has been spreading slowly in the football world, where much of the effort has focused on avoiding and treating concussions, which often have observable symptoms and are tracked by sports leagues. But researchers have for years posited that the more hits to the head a player receives — even subconcussive ones, which are usually not tracked — the more likely he is to develop cognitive and neurological problems later in life. A new study published on Tuesday in the scientific journal Nature Communications added a critical wrinkle: A football player’s chances of developing chronic traumatic encephalopathy, or C.T.E., are related to the number of head impacts absorbed, but also to the cumulative impact of all those hits. © 2023 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 28827 - Posted: 06.21.2023

Katharine Sanderson An influential team of researchers has updated the scientific consensus on how concussions in sports should be defined, treated and monitored. But critics say that the statement, which is revised every 4 to 5 years, excludes evidence that links head injuries in sport with long-term brain conditions such as CTE — a high-profile issue in games such as American football and soccer. The consensus statement, compiled by 114 co-authors after the International Conference on Concussion in Sport in Amsterdam last October, summarizes the latest evidence on sports-related concussions to help clinicians manage the trauma. The latest version introduces details including a description of brain-chemistry events that happen after a concussion. It was published in the British Journal of Sports Medicine on 14 June. But some researchers have criticized the authors' work. "Their refusal to acknowledge a causal relationship between contact-sports participation and CTE [chronic traumatic encephalopathy] is a danger to the public," says Chris Nowinski, a neuroscientist and chief executive of the Concussion Legacy Foundation in Middletown, Delaware, which supports athletes and veterans affected by concussions and CTE. Many studies have linked repeated sports-related head injuries with CTE — a degenerative brain disease that can develop into dementia. But the consensus authors say that these studies use data from brain banks — where former athletes donate their tissue to be studied after death — that they say is not rigorous enough to be included in their review. "The CTE literature is almost exclusively case series studies," says clinician Bob Cantu, a co-author of the consensus report at the Boston University School of Medicine in Massachusetts. "And that literature did not meet the inclusion criteria for the systematic review." © 2023 Springer Nature Limited

Keyword: Brain Injury/Concussion
Link ID: 28823 - Posted: 06.17.2023

By Marlowe Starling When a bird sings, you may think you’re hearing music. But are the melodies it’s making really music? Or is what we’re hearing merely a string of lilting calls that appeals to the human ear? Birdsong has inspired musicians from Bob Marley to Mozart and perhaps as far back as the first hunter-gatherers who banged out a beat. And a growing body of research is showing that the affinity human musicians feel toward birdsong has a strong scientific basis. Scientists are understanding more about avian species’ ability to learn, interpret and produce songs much like our own. Just like humans, birds learn songs from each other and practice to perfect them. And just as human speech is distinct from human music, bird calls, which serve as warnings and other forms of direct communication, differ from birdsong. While researchers are still debating the functions of birdsong, studies show that it is structurally similar to our own tunes. So, are birds making music? That depends on what you mean. “I’m not sure we can or want to define music,” said Ofer Tchernichovski, a zoologist and psychologist at the City University of New York who studies birdsong. Where you draw the line between music and mere noise is arbitrary, said Emily Doolittle, a zoomusicologist and composer at the Royal Conservatoire of Scotland. The difference between a human baby’s babbling versus a toddler’s humming might seem more distinct than that of a hatchling’s cry for food and a maturing bird’s practicing of a melody, she added. Wherever we draw the line, birdsong and human song share striking similarities. How birds build songs Existing research points to one main conclusion: Birdsong is structured like human music. Songbirds change their tempo (speed), pitch (how high or low they sing) and timbre (tone) to sing tunes that resemble our own melodies. © 2023 The New York Times Company

Keyword: Animal Communication; Language
Link ID: 28817 - Posted: 06.07.2023

By Matteo Wong If you are willing to lie very still in a giant metal tube for 16 hours and let magnets blast your brain as you listen, rapt, to hit podcasts, a computer just might be able to read your mind. Or at least its crude contours. Researchers from the University of Texas at Austin recently trained an AI model to decipher the gist of a limited range of sentences as individuals listened to them—gesturing toward a near future in which artificial intelligence might give us a deeper understanding of the human mind. The program analyzed fMRI scans of people listening to, or even just recalling, sentences from three shows: Modern Love, The Moth Radio Hour, and The Anthropocene Reviewed. Then, it used that brain-imaging data to reconstruct the content of those sentences. For example, when one subject heard “I don’t have my driver’s license yet,” the program deciphered the person’s brain scans and returned “She has not even started to learn to drive yet”—not a word-for-word re-creation, but a close approximation of the idea expressed in the original sentence. The program was also able to look at fMRI data of people watching short films and write approximate summaries of the clips, suggesting the AI was capturing not individual words from the brain scans, but underlying meanings. The findings, published in Nature Neuroscience earlier this month, add to a new field of research that flips the conventional understanding of AI on its head. For decades, researchers have applied concepts from the human brain to the development of intelligent machines. ChatGPT, hyperrealistic-image generators such as Midjourney, and recent voice-cloning programs are built on layers of synthetic “neurons”: a bunch of equations that, somewhat like nerve cells, send outputs to one another to achieve a desired result. Yet even as human cognition has long inspired the design of “intelligent” computer programs, much about the inner workings of our brains has remained a mystery. Now, in a reversal of that approach, scientists are hoping to learn more about the mind by using synthetic neural networks to study our biological ones. It’s “unquestionably leading to advances that we just couldn’t imagine a few years ago,” says Evelina Fedorenko, a cognitive scientist at MIT. Copyright (c) 2023 by The Atlantic Monthly Group.

Keyword: Brain imaging; Language
Link ID: 28802 - Posted: 05.27.2023

By Marla Broadfoot In Alexandre Dumas’s classic novel The Count of Monte-Cristo, a character named Monsieur Noirtier de Villefort suffers a terrible stroke that leaves him paralyzed. Though he remains awake and aware, he is no longer able to move or speak, relying on his granddaughter Valentine to recite the alphabet and flip through a dictionary to find the letters and words he requires. With this rudimentary form of communication, the determined old man manages to save Valentine from being poisoned by her stepmother and thwart his son’s attempts to marry her off against her will. Dumas’s portrayal of this catastrophic condition — where, as he puts it, “the soul is trapped in a body that no longer obeys its commands” — is one of the earliest descriptions of locked-in syndrome. This form of profound paralysis occurs when the brain stem is damaged, usually because of a stroke but also as the result of tumors, traumatic brain injury, snakebite, substance abuse, infection or neurodegenerative diseases like amyotrophic lateral sclerosis (ALS). The condition is thought to be rare, though just how rare is hard to say. Many locked-in patients can communicate through purposeful eye movements and blinking, but others can become completely immobile, losing their ability even to move their eyeballs or eyelids, rendering the command “blink twice if you understand me” moot. As a result, patients can spend an average of 79 days imprisoned in a motionless body, conscious but unable to communicate, before they are properly diagnosed. The advent of brain-machine interfaces has fostered hopes of restoring communication to people in this locked-in state, enabling them to reconnect with the outside world. These technologies typically use an implanted device to record the brain waves associated with speech and then use computer algorithms to translate the intended messages. The most exciting advances require no blinking, eye tracking or attempted vocalizations, but instead capture and convey the letters or words a person says silently in their head. © 2023 Annual Reviews

Keyword: Brain imaging; Language
Link ID: 28791 - Posted: 05.21.2023

By Jaya Padmanabhan Speaking two languages provides the enviable ability to make friends in unusual places. A new study suggests that bilingualism may also come with another benefit: improved memory in later life. Studying hundreds of older patients, researchers in Germany found that those who reported using two languages daily from a young age scored higher on tests of learning, memory, language and self-control than patients who spoke only one language. The findings, published in the April issue of the journal Neurobiology of Aging, add to two decades of work suggesting that bilingualism protects against dementia and cognitive decline in older people. “It’s promising that they report that early and middle-life bilingualism has a beneficial effect on cognitive health in later life,” said Miguel Arce Rentería, a neuropsychologist at Columbia University who was not involved in the study. “This would line up with the existing literature.” In recent years, scientists have gained a greater understanding of bilingualism and the aging brain, though not all their findings have aligned. Some have found that if people who have fluency in two languages develop dementia, they’ll develop it at a later age than people who speak one language. But other research has shown no clear benefit from bilingualism. Neuroscientists hypothesize that because bilingual people switch fluidly between two languages, they may be able to deploy similar strategies in other skills — such as multitasking, managing emotions and self-control — that help delay dementia later on. The new study tested 746 people age 59 to 76. Roughly 40 percent of the volunteers had no memory problems, while the others were patients at memory clinics and had experienced confusion or memory loss. © 2023 The New York Times Company

Keyword: Alzheimers; Language
Link ID: 28761 - Posted: 04.29.2023

By R. Douglas Fields Dazzling intricacies of brain structure are revealed every day, but one of the most obvious aspects of brain wiring eludes neuroscientists. The nervous system is cross-wired, so that the left side of the brain controls the right half of the body and vice versa. Every doctor relies upon this fact in performing neurological exams, but when I asked my doctor last week why this should be, all I got was a shrug. So I asked Catherine Carr, a neuroscientist at the University of Maryland, College Park. “No good answer,” she replied. I was surprised — such a fundamental aspect of how our brain and body are wired together, and no one knew why? Nothing that we know of stops the right side of the brain from connecting with the right side of the body. That wiring scheme would seem much simpler and less prone to errors. In the embryonic brain, the crossing of the wires across the midline — an imaginary line dividing the right and left halves of the body — requires a kind of molecular “traffic cop” to somehow direct the growing nerve fibers to the right spot on the opposite side of the body. Far simpler just to keep things on the same side. Yet this neural cross wiring is ubiquitous in the animal kingdom — even the neural connections in lowly nematode worms are wired with left-right reversal across the animal’s midline. And many of the traffic cop molecules that direct the growth of neurons in these worms do the same in humans. For evolution to have conserved this arrangement so doggedly, surely there’s some benefit to it, but biologists still aren’t certain what it is. An intriguing answer, however, has come from the world of mathematics. The key to that solution lies in exactly how neural circuits are laid out within brain tissue. Neurons that make connections between the brain and the body are organized to create a virtual map in the cerebral cortex. If a neuroscientist sticks an electrode into the brain and finds that neurons there receive input from the thumb, for example, then neurons next to it in the cerebral cortex will connect to the index finger. This mapping phenomenon is called somatotopy, Greek for “body mapping,” but it’s not limited to the physical body. The 3D external world we perceive through vision and our other senses is mapped onto the brain in the same way. All Rights Reserved © 2023

Keyword: Laterality; Development of the Brain
Link ID: 28749 - Posted: 04.22.2023

Suzana Herculano-Houzel Neuroscientists have long assumed that neurons are greedy, hungry units that demand more energy when they become more active, and the circulatory system complies by providing as much blood as they require to fuel their activity. Indeed, as neuronal activity increases in response to a task, blood flow to that part of the brain increases even more than its rate of energy use, leading to a surplus. This increase is the basis of common functional imaging technology that generates colored maps of brain activity. Scientists used to interpret this apparent mismatch in blood flow and energy demand as evidence that there is no shortage of blood supply to the brain. The idea of a nonlimited supply was based on the observation that only about 40% of the oxygen delivered to each part of the brain is used – and this percentage actually drops as parts of the brain become more active. It seemed to make evolutionary sense: The brain would have evolved this faster-than-needed increase in blood flow as a safety feature that guarantees sufficient oxygen delivery at all times. (Image: Functional magnetic resonance imaging is one of several ways to measure the brain.) But does blood distribution in the brain actually support a demand-based system? As a neuroscientist myself, I had previously examined a number of other assumptions about the most basic facts about brains and found that they didn't pan out. To name a few: Human brains don't have 100 billion neurons, though they do have the most cortical neurons of any species; the degree of folding of the cerebral cortex does not indicate how many neurons are present; and it's not larger animals that live longer, but those with more neurons in their cortex. I believe that figuring out what determines blood supply to the brain is essential to understanding how brains work in health and disease. It's like how cities need to figure out whether the current electrical grid will be enough to support a future population increase. Brains, like cities, only work if they have enough energy supplied. © 2010–2023, The Conversation US, Inc.

Keyword: Stroke; Brain imaging
Link ID: 28726 - Posted: 04.01.2023

By Mark Johnson Archaeologists excavating the ancient city of Megiddo in modern-day Israel have discovered a window into medicine’s ancient past: the 3,500-year-old bones of two brothers, both bearing signs of an infectious disease, and one scarred from cranial surgery that may have been an attempt to treat the illness. A recent paper in the journal PLOS ONE describes the discovery, which is one of the region’s earliest examples of a widely practiced type of surgery that creates an opening in the skull. The work should help scientists and anthropologists understand how surgeries developed and became more effective over time. The procedure, known as cranial trephination, was performed thousands of years ago in different parts of the world, including Europe, Africa, China and South America. A 2020 paper listed trephination as one of “the first three procedures that marked the dawn of surgery,” along with circumcision and bladder stone removal. Versions of the procedure, called either a craniotomy or craniectomy, are still practiced today “as emergency treatment for brain swelling, bleeding, as well as for surgeries to treat epilepsy and to remove some tumors,” said John Verano, a professor of anthropology at Tulane University, who described the new paper as an interesting case report. Although the electric drills used today are a far cry from the handheld flints and metal tools used thousands of years ago, the objective — making a hole in the skull — is the same. However, Verano stressed that trephination was not brain surgery. “They were careful not to cut through the membrane protecting the brain, which would lead to meningitis and death if not done under strictly sterile conditions,” he said. Archaeologists and anthropologists cannot be certain what conditions ancient healers were treating by cutting into the skull, but most speculation centers on serious head injuries. Other possibilities include epilepsy, mental illness and swelling in the brain.

Keyword: Brain Injury/Concussion
Link ID: 28702 - Posted: 03.15.2023

By Susan Milius In a castaway test setup, groups of young honeybees figuring out how to forage on their own start waggle dancing spontaneously — but badly. Waggling matters. A honeybee’s rump-shimmy runs and turning loops encode clues that help her colony mates fly to food she has found, sometimes kilometers away. However, five colonies in the new test had no older sisters or half-sisters around as role models for getting the dance moves right. Still, dances improved in some ways as the youngsters wiggled and looped day after day, reports behavioral ecologist James Nieh of the University of California, San Diego. But when waggling the clues for distance information, Apis mellifera without role models never did match the timing and coding in normal colonies where young bees practiced with older foragers before doing the main waggle themselves. The youngsters-only colonies thus show that social learning, or the lack of it, matters for communicating by dance among honeybees, Nieh and an international team of colleagues say in the March 10 Science. Bee waggle dancing, a sort of language, turns out to be both innate and learned, like songbird or human communication. The dance may appear simple in a diagram, but executing it on expanses of honeycomb cells gets challenging. Bees are “running forward at over one body length per second in the pitch black trying to keep the correct angle, surrounded by hundreds of bees that are crowding them,” Nieh says. Beekeepers and biologists know that some kinds of bees can learn from others of their kind — some bumblebees even tried soccer (SN: 2/23/17). But when it comes to waggle dancing, “I think people have assumed it’s genetic,” Nieh says. That would make this fancy footwork more like the chatty but innate communications of cuttlefish color change, for instance. The lab bee-castaway experiments instead show a nonhuman example of “social learning for sophisticated communication,” Nieh says. © Society for Science & the Public 2000–2023.

Keyword: Animal Communication; Evolution
Link ID: 28695 - Posted: 03.11.2023

By Eva Holland Kris Walterson doesn’t remember exactly how he got to the bathroom, very early on a Friday morning — only that once he got himself there, his feet would no longer obey him. He crouched down and tried to lift them up with his hands before sliding to the floor. He didn’t feel panicked about the problem, or even nervous really. But when he tried to get up, he kept falling down again: slamming his back against the bathtub, making a racket of cabinet doors. It didn’t make sense to him then, why his legs wouldn’t lock into place underneath him. He had a pair of fuzzy socks on, and he tried pulling them off, thinking that bare feet might get better traction on the bathroom floor. That didn’t work, either. When his mother came from her bedroom to investigate the noise, he tried to tell her that he couldn’t stand, that he needed her help. But he couldn’t seem to make her understand, and instead of hauling him up she called 911. After he was loaded into an ambulance at his home in Calgary, Alberta, a paramedic warned him that he would soon hear the sirens, and he did. The sound is one of the last things he remembers from that morning. Walterson, who was 60, was experiencing a severe ischemic stroke — the type of stroke caused by a blockage, usually a blood clot, in a blood vessel of the brain. The ischemic variety represents roughly 85 percent of all strokes. The other type, hemorrhagic stroke, is a yin to the ischemic yang: While a blockage prevents blood flow to portions of the brain, starving it of oxygen, a hemorrhage means blood is unleashed, flowing when and where it shouldn’t. In both cases, too much blood or too little, a result is the rapid death of the affected brain cells. When Walterson arrived at Foothills Medical Center, a large hospital in Calgary, he was rushed to the imaging department, where CT scans confirmed the existence and location of the clot. It was an M1 occlusion, meaning a blockage in the first and largest branch of his middle cerebral artery. © 2023 The New York Times Company

Keyword: Stroke
Link ID: 28688 - Posted: 03.04.2023

By Sam Jones Dolphins, pilot whales and sperm whales use echolocation clicks to hunt and subdue their prey. But the animals, known as toothed whales, also produce other sounds for social communication, like grunts and high-pitched whistles. For decades, scientists speculated that something in the nasal cavity was responsible for this range of sounds, but the mechanics were unclear. Now, researchers have uncovered how structures in the nose, called phonic lips, allow toothed whales to produce sounds at different registers, similar to the way the human voice functions, all while conserving air deep beneath the ocean's surface. And the animals use the vocal fry register for echolocation. Yes, that vocal fryyyy. The work was published in the journal Science on Thursday. (Audio: a sequence of vocal registers from a bottlenose dolphin: echolocation clicks made with vocal fry; bursts of standard vocalization; and whistles.) Studying the structures responsible for whale sound production has been no small task. Over the last few decades, "there was a lot of circumstantial evidence — people filming things with X-rays or triangulating sound with different hydrophones," said Coen P.H. Elemans, a biologist at the University of Southern Denmark. Taking a new approach, Dr. Elemans and colleagues inserted endoscopes into the nasal cavities of trained Atlantic bottlenose dolphins and harbor porpoises to get high-speed footage during sound production. They found that sound was indeed being produced in the nose. But to confirm that the phonic lips were involved — and to see if their movement was driven by muscles or by airflow — they created an experimental setup with deceased (beached or bycatch) harbor porpoises, filming the phonic lips as air was pushed through the nasal complex. They saw that the phonic lips would briefly separate and then collide back together, causing a tissue vibration that would release sound into the surrounding water. But relying on air-driven sound production would not seem to be the best idea if your food is in the murky deep. "One thousand meters down, you have 1 percent of the air you had at the surface," said Peter Madsen, a zoophysiologist at Aarhus University in Denmark, who has been tagging toothed whales for decades and is a co-author of the study. "To me, it's always been super provocative to see a sperm whale or beaked whale or pilot whale dive deep, clicking happily, while having the knowledge in the back of my head that they're supposed to use air for this." © 2023 The New York Times Company

Keyword: Animal Communication; Evolution
Link ID: 28686 - Posted: 03.04.2023

By Darren Incorvaia Sitting in an exam room, surrounded by doctors and scientists, Heather Rendulic opened her left hand for the first time since suffering a series of strokes nine years earlier when she was in her early 20s. “It was an amazing feeling for me to be able to do that again,” Rendulic says. “It’s not something I ever thought was possible.” But immediately after a surgically implanted device sent electrical pulses into her spinal cord, Rendulic could not only open her hand but also showed other marked improvements in arm mobility, researchers report February 20 in Nature Medicine. “We all started crying,” Marco Capogrosso, a neuroscientist at the University of Pittsburgh, said in a February 15 news conference. “We didn’t really expect this could work as fast as that.” The approach is similar to that recently used for patients paralyzed by spinal cord injuries (SN: 08/03/22). It represents a promising new technique for restoring voluntary movement to those left with upper-body paralysis following strokes, the team says. A stroke occurs when blood supply to parts of the brain is cut off, often causing short-term or long-term issues with movement, speech and vision. Stroke is a leading, and often underappreciated, cause of paralysis; in the United States alone, 5 million people are living with some form of motor deficit due to stroke. While physical therapy can provide some improvements, no treatment exists to help these patients regain full control of their limbs — and their lives. Strokes cause paralysis because the connection between the brain and the spinal cord is damaged; the brain tries to tell the spinal cord to move certain muscles, but the message is muddled. © Society for Science & the Public 2000–2023.

Keyword: Stroke; Robotics
Link ID: 28678 - Posted: 02.22.2023

By Dani Blum The family of Bruce Willis announced that the actor has frontotemporal dementia, known as FTD, a form of dementia that occurs most commonly when nerve cells in the frontal and temporal lobes of the brain decrease in number. Mr. Willis, 67, was previously diagnosed with aphasia, which prompted him to retire from acting. “FTD is a cruel disease that many of us have never heard of and can strike anyone,” the family wrote in a statement. There are two main variants of FTD: primary progressive aphasia, which hampers a patient’s ability to communicate, and behavioral variant frontotemporal dementia, which manifests as personality and behavioral changes. “It hits the parts of the brain that make us the most human,” said Dr. Bruce Miller, a professor of neurology at the University of California, San Francisco. FTD is the most common cause of dementia for people under the age of 60, said Susan Dickinson, the chief executive of the Association for Frontotemporal Degeneration. There are roughly 50,000 people in the United States with a diagnosis of FTD, she added, although many experts consider that number to be a vast undercount, because of how challenging it can be to diagnose. There is no blood test or single biomarker to diagnose the condition — doctors instead identify it based on symptoms and neuroimaging. On average, it takes patients more than three years to get an accurate diagnosis, Ms. Dickinson said. People with primary progressive aphasia may struggle to speak in full sentences or have difficulty comprehending conversations. They may have a hard time writing or reading. Those with the behavioral variant of FTD may act out of character, said Dr. Ian Grant, an assistant professor of neurology at the Northwestern University Feinberg School of Medicine. Families will say that patients “seem like they’ve lost a little bit of their filter,” he said. Someone who is typically quiet and reserved may start spewing profanities, for example, or loudly comment on a stranger’s appearance. The person may act apathetic, Dr. Miller said, losing motivation. Some may also display a lack of empathy for those around them. © 2023 The New York Times Company

Keyword: Alzheimers; Language
Link ID: 28675 - Posted: 02.18.2023