Chapter 15. Language and Lateralization


By Emily Anthes My cat is a bona fide chatterbox. Momo will meow when she is hungry and when she is full, when she wants to be picked up and when she wants to be put down, when I leave the room or when I enter it, or sometimes for what appears to be no real reason at all. But because she is a cat, she is also uncooperative. So the moment I downloaded MeowTalk Cat Translator, a mobile app that promised to convert Momo’s meows into plain English, she clammed right up. For two days I tried, and failed, to solicit a sound. On Day 3, out of desperation, I decided to pick her up while she was wolfing down her dinner, an interruption guaranteed to elicit a howl of protest. Right on cue, Momo wailed. The app processed the sound, then played an advertisement for Sara Lee, then rendered a translation: “I’m happy!” I was dubious. But MeowTalk provided a more plausible translation about a week later, when I returned from a four-day trip. Upon seeing me, Momo meowed and then purred. “Nice to see you,” the app translated. Then: “Let me rest.” (The ads disappeared after I upgraded to a premium account.) The urge to converse with animals is age-old, long predating the time when smartphones became our best friends. Scientists have taught sign language to great apes, chatted with grey parrots and even tried to teach English to bottlenose dolphins. Pets — with which we share our homes but not a common language — are particularly tempting targets. My TikTok feed brims with videos of Bunny, a sheepadoodle who has learned to press sound buttons that play prerecorded phrases like “outside,” “scritches” and “love you.” MeowTalk is the product of a growing interest in enlisting additional intelligences — machine-learning algorithms — to decode animal communication. The idea is not as far-fetched as it may seem. For example, machine-learning systems, which are able to extract patterns from large data sets, can distinguish between the squeaks that rodents make when they are happy and those that they emit when they are in distress. Applying the same advances to our creature companions has obvious appeal. “We’re trying to understand what cats are saying and give them a voice,” Javier Sanchez, a founder of MeowTalk, said. “We want to use this to help people build better and stronger relationships with their cats,” he added. © 2022 The New York Times Company
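
The article doesn't say how MeowTalk's model works under the hood, but the general recipe it alludes to (extract acoustic features from a recording, then classify them against labeled examples) is standard. A minimal sketch in Python, assuming the librosa and scikit-learn libraries; the intent labels, folder layout and file names are hypothetical, not MeowTalk's:

    # Sketch of a meow-intent classifier. Labels like "hungry" are
    # hypothetical; a real system would need far more labeled data.
    import glob
    import librosa
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def meow_features(path):
        """Summarize a clip as its mean MFCC vector, a common audio feature."""
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)

    # Hypothetical training set: clips sorted into folders by intent label.
    feats, labels = [], []
    for label in ["hungry", "happy", "attention"]:
        for path in glob.glob(f"meows/{label}/*.wav"):
            feats.append(meow_features(path))
            labels.append(label)

    clf = RandomForestClassifier(n_estimators=200).fit(np.array(feats), labels)
    print(clf.predict([meow_features("new_meow.wav")]))  # e.g. ['happy']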

Keyword: Animal Communication; Learning & Memory
Link ID: 28458 - Posted: 08.31.2022

By Carl Zimmer One of the most remarkable things about our species is how fast human culture can change. New words can spread from continent to continent, while technologies such as cellphones and drones change the way people live around the world. It turns out that humpback whales have their own long-range, high-speed cultural evolution, and they don’t need the internet or satellites to keep it running. In a study published on Tuesday, scientists found that humpback songs easily spread from one population to another across the Pacific Ocean. It can take just a couple of years for a song to move several thousand miles. Ellen Garland, a marine biologist at the University of St. Andrews in Scotland and an author of the study, said she was shocked to find whales in Australia passing their songs to others in French Polynesia, which in turn gave songs to whales in Ecuador. “Half the globe is now vocally connected for whales,” she said. “And that’s insane.” It’s even possible that the songs travel around the entire Southern Hemisphere. Preliminary studies by other scientists are revealing whales in the Atlantic Ocean picking up songs from whales in the eastern Pacific. Each population of humpback whales spends the winter in the same breeding grounds. The males there sing loud underwater songs that can last up to half an hour. Males in the same breeding ground sing a nearly identical tune. And from one year to the next, the population’s song gradually evolves into a new melody. Dr. Garland and other researchers have uncovered a complex, language-like structure in these songs. The whales combine short sounds, which scientists call units, into phrases. They then combine the phrases into themes. And each song is made of several themes. © 2022 The New York Times Company
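
The song structure described here, with short units combined into phrases, phrases into themes, and themes into a song, is a strict hierarchy, and it can help to see it written down as one. A toy sketch in Python; the type names and example unit labels are invented for illustration, not taken from the study:

    # The hierarchy the researchers describe: unit -> phrase -> theme -> song.
    # Unit names ("moan", "cry") are illustrative, not from the study.
    from dataclasses import dataclass

    @dataclass
    class Phrase:
        units: list[str]       # short sounds, e.g. ["moan", "moan", "cry"]

    @dataclass
    class Theme:
        phrases: list[Phrase]  # a phrase repeated or varied several times

    @dataclass
    class Song:
        themes: list[Theme]    # a song cycles through several themes

    song = Song(themes=[
        Theme(phrases=[Phrase(units=["moan", "moan", "cry"])] * 3),
        Theme(phrases=[Phrase(units=["whoop", "grunt"])] * 2),
    ])
    print(len(song.themes))  # 2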

Keyword: Animal Communication; Language
Link ID: 28456 - Posted: 08.31.2022

Jason Bruck Bottlenose dolphins’ signature whistles just passed an important test in animal psychology. A new study by my colleagues and me has shown that these animals may use their whistles as namelike concepts. By presenting urine and the sounds of signature whistles to dolphins, my colleagues Vincent Janik, Sam Walmsley and I recently showed that these whistles act as representations of the individuals who own them, similar to human names. For behavioral biologists like us, this is an incredibly exciting result. It is the first time this type of representational naming has been found in any animal other than humans. When you hear your friend’s name, you probably picture their face. Likewise, when you smell a friend’s perfume, that can also elicit an image of the friend. This is because humans build mental pictures of each other using more than just one sense. All of the different information from your senses that is associated with a person converges to form a mental representation of that individual: a name with a face, a smell and many other sensory characteristics. Within the first few months of life, dolphins invent their own specific identity calls – called signature whistles. Dolphins often announce their location to or greet other individuals in a pod by sending out their own signature whistles. But researchers have not known if, when a dolphin hears the signature whistle of a dolphin they are familiar with, they actively picture the calling individual. My colleagues and I were interested in determining if dolphin calls are representational in the same way human names invoke many thoughts of an individual. Because dolphins cannot smell, they rely principally on signature whistles to identify each other in the ocean. Dolphins can also copy another dolphin’s whistles as a way to address each other. My previous research showed that dolphins have great memory for each other’s whistles, but scientists argued that a dolphin might hear a whistle, know it sounds familiar, but not remember who the whistle belongs to. My colleagues and I wanted to determine if dolphins could associate signature whistles with the specific owner of that whistle. This would address whether or not dolphins remember and hold representations of other dolphins in their minds. © 2010–2022, The Conversation US, Inc.

Keyword: Language; Evolution
Link ID: 28441 - Posted: 08.24.2022

A study funded by the National Institutes of Health found that biomarkers present in the blood on the day of a traumatic brain injury (TBI) can accurately predict a patient’s risk of death or severe disability six months later. Measuring these biomarkers may enable a more accurate assessment of patient prognosis following TBI, according to results published today in Lancet Neurology. Researchers with the Transforming Research and Clinical Knowledge in TBI (TRACK-TBI) study examined levels of glial fibrillary acidic protein (GFAP) and ubiquitin carboxy-terminal hydrolase L1 (UCH-L1)—proteins found in glial cells and neurons, respectively—in nearly 1,700 patients with TBI. TRACK-TBI is an observational study aimed at improving understanding and diagnosis of TBIs to develop successful treatments. The study team measured the biomarkers in blood samples taken from patients with TBI on the day of their injury and then evaluated their recovery six months later. Participants were recruited from 18 high-level trauma centers across the United States. More than half (57%) had suffered TBI as the result of a road traffic accident. The study showed that GFAP and UCH-L1 levels on the day of injury were strong predictors of death and unfavorable outcomes, such as vegetative state or severe disability requiring daily assistance to function. Those with biomarker levels among the highest fifth were at greatest risk of death in the six months post-TBI, with most deaths occurring within the first month. GFAP and UCH-L1 are currently used to aid in the detection of TBI. Elevated levels in the blood on the day of the TBI are linked to brain injury visible with neuroimaging. In 2018, the U.S. Food and Drug Administration approved use of these biomarkers to help clinicians decide whether to order a head CT scan to examine the brain after mild TBI.
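
The risk finding, that patients in the highest fifth of day-of-injury biomarker levels fared worst, is a standard quintile stratification. A sketch of that step in Python with pandas; the column names and values are fabricated for illustration, not TRACK-TBI data:

    # Quintile-based risk stratification, as in the reported analysis.
    # Column names and values are hypothetical, not TRACK-TBI data.
    import pandas as pd

    df = pd.DataFrame({
        "gfap_day0": [12.1, 340.0, 55.2, 980.5, 210.3, 1500.0, 88.8, 430.1],
        "died_6mo":  [0,    0,     0,    1,     0,     1,      0,    0],
    })

    # Split patients into fifths by day-of-injury GFAP level.
    df["gfap_quintile"] = pd.qcut(df["gfap_day0"], 5, labels=False) + 1

    # Compare six-month mortality across quintiles; the study found the
    # highest fifth carried the greatest risk.
    print(df.groupby("gfap_quintile")["died_6mo"].mean())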

Keyword: Brain Injury/Concussion
Link ID: 28433 - Posted: 08.13.2022

By Erin Blakemore There’s growing consensus on the danger of sport-related concussion — and how to treat athletes after head injuries. But the research at the heart of those recommendations has a fatal flaw, a new study suggests: It relies almost exclusively on male athletes. In a review in the British Journal of Sports Medicine, a national team of medical and concussion experts looked at 171 concussion studies cited by the three most influential consensus and position statements on sport-related concussion. These documents update professionals on how to treat athletes with concussions, providing important protocols for clinicians and setting the agenda for future research. Although the statements define the standard of care, the study suggests, they are based on data that largely excludes female athletes. Participants in the underlying studies were 80.1 percent male. Among the studies, 40.3 percent didn’t look at female athletes at all; only 25 percent of them had roughly equal male and female participation. Researchers said there could be several reasons for the disparity, such as women’s historic exclusion from sports and the existence of professional sports organizations with no female counterpart. Women’s sports are underrepresented among groups that sponsor concussion research, they write. Bias in the sciences could have an effect, too: women are still underrepresented in both university faculties and scientific research. Because of the research gap, it isn’t yet clear whether females respond to concussions differently than males. Both sex and gender can cause medical conditions to develop — and be experienced, reported and treated — differently.

Keyword: Brain Injury/Concussion; Sexual Behavior
Link ID: 28432 - Posted: 08.13.2022

By Oliver Whang Read this sentence aloud, if you’re able. As you do, a cascade of motion begins, forcing air from your lungs through two muscles, which vibrate, sculpting sound waves that pass through your mouth and into the world. These muscles are called vocal cords, or vocal folds, and their vibrations form the foundations of the human voice. They also speak to the emergence and evolution of human language. For several years, a team of scientists based mainly in Japan used imaging technology to study the physiology of the throats of 43 species of primates, from baboons and orangutans to macaques and chimpanzees, as well as humans. All the species but one had a similar anatomical structure: an extra set of protruding muscles, called vocal membranes or vocal lips, just above the vocal cords. The exception was Homo sapiens. The researchers also found that the presence of vocal lips destabilized the other primates’ voices, rendering their tone and timbre more chaotic and unpredictable. Animals with vocal lips have a more grating, less controlled baseline of communication, the study found; humans, lacking the extra membranes, can exchange softer, more stable sounds. The findings were published on Thursday in the journal Science. “It’s an interesting little nuance, this change to the human condition,” said Drew Rendall, a biologist at the University of New Brunswick who was not involved in the research. “The addition, if you want to think of it this way, is actually a subtraction.” That many primates have vocal lips has long been known, but their role in communication has not been entirely clear. In 1984, Sugio Hayama, a biologist at Kyoto University, videotaped the inside of a chimpanzee’s throat to study its reflexes under anesthesia. The video also happened to capture a moment when the chimp woke and began hollering, softly at first, then with more power. Decades later, Takeshi Nishimura, a former student of Dr. Hayama and now a biologist at Kyoto University and the principal investigator of the recent research, studied the footage with renewed interest. He found that the chimp’s vocal lips and vocal cords were vibrating together, which added a layer of mechanical complexity to the chimp’s voice that made it difficult to fine-tune. © 2022 The New York Times Company

Keyword: Language; Evolution
Link ID: 28426 - Posted: 08.11.2022

Anthony Hannan In a recent interview, Game of Thrones star Emilia Clarke spoke about being able to live “completely normally” after two aneurysms – one in 2011 and one in 2013 – that caused brain injury. She went on to have two brain surgeries. An aneurysm is a bulge or ballooning in the wall of a blood vessel, often accompanied by severe headache or pain. So how can people survive and thrive despite having, as Clarke put it, “quite a bit missing” from their brain? The key to understanding how brains can recover from trauma is that they are fantastically plastic – meaning our body’s supercomputer can reshape and remodel itself. Brains can adapt and change in incredible ways. Yours is doing it right now as you form new memories. It’s not that the brain has evolved to deal with brain trauma or stroke or aneurysms; our ancestors normally died when that happened and may not have gone on to reproduce. In fact, we evolved very thick skulls to try to prevent brain trauma happening at all. No, this neural plasticity is a result of our brains evolving to be learning machines. They allow us to adapt to changing environments, to facilitate learning, memory and flexibility. This functionality also means the brain can adapt after certain injuries, finding new pathways to function. A lot of organs wouldn’t recover at all after serious damage. But the brain keeps developing through life. At a microscopic level, you’re changing the brain to make new memories every day.

Keyword: Development of the Brain; Regeneration
Link ID: 28404 - Posted: 07.22.2022

By Sam Jones Watching a woodpecker repeatedly smash its face into a tree, it’s hard not to wonder how its brain stays intact. For years, the prevailing theory has been that structures in and around a woodpecker’s skull absorb the shocks created during pecking. “Blogs and information panels at zoos all present this as fact — that shock absorption is occurring in woodpeckers,” said Sam Van Wassenbergh, a biologist at the University of Antwerp. Woodpeckers have even inspired the engineering of shock-absorbing materials and gear, like football helmets. But now, after analyzing high-speed footage of woodpeckers in action, Dr. Van Wassenbergh and colleagues are challenging this long-held belief. They discovered that woodpeckers are not absorbing shocks during pecking and they likely aren’t being concussed by using their heads like hammers. Their work was published in Current Biology on Thursday. When a woodpecker slams its beak into a tree, it generates a shock. If something in a woodpecker’s skull were absorbing these shocks before they reached the brain — the way a car’s airbag absorbs shocks in an accident before they reach a passenger — then, on impact, a woodpecker’s head would decelerate more slowly compared with its beak. With this in mind, the researchers analyzed high-speed videos of six woodpeckers (three species, two birds each) hammering away into a tree. They tracked two points on each bird’s beak and one point on its eye to mark its brain’s location. They found that the eye decelerated at the same rate as the beak and, in a couple of cases, even more quickly, which meant that — at the very least — the woodpecker was not absorbing any shock during pecking. © 2022 The New York Times Company
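
The core of the analysis is a simple kinematic comparison: differentiate each tracked point's position twice to estimate deceleration, then check whether the eye (standing in for the brain) slows more gently than the beak. A sketch of that computation in Python with NumPy; the frame rate and position values are invented for illustration:

    # Compare impact deceleration at the beak and the eye from tracked
    # positions in high-speed video. All numbers here are hypothetical.
    import numpy as np

    fps = 4000                   # assumed high-speed camera frame rate
    dt = 1.0 / fps

    # Fake tracked positions (meters) along the pecking axis.
    beak_pos = np.array([0.0, 3.75e-4, 7.5e-4, 9.0e-4, 9.5e-4, 9.6e-4])
    eye_pos  = np.array([0.0, 3.70e-4, 7.4e-4, 8.9e-4, 9.4e-4, 9.5e-4])

    def peak_decel(pos):
        """Differentiate position twice; the most negative acceleration
        value is the peak deceleration at impact."""
        acc = np.gradient(np.gradient(pos, dt), dt)
        return acc.min()

    # Shock absorption in the skull would make the eye's deceleration
    # noticeably gentler than the beak's; the study found they matched.
    print(f"beak: {peak_decel(beak_pos):.0f} m/s^2")
    print(f"eye:  {peak_decel(eye_pos):.0f} m/s^2")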

Keyword: Brain Injury/Concussion; Evolution
Link ID: 28403 - Posted: 07.16.2022

By Nikk Ogasa A flexible sensor applied to the back of the neck could help researchers detect whiplash-induced concussions in athletes. The sensor, described June 23 in Scientific Reports, is about the size of a bandage and is sleeker and more accurate than some instruments currently in use, says electrical engineer Nelson Sepúlveda of Michigan State University in East Lansing. “My hope is that it will lead to earlier diagnosis of concussions.” Bulky accelerometers in helmets are sometimes used to monitor for concussion in football players. But since the devices are not attached directly to athletes’ bodies, the sensors are prone to false readings from sliding helmets. Sepúlveda and colleagues’ patch adheres to the nape. It is made of two electrodes on an almost paper-thin piece of piezoelectric film, which generates an electric charge when stretched or compressed. When the head and neck move, the patch transmits electrical pulses to a computer. Researchers can analyze those signals to assess sudden movements that can cause concussion. The team tried out the patch on the neck of a human test dummy, dropping the figure from a height of about 60 centimeters. Researchers also packed the dummy’s head with different sensors to provide a baseline level of neck strain. Data from the patch aligned with data gathered by the internal sensors more than 90 percent of the time, Sepúlveda and colleagues found. The researchers are now working on incorporating a wireless transmitter into the patch for an even more streamlined design. © Society for Science & the Public 2000–2022.
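
The article doesn't spell out how the 90 percent agreement figure was computed. One simple way to compare a patch trace against a reference trace is a correlation check, sketched below in Python; the signals are simulated, and the study's actual agreement metric may well differ:

    # Compare the patch signal with the reference sensors inside the
    # dummy's head by correlating the two traces. Illustrative only.
    import numpy as np

    t = np.linspace(0, 0.1, 500)                              # 100 ms drop impact
    reference = np.exp(-40 * t) * np.sin(2 * np.pi * 60 * t)  # fake strain trace
    patch = reference + np.random.normal(0, 0.02, t.size)     # fake patch reading

    r = np.corrcoef(reference, patch)[0, 1]
    print(f"correlation: {r:.2f}", "agree" if r > 0.9 else "disagree")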

Keyword: Brain Injury/Concussion
Link ID: 28383 - Posted: 06.30.2022

By Benjamin Mueller Taking a scan of an injured brain often produces a map of irretrievable losses, revealing spots where damage causes memory difficulties or tremors. But in rare cases, those scans can expose just the opposite: plots of brain regions where an injury miraculously relieves someone’s symptoms, offering clues about how doctors might accomplish the same. A team of researchers has now taken a fresh look at a set of such brain images, drawn from cigarette smokers addicted to nicotine in whom strokes or other injuries spontaneously helped them quit. The results, the scientists said, showed a network of interconnected brain regions that they believe underpins addiction-related disorders affecting potentially tens of millions of Americans. The study, published in the scientific journal Nature Medicine on Monday, supports an idea that has recently gained traction: that addiction lives not in one brain region or another, but rather in a circuit of regions linked by threadlike nerve fibers. The results may provide a clearer set of targets for addiction treatments that deliver electrical pulses to the brain, new techniques that have shown promise in helping people quit smoking. “One of the biggest problems in addiction is that we don’t really know where in the brain the main problem lies that we should target with treatment,” said Dr. Juho Joutsa, one of the study’s lead authors and a neurologist at the University of Turku in Finland. “We are hoping that after this, we have a very good idea of those regions and networks.” Research over the last two decades has solidified the idea that addiction is a disease of the brain. But many people still believe that addiction is voluntary. © 2022 The New York Times Company

Keyword: Drug Abuse; Stroke
Link ID: 28371 - Posted: 06.14.2022

Daniel Lavelle With ADHD, thoughts and impulses intrude on my focus like burglars trying to break into a house. Sometimes these crooks carefully pick the backdoor lock before they silently enter and pilfer all the silverware. At other times, stealth goes out of the window; they’re kicking through the front door and taking whatever they like. Either way, I was supposed to be reading a book just now, but all I can think about is how great it would be if I waded into a river to save a litter of kittens from tumbling down a waterfall just in the nick of time. I’ve got the kittens in my hand, and the crowd has gone wild; the spectres of Gandhi, Churchill and Obi-Wan Kenobi hover over the riverbank, nodding their approval while fireworks crackle overhead … I snap back and realise I’ve read three pages, only I don’t remember a single line. I reread the same pages, but the same thing happens, only now I’m so hung up on concentrating that another fantasy has hijacked my attention. This time I’m imagining that I’m super-focused, so focused that Manchester United have called and told me they want me to be their special penalty taker. These Walter Mitty, borderline narcissistic episodes persist for a while until I give up and go and be distracted somewhere else. Unfortunately, I don’t take Ritalin, a stimulant prescribed to daydreamers like me, so when it comes to focusing I need all the help I can get. Enter Swiss developer and typographic designer Renato Casutt, who has spent six years trying to develop a typographical trick that helps people read more quickly and efficiently. “Bionic reading” is a font people can use on their devices via apps for iPhone and other Apple products. It works by highlighting a limited number of letters in a word in bold, and allowing your brain – or, more specifically, your memory – to fill in the rest. © 2022 Guardian News & Media Limited
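
The trick is mechanical enough to imitate in a few lines: bold a fixed fraction of the leading letters of each word and let the reader's memory complete the rest. A sketch in Python; the half-the-word ratio is a guess, since the article doesn't specify the app's actual rule:

    # A rough imitation of "bionic reading": bold a fixed fraction of the
    # leading letters of each word. The 0.5 ratio is a guess; the app's
    # actual rule isn't documented in the article.
    import math
    import re

    def bionic(text, ratio=0.5):
        def fix(match):
            word = match.group(0)
            k = max(1, math.ceil(len(word) * ratio))
            return f"**{word[:k]}**{word[k:]}"   # Markdown-style bold markers
        return re.sub(r"[A-Za-z]+", fix, text)

    print(bionic("Reading with partially bolded words may guide the eye."))
    # **Read**ing **wi**th **parti**ally **bol**ded **wor**ds ...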

Keyword: ADHD; Dyslexia
Link ID: 28358 - Posted: 06.07.2022

By Virginia Morell Babies don’t babble to sound cute—they’re taking their first steps on the path to learning language. Now, a study shows parrot chicks do the same. Although the behavior has been seen in songbirds and two mammalian species, finding it in these birds is important, experts say, as they may provide the best nonhuman model for studying how we begin to learn language. The find is “exciting,” says Irene Pepperberg, a comparative psychologist at Hunter College not involved with the work. Pepperberg herself discovered something like babbling in a famed African gray parrot named Alex, which she studied for more than 30 years. By unearthing the same thing in another parrot species and in the wild, she says, the team has shown this ability is widespread in the birds. In this study, the scientists focused on green-rumped parrotlets (Forpus passerinus)—a smaller species than Alex, found from Venezuela to Brazil. The team investigated a population at Venezuela’s Hato Masaguaral research center, where scientists maintain more than 100 artificial nesting boxes. Like other parrots, songbirds, and humans (and a few other mammal species), parrotlets are vocal learners. They master their calls by listening and mimicking what they hear. The chicks in the new study started to babble at 21 days, according to camcorders installed in a dozen of their nests. They increased the complexity of their sounds dramatically over the next week, the scientists report today in the Proceedings of the Royal Society B. The baby birds uttered strings of soft peeps, clicks, and grrs, but they weren’t communicating with their siblings or parents, says lead author Rory Eggleston, a Ph.D. student at Utah State University. Rather, like a human infant babbling quietly in their crib, a parrotlet chick made the sounds alone (see video). Indeed, most chicks started their babbling bouts when their siblings were asleep, often doing so without even opening their beaks, says Eggleston, who spent hours analyzing videos of the birds. © 2022 American Association for the Advancement of Science.

Keyword: Language; Animal Communication
Link ID: 28343 - Posted: 06.01.2022

By Laura Sanders Punishing headbutts damage the brains of musk oxen. That observation, made for the first time and reported May 17 in Acta Neuropathologica, suggests that a life full of bell-ringing clashes is not without consequences, even in animals built to bash. Although a musk ox looks like a dirty dust mop on four tiny hooves, it’s formidable. When charging, it can reach speeds up to 60 kilometers an hour before ramming its head directly into an oncoming head. People expected that musk oxen brains could withstand these merciless forces largely unscathed, “that they were magically perfect,” says Nicole Ackermans of the Icahn School of Medicine at Mount Sinai in New York City. “No one actually checked.” In fact, the brains of three wild musk oxen (two females and one male) showed signs of extensive damage, Ackermans and her colleagues found. The damage was similar to what’s seen in people with chronic traumatic encephalopathy, a disorder known to be caused by repetitive head hits (SN: 12/13/17). In the musk ox brains, a form of a protein called tau had accumulated in patterns that suggested brain bashing was to blame. In an unexpected twist, the brains of the females, who hit heads less frequently than males, were worse off than the male’s. The male body, with its heavier skull, stronger neck muscles and forehead fat pads, may cushion the blows to the brain, the researchers suspect. The results may highlight an evolutionary balancing act; the animals can endure just enough brain damage to allow them to survive and procreate. High-level brainwork may not matter much, Ackermans says. “Their day-to-day life is not super complicated.” © Society for Science & the Public 2000–2022.

Keyword: Brain Injury/Concussion; Evolution
Link ID: 28341 - Posted: 05.28.2022

By Anna Gibbs Cradled inside the hushed world of the womb, fetuses might be preparing to come out howling. In the same way newborn humans can cry as soon as they’re born, common marmoset monkeys (Callithrix jacchus) produce contact calls to seek attention from their caregivers. Those vocalizations are not improv, researchers report in a preprint posted April 14 at bioRxiv. Ultrasound imaging of marmoset fetuses reveals that their mouths are already mimicking the distinctive pattern of movements used to emit their first calls, long before the production of sound. Early behaviors in infants are commonly described as “innate” or “hard-wired,” but a team at Princeton University wondered how exactly those behaviors develop. How does a baby know how to cry as soon as it’s born? The secret may lie in what’s happening before birth. “People tend to ignore the fetal period,” says Darshana Narayanan, a behavioral neuroscientist who did the research while at Princeton University. “They just think that it’s like the baby’s just vegetating and waiting to be born…. [But] that’s where many things begin.” Research shows, for instance, that chicks inside their eggs are already learning to identify their species’ call (SN: 9/16/21). “So much is developing so much earlier in development than we previously thought,” says developmental psychobiologist Samantha Carouso-Peck, executive director of Grassland Bird Trust in Fort Edward, N.Y., who was not involved in the research. But, she says, “we really haven’t looked much at all at the production side of this. Most of what we know is the auditory side.” Carouso-Peck studies vocal learning in songbirds and how it applies to how humans acquire language. © Society for Science & the Public 2000–2022.

Keyword: Animal Communication; Language
Link ID: 28325 - Posted: 05.11.2022

By Laura Sanders Young kids’ brains are especially tuned to their mothers’ voices. Teenagers’ brains, in their typical rebellious glory, are most decidedly not. That conclusion, described April 28 in the Journal of Neuroscience, may seem laughably obvious to parents of teenagers, including neuroscientist Daniel Abrams of Stanford University School of Medicine. “I have two teenaged boys myself, and it’s a kind of funny result,” he says. But the finding may reflect something much deeper than a punch line. As kids grow up and expand their social connections beyond their family, their brains need to be attuned to that growing world. “Just as an infant is tuned into a mom, adolescents have this whole other class of sounds and voices that they need to tune into,” Abrams says. He and his colleagues scanned the brains of 7- to 16-year-olds as they heard the voices of either their mothers or unfamiliar women. To simplify the experiment down to just the sound of a voice, the words were gibberish: teebudieshawlt, keebudieshawlt and peebudieshawlt. As the children and teenagers listened, certain parts of their brains became active. Previous experiments by Abrams and his colleagues have shown that certain regions of the brains of kids ages 7 to 12 — particularly those parts involved in detecting rewards and paying attention — respond more strongly to mom’s voice than to a voice of an unknown woman. “In adolescence, we show the exact opposite of that,” Abrams says. In these same brain regions in teens, unfamiliar voices elicited greater responses than the voices of their own dear mothers. The shift from mother to other seems to happen between ages 13 and 14. Society for Science & the Public 2000–2022.

Keyword: Language; Development of the Brain
Link ID: 28307 - Posted: 04.30.2022

By Katharine Q. Seelye Ursula Bellugi, a pioneer in the study of the biological foundations of language who was among the first to demonstrate that sign language was just as complex, abstract and systematic as spoken language, died on Sunday in San Diego. She was 91. Her death, at an assisted living facility, was confirmed by her son Rob Klima. Dr. Bellugi was a leading researcher at the Salk Institute for Biological Studies in San Diego for nearly five decades and, for much of that time, was director of its laboratory for cognitive neuroscience. She made significant contributions in three main areas: the development of language in children; the linguistic structure and neurological basis of American Sign Language; and the social behavior and language abilities of people with a rare genetic disorder, Williams syndrome. “She leaves an indelible legacy of shedding light on how humans communicate and socialize with each other,” Rusty Gage, president of the Salk Institute, said in a statement. Dr. Bellugi’s work, much of it done in collaboration with her husband, Edward S. Klima, advanced understanding of the brain and the origins of language, both signed and spoken. American Sign Language was first described as a true language in 1960 by William C. Stokoe Jr., a professor at Gallaudet University, the world’s only liberal arts university devoted to deaf people. But he was ridiculed and attacked for that claim. Dr. Bellugi and Dr. Klima, who died in 2008, demonstrated conclusively that the world’s signed languages — of which there are more than 100 — were actual languages in their own right, not just translations of spoken languages. Dr. Bellugi, who focused on American Sign Language, established that these linguistic systems were passed down, in all their complexity, from one generation of deaf people to the next. For that reason, the scientific community regards her as the founder of the neurobiology of American Sign Language. The couple’s work led to a major discovery at the Salk lab: that the left hemisphere of the brain has an innate predisposition for language, whether spoken or signed. That finding gave scientists fresh insight into how the brain learns, interprets and forgets language. © 2022 The New York Times Company

Keyword: Language; Laterality
Link ID: 28296 - Posted: 04.23.2022

Grace Browne In early February 2016, after reading an article featuring a couple of scientists at the Massachusetts Institute of Technology who were studying how the brain reacts to music, a woman felt inclined to email them. “I have an interesting brain,” she told them. EG, who has requested to go by her initials to protect her privacy, is missing her left temporal lobe, a part of the brain thought to be involved in language processing. EG, however, wasn’t quite the right fit for what the scientists were studying, so they referred her to Evelina Fedorenko, a cognitive neuroscientist, also at MIT, who studies language. It was the beginning of a fruitful relationship. The first paper based on EG’s brain was recently published in the journal Neuropsychologia, and Fedorenko’s team expects to publish several more. For EG, who is in her fifties and grew up in Connecticut, missing a large chunk of her brain has had surprisingly little effect on her life. She has a graduate degree, has enjoyed an impressive career, and speaks Russian—a second language—so well that she has dreamed in it. She first learned her brain was atypical in the autumn of 1987, at George Washington University Hospital, when she had it scanned for an unrelated reason. The cause was likely a stroke that happened when she was a baby; today, there is only cerebrospinal fluid in that brain area. For the first decade after she found out, EG didn't tell anyone other than her parents and her two closest friends. “It creeped me out,” she says. Since then, she has told more people, but it's still a very small circle that is aware of her unique brain anatomy. © Condé Nast Britain 2022.

Keyword: Development of the Brain; Language
Link ID: 28295 - Posted: 04.20.2022

By Paula Span On a recent afternoon in Bastrop, Texas, Janet Splawn was walking her dog, Petunia, a Pomeranian-Chihuahua mix. She said something to her grandson, who lives with her and had accompanied her on the stroll. But he couldn’t follow; her speech had suddenly become incoherent. “It was garbled, like mush,” Ms. Splawn recalled a few days later from a hospital in Austin. “But I got mad at him for not understanding. It was kind of an eerie feeling.” People don’t take chances when 87-year-olds develop alarming symptoms. Her grandson drove her to the nearest hospital emergency room, which then transferred her to a larger hospital for a neurology consultation. The diagnosis: a transient ischemic attack, or T.I.A. For decades, patients have been relieved to hear that phrase. The sudden onset of symptoms like weakness or numbness (often on one side), loss of vision (often in one eye) and trouble with language (speaking, understanding or both) — if resolved in a few minutes — is considered “transient.” Whew. But in a recent editorial in JAMA, two neurologists called for doctors and patients to abandon the term transient ischemic attack. It’s too reassuring, they argued, and too likely to lead someone with passing symptoms to wait until the next morning to call a doctor or let a week go by before arranging an appointment. That’s dangerous. Better, they said, to call a T.I.A. what it is: a stroke. More specifically, a minor ischemic stroke. (Almost 90 percent of strokes, which afflict 795,000 Americans a year, are ischemic, meaning they result from a clot that reduces blood flow to the brain.) Until recently, T.I.A.s “were played down,” said Dr. J. Donald Easton, a neurologist recently retired from the University of California, San Francisco, and an author of the editorial. “The person thinks, ‘Oh, it’s over. It goes away, so all is well.’ But all is not well. There’s trouble to come, and it’s coming soon.” The advent of brain imaging — first CT scans in the late 1970s, then the more precise M.R.I.s in the 1990s — has shown that many T.I.A.s, sometimes called ministrokes, cause visible and permanent brain damage. © 2022 The New York Times Company

Keyword: Stroke
Link ID: 28275 - Posted: 04.09.2022

By Ingrid K. Williams This article is part of a limited series on artificial intelligence’s potential to solve everyday problems. Imagine a test as quick and easy as having your temperature taken or your blood pressure measured that could reliably identify an anxiety disorder or predict an impending depressive relapse. Health care providers have many tools to gauge a patient’s physical condition, yet no reliable biomarkers — objective indicators of medical states observed from outside the patient — for assessing mental health. But some artificial intelligence researchers now believe that the sound of your voice might be the key to understanding your mental state — and A.I. is perfectly suited to detect such changes, which are difficult, if not impossible, to perceive otherwise. The result is a set of apps and online tools designed to track your mental status, as well as programs that deliver real-time mental health assessments to telehealth and call-center providers. Psychologists have long known that certain mental health issues can be detected by listening not only to what a person says but how they say it, said Maria Espinola, a psychologist and assistant professor at the University of Cincinnati College of Medicine. With depressed patients, Dr. Espinola said, “their speech is generally more monotone, flatter and softer. They also have a reduced pitch range and lower volume. They take more pauses. They stop more often.” Patients with anxiety feel more tension in their bodies, which can also change the way their voice sounds, she said. “They tend to speak faster. They have more difficulty breathing.” Today, these types of vocal features are being leveraged by machine learning researchers to predict depression and anxiety, as well as other mental illnesses like schizophrenia and post-traumatic stress disorder. The use of deep-learning algorithms can uncover additional patterns and characteristics, as captured in short voice recordings, that might not be evident even to trained experts. © 2022 The New York Times Company
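
The features Dr. Espinola describes (volume, pitch range, pausing) are all directly measurable from a recording, which is what makes them usable as model inputs. A hedged sketch of the feature-extraction step in Python with librosa; the file name, thresholds and feature choices are illustrative, not any product's actual pipeline:

    # Extract a few of the vocal features the article mentions (volume,
    # pitch range, pausing) from a recording. Thresholds are illustrative.
    import librosa
    import numpy as np

    y, sr = librosa.load("speech_sample.wav", sr=16000)  # hypothetical file

    # Loudness: root-mean-square energy per frame.
    rms = librosa.feature.rms(y=y)[0]

    # Pitch range: spread of fundamental-frequency estimates (voiced frames).
    f0, _, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    pitch_range = np.nanmax(f0) - np.nanmin(f0)

    # Pausing: fraction of frames with near-silent energy.
    pause_fraction = np.mean(rms < 0.01 * rms.max())

    features = [rms.mean(), pitch_range, pause_fraction]
    print(features)   # inputs a classifier could be trained on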

Keyword: Depression; Schizophrenia
Link ID: 28271 - Posted: 04.06.2022

By Tess Joosse My dog Leo clearly knows the difference between my voice and the barks of the beagle next door. When I speak, he looks at me with love; when our canine neighbor makes his mind known, Leo barks back with disdain. A new study backs up what I and my fellow dog owners have long suspected: Dogs’ brains process human and canine vocalizations differently, suggesting they evolved to distinguish our voices from their own. “The fact that dogs use auditory information alone to distinguish between human and dog sound is significant,” says Jeffrey Katz, a cognitive neuroscientist at Auburn University who is not involved with the work. Previous research has found that dogs can match human voices with expressions. When played an audio clip of a lady laughing, for example, they’ll often look at a photo of a smiling woman. But how exactly the canine brain processes sounds isn’t clear. MRI has shown certain regions of the dog brain are more active when a pup hears another dog whine or bark. But those images can’t reveal exactly when neurons in the brain are firing, and whether they fire differently in response to different noises. So in the new study, Anna Bálint, a canine neuroscientist at Eötvös Loránd University, turned to an electroencephalogram, which can measure individual brain waves. She and her colleagues recruited 17 family dogs, including several border collies, golden retrievers, and a German shepherd, that were previously taught to lie still for several minutes at a time. The scientists attached electrodes to each dog’s head to record its brain response—not an easy task, it turns out. Unlike humans’ bony noggins, dog heads have lots of muscles that can obstruct a clear readout, Bálint says. © 2022 American Association for the Advancement of Science.
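
An electroencephalogram's advantage over MRI here is timing: averaging many short snippets of the voltage trace, each time-locked to a stimulus onset, cancels unrelated background activity and leaves the stimulus-evoked response at millisecond resolution. A minimal sketch of that averaging step in Python with NumPy; the sampling rate, data and stimulus times are simulated, not the study's:

    # Event-related potential: average EEG epochs time-locked to each
    # stimulus onset. Data and onsets here are simulated, not from the study.
    import numpy as np

    sr = 250                                     # samples per second (assumed)
    eeg = np.random.normal(0, 5, 60 * sr)        # fake 60 s channel, microvolts
    onsets = np.arange(2 * sr, 58 * sr, 2 * sr)  # fake stimulus times

    pre, post = int(0.1 * sr), int(0.6 * sr)     # 100 ms before to 600 ms after
    epochs = np.stack([eeg[t - pre : t + post] for t in onsets])

    # Baseline-correct each epoch, then average: noise cancels out and any
    # stimulus-locked response remains.
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)
    erp = epochs.mean(axis=0)
    print(erp.shape)   # (175,) samples spanning -100 ms to +600 ms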

Keyword: Language; Animal Communication
Link ID: 28270 - Posted: 04.06.2022