Chapter 19. Language and Lateralization




Kate Wild “The skull acts as a bastion of privacy; the brain is the last private part of ourselves,” Australian neurosurgeon Tom Oxley says from New York. Oxley is the CEO of Synchron, a neurotechnology company born in Melbourne that has successfully trialled hi-tech brain implants that allow people to send emails and texts purely by thought. In July this year, it became the first company in the world, ahead of competitors like Elon Musk’s Neuralink, to gain approval from the US Food and Drug Administration (FDA) to conduct clinical trials of brain computer interfaces (BCIs) in humans in the US. Synchron has already successfully fed electrodes into paralysed patients’ brains via their blood vessels. The electrodes record brain activity and feed the data wirelessly to a computer, where it is interpreted and used as a set of commands, allowing the patients to send emails and texts. BCIs, which allow a person to control a device via a connection between their brain and a computer, are seen as a gamechanger for people with certain disabilities. “No one can see inside your brain,” Oxley says. “It’s only our mouths and bodies moving that tells people what’s inside our brain … For people who can’t do that, it’s a horrific situation. What we’re doing is trying to help them get what’s inside their skull out. We are totally focused on solving medical problems.” BCIs are one of a range of developing technologies centred on the brain. Brain stimulation is another, which delivers targeted electrical pulses to the brain and is used to treat cognitive disorders. Others, like imaging techniques fMRI and EEG, can monitor the brain in real time. “The potential of neuroscience to improve our lives is almost unlimited,” says David Grant, a senior research fellow at the University of Melbourne. “However, the level of intrusion that would be needed to realise those benefits … is profound”. © 2021 Guardian News & Media Limited

Keyword: Brain imaging; Language
Link ID: 28070 - Posted: 11.09.2021

Jon Hamilton Headaches, nausea, dizziness, and confusion are among the most common symptoms of a concussion. But researchers say a blow to the head can also make it hard to understand speech in a noisy room. "Making sense of sound is one of the hardest jobs that we ask our brains to do," says Nina Kraus, a professor of neurobiology at Northwestern University. "So you can imagine that a concussion, getting hit in the head, really does disrupt sound processing." About 15% to 20% of concussions cause persistent sound-processing difficulties, Kraus says, which suggests that hundreds of thousands of people are affected each year in the U.S. The problem is even more common in the military, where many of the troops who saw combat in Iraq and Afghanistan sustained concussions from roadside bombs. Our perception of sound starts with nerve cells in the inner ear that transform pressure waves into electrical signals, Kraus says. But it takes a lot of brain power to transform those signals into the auditory world we perceive. The brain needs to compare the signals from two ears to determine the source of a sound. Then it needs to keep track of changes in volume, pitch, timing and other characteristics. Kraus's lab, called Brainvolts, is conducting a five-year study of 500 elite college athletes to learn how a concussion can affect the brain's ability to process the huge amount of auditory information it receives. And she devotes an entire chapter to concussion in her 2021 book, Of Sound Mind: How Our Brain Constructs a Meaningful Sonic World. © 2021 npr
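The comparison between the two ears that Kraus describes can be made concrete: one cue the brain uses to localize sound is the interaural time difference (ITD), the tiny lag between a sound's arrival at each ear, which can be estimated by finding the lag that best aligns the two signals. A minimal sketch with simulated signals (illustrative only, not the Brainvolts lab's actual analysis):

```python
import numpy as np

def interaural_time_difference(left, right, sample_rate):
    """Estimate how many seconds `right` lags `left`, using the lag
    that maximizes the cross-correlation of the two signals."""
    corr = np.correlate(right, left, mode="full")
    best_lag = int(np.argmax(corr)) - (len(left) - 1)
    return best_lag / sample_rate

# Simulated click arriving at the right ear 3 samples after the left.
rate = 44_100  # samples per second
left = np.zeros(100)
left[50] = 1.0
right = np.roll(left, 3)  # delayed copy

itd = interaural_time_difference(left, right, rate)  # 3/44100 s, ~68 microseconds
```

Real auditory circuits make this comparison in specialized brainstem neurons rather than by explicit cross-correlation; the point of the sketch is only the shape of the computation.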

Keyword: Brain Injury/Concussion; Hearing
Link ID: 28064 - Posted: 11.06.2021

Jordana Cepelewicz Hearing is so effortless for most of us that it’s often difficult to comprehend how much information the brain’s auditory system needs to process and disentangle. It has to take incoming sounds and transform them into the acoustic objects that we perceive: a friend’s voice, a dog barking, the pitter-patter of rain. It has to extricate relevant sounds from background noise. It has to determine that a word spoken by two different people has the same linguistic meaning, while also distinguishing between those voices and assessing them for pitch, tone and other qualities. According to traditional models of neural processing, when we hear sounds, our auditory system extracts simple features from them that then get combined into increasingly complex and abstract representations. This process allows the brain to turn the sound of someone speaking, for instance, into phonemes, then syllables, and eventually words. But in a paper published in Cell in August, a team of researchers challenged that model, reporting instead that the auditory system often processes sound and speech simultaneously and in parallel. The findings suggest that how the brain makes sense of speech diverges dramatically from scientists’ expectations, with the signals from the ear branching into distinct brain pathways at a surprisingly early stage in processing — sometimes even bypassing a brain region thought to be a crucial stepping-stone in building representations of complex sounds.

Keyword: Language; Hearing
Link ID: 28058 - Posted: 10.30.2021

Nicola Davis They have fluffy ears, a penetrating stare and a penchant for monogamy. But it turns out that indris – a large, critically endangered species of lemur – have an even more fascinating trait: an unexpected sense of rhythm. Indri indri are known for their distinctive singing, a sound not unlike a set of bagpipes being stepped on. The creatures often strike up a song with members of their family either in duets or choruses, featuring sounds from roars to wails. Now scientists say they have analysed the songs of 39 indris living in the rainforest of Madagascar, revealing that – like humans – the creatures employ what are known as categorical rhythms. These rhythms are essentially distinctive and predictable patterns of intervals between the onset of notes. For example in a 1:1 rhythm, all the intervals are of equal length, while a 1:2 rhythm has some twice as long as those before or after – like the opening bars of We Will Rock You by Queen. “They are quite predictable [patterns], because the next note is going to come either one unit or two whole units after the previous note,” said Dr Andrea Ravignani, co-author of the research from the Max Planck Institute for Psycholinguistics. While the 1:1 rhythms have previously been identified in certain songbirds, the team say their results are the first time categorical rhythms have been identified in a non-human mammal. “The evidence is even stronger than in birds,” said Ravignani. © 2021 Guardian News & Media Limited
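The 1:1 and 1:2 patterns described here are commonly quantified as ratios of adjacent inter-onset intervals: for consecutive intervals t_k and t_k+1, the ratio t_k / (t_k + t_k+1) is 0.5 for a 1:1 rhythm and roughly 1/3 or 2/3 for a 1:2 rhythm. A toy sketch of that computation (illustrative only; not the study's actual code):

```python
def rhythm_ratios(onsets):
    """Ratios t_k / (t_k + t_{k+1}) for adjacent inter-onset intervals."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return [t / (t + u) for t, u in zip(intervals, intervals[1:])]

def classify(ratio, tol=0.05):
    """Label a ratio as a categorical rhythm if it sits near 1/2, 1/3 or 2/3."""
    for label, target in (("1:1", 1 / 2), ("1:2", 1 / 3), ("2:1", 2 / 3)):
        if abs(ratio - target) < tol:
            return label
    return "other"

even = rhythm_ratios([0.0, 0.5, 1.0, 1.5])        # evenly spaced notes -> all 1:1
swing = rhythm_ratios([0.0, 1.0, 1.5, 2.5, 3.0])  # long-short alternation -> 2:1 / 1:2
```

Because the measure uses ratios rather than absolute durations, it is insensitive to overall tempo, which is what makes such categories comparable across individuals and species.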

Keyword: Animal Communication; Language
Link ID: 28050 - Posted: 10.27.2021

By Rachel Fritts Across North America, hundreds of bird species waste time and energy raising chicks that aren’t their own. They’re the victims of a “brood parasite” called the cowbird, which adds its own egg to their clutch, tricking another species into raising its offspring. One target, the yellow warbler, has a special call to warn egg-warming females when cowbirds are casing the area. Now, researchers have found the females act on that warning 1 day later—suggesting their long-term memories might be much better than thought. “It’s a very sophisticated and subtle behavioral response,” says Erick Greene, a behavioral ecologist at the University of Montana, Missoula, who was not involved in the study. “Am I surprised? I guess I’m more in awe. It’s pretty dang cool.” Birds have been dazzling scientists with their intellects for decades. Western scrub jays, for instance, can remember where they’ve stored food for the winter—and can even keep track of when it will spoil. There’s evidence that other birds might have a similarly impressive ability to remember certain meaningful calls. “Animals are smart in the context in which they need to be smart,” says Mark Hauber, an animal behavior researcher at the University of Illinois, Urbana-Champaign (UIUC), and the Institute of Advanced Studies in Berlin, who co-authored the new study. He wanted to see whether yellow warblers had the capacity to remember their own important warning call known as a seet. The birds make the staccato sound of this call only when a cowbird is near. When yellow warbler females hear it, they go back to their nests and sit tight. (It could just as well be called a “seat” call.) But it’s been unclear whether they still remember the warning in the morning. © 2021 American Association for the Advancement of Science.

Keyword: Animal Communication; Learning & Memory
Link ID: 28039 - Posted: 10.16.2021

Linda Geddes Your dog might follow commands such as “sit”, or become uncontrollably excited at the mention of the word “walkies”, but when it comes to remembering the names of toys and other everyday items, most seem pretty absent-minded. Now a study of six “genius dogs” has advanced our understanding of dogs’ memories, suggesting some of them possess a remarkable grasp of the human language. Hungarian researchers spent more than two years scouring the globe for dogs who could recognise the names of their various toys. Although most can learn commands to some degree, learning the names of items appears to be a very different task, with most dogs unable to master this skill. Max (Hungary), Gaia (Brazil), Nalani (Netherlands), Squall (US), Whisky (Norway), and Rico (Spain) made the cut after proving they knew the names of more than 28 toys, with some knowing more than 100. They were then enlisted to take part in a series of livestreamed experiments known as the Genius Dog Challenge. “These gifted dogs can learn new names of toys in a remarkable speed,” said Dr Claudia Fugazza at Eötvös Loránd University in Budapest, who led the research team. “In our previous study we found that they could learn a new toy name after hearing it only four times. But, with such short exposure, they did not form a long-term memory of it.” To further push the dogs’ limits, their owners were tasked with teaching them the names of six, and then 12 new toys in a single week. © 2021 Guardian News & Media Limited

Keyword: Animal Communication; Language
Link ID: 28023 - Posted: 10.06.2021

By Jackie Rocheleau Elevated blood levels of a specific protein may help scientists predict who has a better chance of bouncing back from a traumatic brain injury. The protein, called neurofilament light or NfL for short, lends structural support to axons, the tendrils that send messages between brain cells. Levels of NfL peak on average at 10 times the typical level 20 days after injury and stay above normal a year later, researchers report September 29 in Science Translational Medicine. The higher the peak NfL blood concentrations after injury, the tougher the recovery for people with TBI six and 12 months later, shows the study of 197 people treated at eight trauma centers across Europe for moderate to severe TBI. Brain scans of 146 participants revealed that their peak NfL concentrations predicted the extent of brain shrinkage after six months, and axon damage at six and 12 months after injury, neurologist Neil Graham of Imperial College London and his colleagues found. These researchers also had a unique opportunity to check that the blood biomarker, which gives indirect clues about the brain injury, actually measured what was happening in the brain. In 18 of the participants that needed brain surgery, researchers sampled the fluid surrounding injured neurons. NfL concentrations there correlated with NfL concentrations in the blood. “The work shows that a new ultrasensitive blood test can be used to accurately diagnose traumatic brain injury,” says Graham. “This blood test can predict quite precisely who’s going to make a good recovery and who’s going to have more difficulties.” © Society for Science & the Public 2000–2021.
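The validation step described here — checking that blood NfL tracks NfL in the fluid surrounding injured neurons — comes down to a correlation between paired measurements. A sketch on made-up numbers (all values below are hypothetical; the study's real data are in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired NfL levels for 18 surgical patients: brain-fluid
# values simulated as proportional to blood values plus measurement noise.
blood_nfl = rng.uniform(10.0, 200.0, size=18)
fluid_nfl = 5.0 * blood_nfl + rng.normal(0.0, 40.0, size=18)

r = np.corrcoef(blood_nfl, fluid_nfl)[0, 1]  # Pearson correlation coefficient
```

A correlation near 1 on such paired samples is what licenses using the easy measurement (blood) as a stand-in for the hard one (fluid around injured neurons).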

Keyword: Brain Injury/Concussion
Link ID: 28017 - Posted: 10.02.2021

By Sierra Carter Black women who have experienced more racism throughout their lives have stronger brain responses to threat, which may hurt their long-term health, according to a new study I conducted with clinical neuropsychologist Negar Fani and other colleagues. I am part of a research team that for more than 15 years has studied the ways stress related to trauma exposure can affect the mind and body. In our recent study, we took a closer look at a stressor that Black Americans disproportionately face in the United States: racism. My colleagues and I completed research with 55 Black women who reported how much they’d been exposed to traumatic experiences, such as childhood abuse and physical or sexual violence, and to racial discrimination, experiencing unfair treatment due to race or ethnicity. We asked them to focus on a task that required attention while simultaneously looking at stressful images. We used functional MRI to observe their brain activity during that time. We found that Black women who reported more experiences of racial discrimination had more response activity in brain regions that are associated with vigilance and watching out for threat — that is, the middle occipital cortex and ventromedial prefrontal cortex. Their reactions were above and beyond the response caused by traumatic experiences not related to racism. Our research suggests that racism had a traumalike effect on Black women’s health; being regularly attuned to the threat of racism can tax important body-regulation tools and worsen brain health.

Keyword: Stress; Brain Injury/Concussion
Link ID: 28015 - Posted: 10.02.2021

By Sam Roberts Washoe was 10 months old when her foster parents began teaching her to talk, and five months later they were already trumpeting her success. Not only had she learned words; she could also string them together, creating expressions like “water birds” when she saw a pair of swans and “open flower” to gain admittance to a garden. Washoe was a chimpanzee. She had been born in West Africa, probably orphaned when her mother was killed, sold to a dealer, flown to the United States for use in testing by the Air Force and adopted by R. Allen Gardner and his wife, Beatrix. She was raised as if she were a human child. She craved oatmeal with onions and pumpkin pudding. “The object of our research was to learn how much chimps are like humans,” Professor Gardner told Nevada Today, a University of Nevada publication, in 2007. “To measure this accurately, chimps would be needed to be raised as human children, and to do that, we needed to share a common language.” Washoe ultimately learned some 200 words, becoming what researchers said was the first nonhuman to communicate using sign language developed for the deaf. Professor Gardner, an ethologist who, with his wife, raised the chimpanzee for nearly five years, died on Aug. 20 at his ranch near Reno, Nev. He was 91. His death was announced by the University of Nevada, Reno, where he had joined the faculty in 1963 and conducted his research until he retired in 2010. When scientific journals reported in 1967 that Washoe (pronounced WA-sho), named after a county in Nevada, had learned to recognize and use multiple gestures and expressions in sign language, the news electrified the world of psychologists and ethologists who study animal behavior. © 2021 The New York Times Company

Keyword: Language; Evolution
Link ID: 28013 - Posted: 10.02.2021

By Jonathan Lambert Vampire bats may be bloodthirsty, but that doesn’t mean they can’t share a drink with friends. Fights can erupt among bats over gushing wounds bitten into unsuspecting animals. But bats that have bonded while roosting often team up to drink blood away from home, researchers report September 23 in PLOS Biology. Vampire bats (Desmodus rotundus) can form long-term social bonds with each other through grooming, sharing regurgitated blood meals and generally hanging out together at the roost (SN: 10/31/19). But whether these friendships, which occur between both kin and nonkin, extend to the bats’ nightly hunting had been unclear. “They’re flying around out there, but we didn’t know if they were still interacting with each other,” says Gerald Carter, an evolutionary biologist at Ohio State University in Columbus. To find out, Carter and his colleague Simon Ripperger of the Museum für Naturkunde in Berlin built on previous research that uncovered a colony’s social network using bat backpacks. Tiny computer sensors glued to 50 female bats in Tolé, Panama, continuously registered proximity to other sensors both within the roost and outside, revealing when bats met up while foraging. It can take 10 to 40 minutes for a bat to bite a small, diamond-shaped wound into an animal’s flesh, and fights can sometimes break out over access to wounds. But researchers found that bats who are friendly back at the roost likely feed together in the field, potentially saving time and energy. © Society for Science & the Public 2000–2021

Keyword: Evolution
Link ID: 28005 - Posted: 09.25.2021

Jon Hamilton People who have had a stroke appear to regain more hand and arm function if intensive rehabilitation starts two to three months after the injury to their brain. A study of 72 stroke patients suggests this is a "critical period," when the brain has the greatest capacity to rewire, a team reports in this week's issue of the journal PNAS. The finding challenges the current practice of beginning rehabilitation as soon as possible after a stroke and suggests intensive rehabilitation should go on longer than most insurance coverage allows, says Elissa Newport, a co-author of the study and director of the Center for Brain Plasticity and Recovery at Georgetown University Medical Center. Newport was speaking in place of the study's lead author, Dr. Alexander Dromerick, who died after the study was accepted but before it was published. If the results are confirmed with other larger studies, "the clinical protocol for the timing of stroke rehabilitation would be changed," says Li-Ru Zhao, a professor of neurosurgery at Upstate Medical University in Syracuse, N.Y., who was not involved in the research. The study involved patients treated at Medstar National Rehabilitation Hospital in Washington, D.C., most in their 50s and 60s. One of the study participants was Anthony McEachern, who was 45 when he had a stroke in 2017. Just a few hours earlier, McEachern had been imitating Michael Jackson dance moves with his kids. But at home that night he found himself unable to stand up. © 2021 npr

Keyword: Stroke; Learning & Memory
Link ID: 28002 - Posted: 09.22.2021

Christie Wilcox If it walks like a duck and talks like a person, it’s probably a musk duck (Biziura lobata)—the only waterfowl species known that can learn sounds from other species. The Australian species’ facility for vocal learning had been mentioned anecdotally in the ornithological literature; now, a paper published September 6 in Philosophical Transactions of the Royal Society B reviews and discusses the evidence, which includes 34-year-old recordings made of a human-reared musk duck named Ripper engaging in an aggressive display while quacking “you bloody fool.” The Scientist spoke with the lead author on the paper, Leiden University animal behavior researcher Carel ten Cate, to learn more about these unique ducks and what their unexpected ability reveals about the evolution of vocal learning. The Scientist: What is vocal learning? Carel ten Cate: Vocal learning, as it is used in this case, is that animals and humans, they learn their sounds from experience. So they learn from what they hear around them, which will usually be the parents, but it can also be other individuals. And if they don’t get that sort of exposure, then they will be unable to produce species-specific vocalizations, or in the human case, speech sounds and proper spoken language. © 1986–2021 The Scientist.

Keyword: Language; Evolution
Link ID: 27987 - Posted: 09.13.2021

By Carolyn Wilke Babies may laugh like some apes a few months after birth before transitioning to chuckling more like human adults, a new study finds. Laughter links humans to great apes, our evolutionary kin (SN: 6/4/09). Human adults tend to laugh while exhaling (SN: 6/10/15), but chimpanzees and bonobos mainly laugh in two ways. One is like panting, with sound produced on both in and out breaths, and the other has outbursts occurring on exhales, like human adults. Less is known about how human babies laugh. So Mariska Kret, a cognitive psychologist at Leiden University in the Netherlands, and colleagues scoured the internet for videos with laughing 3- to 18-month-olds, and asked 15 speech sound specialists and thousands of novices to judge the babies’ laughs. After evaluating dozens of short audio clips, experts and nonexperts alike found that younger infants laughed during inhalation and exhalation, while older infants laughed more on the exhale. That finding suggests that infants’ laughter becomes less apelike with age, the researchers report in the September Biology Letters. Humans start to laugh around 3 months of age, but early on, “it hasn’t reached its full potential,” Kret says. Both babies’ maturing vocal tracts and their social interactions may influence the development of the sounds, the researchers say.

Keyword: Language; Evolution
Link ID: 27983 - Posted: 09.11.2021

By Jonathan Lambert At least 65 million years of evolution separate humans and greater sac-winged bats, but these two mammals share a key feature of learning how to speak: babbling. Just as human infants babble their way from “da-da-da-da” to “Dad,” wild bat pups (Saccopteryx bilineata) learn the mating and territorial songs of adults by first babbling out the fundamental syllables of the vocalizations, researchers report in the Aug. 20 Science. These bats now join humans as the only clear examples of mammals who learn to make complex vocalizations through babbling. “This is a hugely important step forward in the study of vocal learning,” says Tecumseh Fitch, an evolutionary biologist at the University of Vienna not involved in the new study. “These findings suggest that there are deep parallels between how humans and young bats learn to control their vocal apparatus,” he says. The work could enable future studies that might allow researchers to peer deeper into the brain activity that underpins vocal learning. Before complex vocalizations, whether words or mating songs, can be spoken or sung, vocalizers must learn to articulate the syllables that make up a species’s vocabulary, says Ahana Fernandez, an animal behavior biologist at the Museum für Naturkunde in Berlin. “Babbling is a way of practicing,” and honing those vocalizations, she says. The rhythmic, repetitive “ba-ba-ba’s” and “ga-ga-ga’s” of human infants may sound like gibberish, but they are necessary exploratory steps toward learning how to talk. Seeing whether babbling is required for any animal that learns complex vocalizations necessitates looking in other species. © Society for Science & the Public 2000–2021.

Keyword: Language; Hearing
Link ID: 27957 - Posted: 08.21.2021

Lydia Denworth Lee Reeves always wanted to be a veterinarian. When he was in high school in the Washington, D.C., suburbs, he went to an animal hospital near his house on a busy Saturday morning to apply for a job. The receptionist said the doctor was too busy to talk. But Reeves was determined and waited. Three and a half hours later, after all the dogs and cats had been seen, the veterinarian emerged and asked Reeves what he could do for him. Reeves, who has stuttered since he was three years old, had trouble answering. “I somehow struggled out the fact that I wanted the job and he asked me what my name was,” he says. “I couldn’t get my name out to save my life.” The vet finally reached for a piece of paper and had Reeves write down his name and add his phone number, but he said there was no job available. “I remember walking out of that clinic that morning thinking that essentially my life was over,” Reeves says. “Not only was I never going to become a veterinarian, but I couldn’t even get a job cleaning cages.” More than 50 years have passed. Reeves, who is now 72, has gone on to become an effective national advocate for people with speech impairments, but the frustration and embarrassment of that day are still vivid. They are also emblematic of the complicated experience that is stuttering. Technically, stuttering is a disruption in the easy flow of speech, but the physical struggle and the emotional effects that often go with it have led observers to wrongly attribute the condition to defects of the tongue or voice box, problems with cognition, emotional trauma or nervousness, forcing left-handed children to become right-handed, and, most unfortunately, poor parenting. Freudian psychiatrists thought stuttering represented “oral-sadistic conflict,” whereas the behavioralists argued that labeling a child a stutterer would exacerbate the problem. Reeves’s parents were told to call no attention to his stutter—wait it out, and it would go away. 
© 2021 Scientific American

Keyword: Language
Link ID: 27942 - Posted: 08.11.2021

Katharine Sanderson Liz Williams was standing pitchside at a women’s rugby match, and she did not like what she was seeing. Williams, who researches forensic biomechanics at Swansea University, UK, had equipped some of the players with a mouthguard that contained a sensor to measure the speed of head movement. She wanted to understand more about head injuries in the brutal sport. “There were a few instances when my blood went cold,” Williams said. When the women fell in a tackle, their heads would often whiplash into the ground. The sensors showed that the skull was accelerating — indicating an increased risk of brain injury. But medical staff at the match, not trained to look out for this type of head movement as a cause of injury, deemed the women fine to play on. Such whiplash injuries are much rarer when males play. Williams’ observations highlight an increasingly apparent problem. A growing body of data suggests that female athletes are at significantly greater risk of a traumatic brain injury event than male athletes. They also fare worse after a concussion and take longer to recover. As researchers gather more data, the picture becomes steadily more alarming. Female athletes are speaking out about their own experiences, including Sue Lopez, the United Kingdom’s first semi-professional female football player in the 1970s, who now has dementia — a diagnosis she has linked to concussions from heading the ball. Researchers have offered some explanations for the greater risk to women, although the science is at an early stage. Their ideas range from differences in the microstructure of the brain to the influence of hormones, coaching regimes, players’ level of experience and the management of injuries. © 2021 Springer Nature Limited

Keyword: Brain Injury/Concussion; Sexual Behavior
Link ID: 27932 - Posted: 08.04.2021

By Alistair Magowan BBC Sport Defenders are more likely to have dementia in later life compared with other playing positions in football, says new research. In 2019, a study by Professor Willie Stewart found that former footballers were about three and a half times more likely to die of neurodegenerative brain disease than the general population. But his new research says the risk is highest among defenders, who are five times more likely to have dementia than non-footballers. That compared with three times the risk for forwards, and almost no extra risk for goalkeepers compared with the population. Outfield players were four times more likely to have brain disease such as dementia. The research by the University of Glasgow, which was funded by the Football Association and players' union the Professional Footballers' Association, also found that risk increased the longer a player's football career was. And despite changes in football technology and head-injury management in recent years, there was no evidence that neurodegenerative disease risk changed for footballers in this study, whose careers spanned from about 1930 to the late 1990s. Study author and consultant neuropathologist Dr Stewart said that it was time for football to eliminate the risk of heading, which he said could also cause short-term impairment of brain function. "I think footballs should be sold with a health warning saying repeated heading in football may lead to increased risks of dementia," he said. "Unlike other dementias and degenerative diseases, where we have no idea what causes them, we know the risk factor [with football] and it's entirely preventable." © 2021 BBC.

Keyword: Brain Injury/Concussion
Link ID: 27931 - Posted: 08.04.2021

By Pam Belluck He has not been able to speak since 2003, when he was paralyzed at age 20 by a severe stroke after a terrible car crash. Now, in a scientific milestone, researchers have tapped into the speech areas of his brain — allowing him to produce comprehensible words and sentences simply by trying to say them. When the man, known by his nickname, Pancho, tries to speak, electrodes implanted in his brain transmit signals to a computer that displays his intended words on the screen. His first recognizable sentence, researchers said, was, “My family is outside.” The achievement, published on Wednesday in the New England Journal of Medicine, could eventually help many patients with conditions that steal their ability to talk. “This is farther than we’ve ever imagined we could go,” said Melanie Fried-Oken, a professor of neurology and pediatrics at Oregon Health & Science University, who was not involved in the project. Three years ago, when Pancho, now 38, agreed to work with neuroscience researchers, they were unsure if his brain had even retained the mechanisms for speech. “That part of his brain might have been dormant, and we just didn’t know if it would ever really wake up in order for him to speak again,” said Dr. Edward Chang, chairman of neurological surgery at University of California, San Francisco, who led the research. The team implanted a rectangular sheet of 128 electrodes, designed to detect signals from speech-related sensory and motor processes linked to the mouth, lips, jaw, tongue and larynx. In 50 sessions over 81 weeks, they connected the implant to a computer by a cable attached to a port in Pancho’s head, and asked him to try to say words from a list of 50 common ones he helped suggest, including “hungry,” “music” and “computer.” As he did, electrodes transmitted signals through a form of artificial intelligence that tried to recognize the intended words. © 2021 The New York Times Company
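The pipeline described here — 128-channel electrode signals in, one of 50 candidate words out — is at its core a classification problem. The article says only that signals passed through "a form of artificial intelligence"; purely as an illustration of the problem's shape, here is a toy nearest-centroid decoder on simulated data (the vocabulary subset, cluster structure, and all numbers below are hypothetical, not the study's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["hungry", "music", "computer", "family", "outside"]  # 5 of the 50 words
N_CHANNELS = 128  # matches the implant's 128 electrodes

# Simulated training data: each word gets a characteristic 128-channel
# activity pattern, and each speech attempt is that pattern plus noise.
patterns = rng.normal(size=(len(VOCAB), N_CHANNELS))
train = {w: patterns[i] + 0.1 * rng.normal(size=(20, N_CHANNELS))
         for i, w in enumerate(VOCAB)}

# Nearest-centroid decoding: average each word's training attempts, then
# label a new attempt by whichever centroid it lands closest to.
centroids = {w: x.mean(axis=0) for w, x in train.items()}

def decode(attempt):
    return min(centroids, key=lambda w: float(np.linalg.norm(attempt - centroids[w])))

new_attempt = patterns[1] + 0.1 * rng.normal(size=N_CHANNELS)  # a noisy "music"
```

The real system additionally exploited sentence context (word-to-word statistics) to correct classification errors, which a per-word decoder like this sketch cannot do.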

Keyword: Brain imaging; Language
Link ID: 27913 - Posted: 07.17.2021

By Melissa J. Coleman, Eric Fortune A fundamental feature of vocal communication is taking turns: when one person says something, the other person listens and then responds. Turn-taking requires precise coordination of the timing of signals between individuals. We have all found over the past year communicating over Zoom that disruptions of the timing of auditory cues—like those annoying delays caused by poor connections—make effective communication difficult and frustrating. How do the brains of two individuals synchronize their activity patterns for rapid turn-taking during vocal communication? We addressed this question in a recently published paper by studying turn-taking in a specialist, the plain-tailed wren (Pheugopedius euophrys), which sings precisely timed duets. Our findings demonstrate the ability to coordinate relies on sensory cues from one partner that temporarily inhibit vocalizations in the other. These birds sing duets in which females and males alternate their vocalizations, called syllables, so rapidly it sounds as if a single bird is singing. These wrens live in dense bamboo on the slopes of the Andes. To study the neural basis of duet singing, we flew to Ecuador where we loaded up a truck with equipment and drove to a remote field-site called the Yanayacu Biological Field Station and Center for Creative Studies. Much of our equipment required electricity, so we had to bring car batteries for backup and used a six-meter copper rod that we drove into the soft mountain earth for our electrical ground. Our “lab bench” was a door that we placed on two Pelican suitcases. First, we had to catch pairs of wrens, so we hacked through bamboo with machetes and set up mist nets. We then attracted pairs to the nets by playing the duets of wrens. To see how neurons responded during duets, we surgically implanted very small wires into a specific region of the brain, called HVC. Neurons in this region are responsible for producing the song—that is, they are premotor—and they also respond to auditory signals. To transmit the neural signals (i.e., action potentials) to a computer, a small wireless digital transmitter was then connected to the wires. We then had to wait for the birds to sing their remarkable duets. © 2021 Scientific American

Keyword: Animal Communication; Language
Link ID: 27908 - Posted: 07.14.2021