Chapter 15. Language and Lateralization


By Anthony Ham What is the meaning of a cat’s meow that grows louder and louder? Or your pet’s sudden flip from softly purring as you stroke its back to biting your hand? It turns out these misunderstood moments with your cat may be more common than not. A new study by French researchers, published last month in the journal Applied Animal Behaviour Science, found that people were significantly worse at reading the cues of an unhappy cat (nearly one third got it wrong) than those of a contented cat (closer to 10 percent). The study also suggested that a cat’s meows and other vocalizations are greatly misinterpreted and that people should consider both vocal and visual cues to try to determine what’s going on with their pets. The researchers drew these findings from the answers of 630 online participants; respondents were volunteers recruited through advertisements on social media. Each watched 24 videos of differing cat behaviors. One third depicted only vocal communication, another third just visual cues, and the remainder involved both. “Some studies have focused on how humans understand cat vocalizations,” said Charlotte de Mouzon, lead author of the study and a cat behavior expert at the Université Paris Nanterre. “Other studies studied how people understand cats’ visual cues. But studying both has never before been studied in human-cat communication.” Cats display a wide range of visual signals: tails swishing side to side, or raised high in the air; rubbing and curling around our legs; crouching; flattening ears or widening eyes. Their vocals can range from seductive to threatening: meowing, purring, growling, hissing and caterwauling. At last count, kittens were known to use nine different forms of vocalization, while adult cats uttered 16. That we could better understand what a cat wants by using visual and vocal cues may seem obvious. But we know far less than we think we do. © 2024 The New York Times Company

Keyword: Animal Communication; Evolution
Link ID: 29169 - Posted: 02.29.2024

Rob Stein Benjamin Franklin famously wrote: "In this world nothing can be said to be certain, except death and taxes." While that may still be true, there's a controversy simmering today about one of the ways doctors declare people to be dead. The debate is focused on the Uniform Determination of Death Act, a law that was adopted by most states in the 1980s. The law says that death can be declared if someone has experienced "irreversible cessation of all functions of the entire brain." But some parts of the brain can continue to function in people who have been declared brain dead, prompting calls to revise the statute. Many experts say the discrepancy needs to be resolved to protect patients and their families, maintain public trust and reconcile what some see as a troubling disconnect between the law and medical practice. The debate became so contentious, however, that the Uniform Law Commission, the group charged with rewriting model laws for states, paused its process last summer because participants couldn't reach a consensus. "I'm worried," says Thaddeus Pope, a bioethicist and lawyer at Mitchell Hamline School of Law in St. Paul, Minnesota. "There's a lot of conflict at the bedside over this at hospitals across the United States. Let's get in front of it and fix it before it becomes a crisis. It's such an important question that everyone needs to be on the same page." The law recognizes two ways of declaring death: the irreversible cessation of circulatory and respiratory functions, and brain death. The second method, brain death, can be declared for people who have sustained catastrophic brain injury causing the permanent cessation of all brain function, such as from a massive traumatic brain injury or massive stroke, but whose hearts are still pumping through the use of ventilators or other artificial forms of life support. © 2024 npr

Keyword: Brain Injury/Concussion; Brain imaging
Link ID: 29147 - Posted: 02.13.2024

By Fletcher Reveley One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen: “I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.” The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind. For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT-1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. Tang compared the decoded snippet to the original story: “Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.” The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.”

Keyword: Brain imaging; Language
Link ID: 29073 - Posted: 01.03.2024

By Gary Stix This year was full of roiling debate and speculation about the prospect of machines with superhuman capabilities that might, sooner than expected, leave the human brain in the dust. The emergence of ChatGPT and other so-called large language models (LLMs) dramatically expanded public awareness of artificial intelligence. In tandem, it raised the question of whether the human brain can keep up with the relentless pace of AI advances. The most benevolent answer posits that humans and machines need not be cutthroat competitors. Researchers found one example of potential cooperation by getting AI to probe the infinite complexity of the ancient game of Go—which, like chess, has seen a computer topple the highest-level human players. A study published in March showed how people might learn from machines with such superhuman skills. And understanding ChatGPT’s prodigious abilities offers some inkling as to why an equivalence between the deep neural networks that underlie the famed chatbot and the trillions of connections in the human brain is constantly invoked. Importantly, the machine learning incorporated into AI has not totally distracted mainstream neuroscience from avidly pursuing better insights into what has been called “the most complicated object in the known universe”: the brain. One of the grand challenges in science—understanding the nature of consciousness—received its due in June with the prominent showcasing of experiments that tested the validity of two competing theories, both of which purport to explain the underpinnings of the conscious self. The past 12 months provided lots of examples of impressive advances for you to store in your working memory. Now here’s a closer look at some of the standout mind and brain stories we covered in Scientific American in 2023. © 2023 SCIENTIFIC AMERICAN

Keyword: Brain imaging; Consciousness
Link ID: 29069 - Posted: 12.31.2023

By Esther Landhuis When Frank Lin was in junior high, his grandma started wearing hearing aids. During dinner conversations, she was often painfully silent, and communicating by phone was nearly impossible. As a kid, Lin imagined “what her life would be like if she wasn’t always struggling to communicate.” It was around that time that Lin became interested in otolaryngology, the study of the ears, nose, and throat. He would go on to study to be an ENT physician, which, he hoped, could equip him to help patients with similar age-related hardships. Those aspirations sharpened during his residency at Johns Hopkins University School of Medicine in the late 2000s. Administering hearing tests in the clinic, Lin noticed that his colleagues had vastly different reactions to the same results in young versus old patients. If mild deficits showed up in a kid, “it would be like, ‘Oh, that hearing is critically important,’” said Lin, who today is the director of the Cochlear Center for Hearing and Public Health at Hopkins. But when they saw that same mild to moderate hearing loss in a 70-something patient, many would downplay the findings. Yet today, research increasingly suggests that untreated hearing loss puts people at higher risk for cognitive decline and dementia. And, unlike during Lin’s early training, many patients can now do something about it: They can assess their own hearing using online tests or mobile phone apps, and purchase over-the-counter hearing aids, which are generally more affordable than their predecessors and came under regulation by the Food and Drug Administration in October 2022. Despite this expanded accessibility, interest in direct-to-consumer hearing devices has lagged thus far — in part, experts suggest, due to physician inattention to adult hearing health, inadequate insurance coverage for hearing aids, and lingering stigma around the issue. (As Lin put it: “There’s always been this notion that everyone has it as you get older, how can it be important?”) Even now, hearing tests aren’t necessarily recommended for individuals unless they report a problem.

Keyword: Hearing; Alzheimers
Link ID: 29064 - Posted: 12.27.2023

By Cathleen O’Grady Why do some children learn to talk earlier than others? Linguists have pointed to everything from socioeconomic status to gender to the number of languages their parents speak. But a new study finds a simpler explanation. An analysis of nearly 40,000 hours of audio recordings from children around the world suggests kids speak more when the adults around them are more talkative, which may also give them a larger vocabulary early in life. Factors such as social class appear to make no difference, researchers report this month in the Proceedings of the National Academy of Sciences. The paper is a “wonderful, impactful, and much needed contribution to the literature,” says Ece Demir-Lira, a developmental scientist at the University of Iowa who was not involved in the work. By looking at real-life language samples from six different continents, she says, the study provides a global view of language development sorely lacking from the literature. Most studies on language learning have focused on children in Western, industrialized nations. To build a more representative data set, Harvard University developmental psychologist Elika Bergelson and her collaborators scoured the literature for studies that had used LENA devices: small audio recorders that babies can wear—tucked into a pocket on a specially made vest—for days at a time. These devices function as a kind of “talk pedometer,” with an algorithm that estimates how much its wearer speaks, as well as how much language they hear in their environment—from parents, other adults, and even siblings. The team asked 18 research groups across 12 countries whether they would share their data from the devices, leaving them with a whopping 2865 days of recordings from 1001 children. Many of the kids, who ranged from 2 months to 4 years old, were from English-speaking families, but the data also included speakers of Dutch, Spanish, Vietnamese, and Finnish, as well as Yélî Dnye (Papua New Guinea), Wolof (Senegal), and Tsimané (Bolivia). Combining these smaller data sets gave the researchers a more powerful, diverse sample.

Keyword: Language; Development of the Brain
Link ID: 29061 - Posted: 12.22.2023

A new study shows male zebra finches must sing every day to keep their vocal muscles in shape. Females prefer the songs of males that did their daily vocal workout.
ARI SHAPIRO, HOST: Why do songbirds sing so much? Well, a new study suggests they have to, to stay in shape. Here's NPR's Ari Daniel.
ARI DANIEL, BYLINE: A few years ago, I was out at dawn in South Carolina low country, a mix of swamp and trees draped in Spanish moss.
(SOUNDBITE OF BIRDS CHIRPING)
DANIEL: The sound of birdsong filled the air. It's the same in lots of places. Once the light of day switches on, songbirds launch their serenade.
IRIS ADAM: I mean, why birds sing is relatively well-answered.
DANIEL: Iris Adam is a behavioral neuroscientist at the University of Southern Denmark.
ADAM: For many songbirds, males sing to impress a female and attract them as mate. And also, birds sing to defend their territory.
DANIEL: But Adam says these reasons don't explain why songbirds sing so darn much.
ADAM: There's an insane drive to sing.
DANIEL: For some, it's hours every day. That's a lot of energy. Plus, singing can be dangerous.
ADAM: As soon as you sing, you reveal yourself - like, where you are, that you even exist, where your territory is. All of that immediately is out in the open for predators, for everybody.
DANIEL: Why take that risk? Adam wondered whether the answer might lie in the muscles that produce birdsong and if those muscles require regular exercise. So she designed a series of experiments on zebra finches, little Australian songbirds with striped heads and a bloom of orange on their cheeks. One of Adam's first experiments involved taking males and severing the connection between their brains and their singing muscles.
ADAM: Already after two days, they had lost some of their performance. And after three weeks, they were back to the same level when they were juveniles and never had sung before.
DANIEL: Next, she left the finches intact but prevented them from singing for a week by keeping them in the dark almost around the clock.
ADAM: The first two or three days, it's quite easy. But the longer the experiment goes, the more they are like, I need to sing. And so then you need to tell them, like, stop. You can't sing.
DANIEL: After a week, the birds' singing muscles lost half their strength. But does that impact what the resulting song sounds like? Here's a male before the seven days of darkness. © 2023 npr

Keyword: Animal Communication; Language
Link ID: 29042 - Posted: 12.13.2023

By Carl Zimmer Traumatic brain injuries have left more than five million Americans permanently disabled. They have trouble focusing on even simple tasks and often have to quit jobs or drop out of school. A study published on Monday has offered them a glimpse of hope. Five people with moderate to severe brain injuries had electrodes implanted in their heads. As the electrodes stimulated their brains, their performance on cognitive tests improved. If the results hold up in larger clinical trials, the implants could become the first effective therapy for chronic brain injuries, the researchers said. “This is the first evidence that you can move the dial for this problem,” said Dr. Nicholas Schiff, a neurologist at Weill Cornell Medicine in New York who led the study. Gina Arata, one of the volunteers who received the implant, was 22 when a car crash left her with fatigue, memory problems and uncontrollable emotions. She abandoned her plans for law school and lived with her parents in Modesto, Calif., unable to keep down a job. In 2018, 18 years after the crash, Ms. Arata received the implant. Her life has changed profoundly, she said. “I can be a normal human being and have a conversation,” she said. “It’s kind of amazing how I’ve seen myself improve.” Dr. Schiff and his colleagues designed the trial based on years of research on the structure of the brain. Those studies suggested that our ability to focus on tasks depends on a network of brain regions that are linked to each other by long branches of neurons. The regions send signals to each other, creating a feedback loop that keeps the whole network active. Sudden jostling of the brain — in a car crash or a fall, for example — can break some of the long-distance connections in the network and lead people to fall into a coma, Dr. Schiff and his colleagues have hypothesized. During recovery, the network may be able to power itself back up. But if the brain is severely damaged, it may not fully rebound. Dr. Schiff and his colleagues pinpointed a structure deep inside the brain as a crucial hub in the network. Known as the central lateral nucleus, it is a thin sheet of neurons about the size and shape of an almond shell. © 2023 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 29033 - Posted: 12.06.2023

Anil Oza Researchers have long known that areas of songbird brains that are responsible for singing grow during mating season and then shrink when the season is over. But one species, Gambel’s white-crowned sparrow (Zonotrichia leucophrys gambelii), does this on a scale that scientists are struggling to understand. A part of the male sparrow’s brain called the HVC grows from around 100,000 neurons to about 170,000 — nearly doubling in size — during the bird’s mating season. Although how the bird pulls off this feat is still a mystery, scientists who presented data at the annual Society for Neuroscience meeting in Washington DC on 11–15 November are closing in on answers. They hope their findings might one day point to ways of treating anomalies in the human brain. In most animals, when a brain region grows and shrinks, “frequently, it’s pretty detrimental on behaviour and function of the brain”, says Tracy Larson, a neuroscientist at the University of Virginia in Charlottesville who led the work. In particular, growth on this scale in mammals would cause inflammation and increase the pressure inside their skulls. But when it comes to the sparrows, “there’s something really fascinating about these birds that they can manage to do this and not have detrimental impacts”, Larson adds. Larson’s research has so far hinted that the sparrow’s brain is using a slew of tactics to quickly form and then kill a large number of neurons. One question that Larson wanted to answer is how the sparrow’s brain shrinks dramatically at the end of mating season. So she and her colleagues tagged cells in and around the HVCs of male sparrows with a molecule called bromodeoxyuridine (BrdU), which can become incorporated into the DNA of dividing cells. They also used hormone supplements to simulate breeding season in the birds. © 2023 Springer Nature Limited

Keyword: Sexual Behavior; Hormones & Behavior
Link ID: 29029 - Posted: 12.02.2023

By Claudia López Lloreda Genetic tweaks in kingfishers might help cushion the blow when the diving birds plunge beak first into the water to catch fish. Analysis of the genetic instruction book of some diving kingfishers identified changes in genes related to brain function as well as retina and blood vessel development, which might protect against damage during dives, researchers report October 24 in Communications Biology. The results suggest the different species of diving kingfishers may have adapted to survive their dives unscathed in some of the same ways, but it’s still unclear how the genetic changes protect the birds. Hitting speeds of up to 40 kilometers per hour, kingfisher dives put huge amounts of potentially damaging pressure on the birds’ heads, beaks and brains. The birds dive repeatedly, smacking their heads into the water in ways that could cause concussions in humans, says Shannon Hackett, an evolutionary biologist and curator at the Field Museum in Chicago. “So there has to be something that protects them from the terrible consequences of repeatedly hitting their heads against a hard substrate.” Hackett first became interested in how the birds protect their brains while she worked with her son’s hockey team and started worrying about the effect of repeated hits on the human brain. Around the same time, evolutionary biologist Chad Eliason joined the museum to study kingfishers and their plunge diving behavior. In the new study, Hackett, Eliason and colleagues analyzed the complete genome of 30 kingfisher species, some that plunge dive and others that don’t, from specimens frozen and stored at the museum. The preserved birds came from all over the world; some of the diving species came from mainland areas and others from islands and had evolved to dive independently rather than from the same plunge-diving ancestor. The team wanted to know if the different diving species had evolved similar genetic changes to arrive at the same behaviors. Many kingfisher species have developed this behavior, but it was unclear whether this was through genetic convergence, similar to how many species of birds have lost their flight or how bats and dolphins independently developed echolocation (SN: 9/6/2013). © Society for Science & the Public 2000–2023.

Keyword: Brain Injury/Concussion; Evolution
Link ID: 28991 - Posted: 11.08.2023

By Hallie Levine Every 40 seconds, someone in the United States has a stroke, and about three-quarters occur in people ages 65 and older. “As people age, their arteries have a tendency to become less flexible,” and clogged arteries are more likely, says Doris Chan, an interventional cardiologist at NYU Langone Health. This hikes the risk of an ischemic stroke — the most common type — when a blood vessel to the brain becomes blocked by a blood clot. But about 80 percent of all strokes are preventable, according to the Centers for Disease Control and Prevention. And the lifestyle steps you take can be especially powerful in fending off stroke. Here’s what you can do to reduce your risk. 1. Watch these issues. Keeping certain conditions at bay or managing them properly can cut the likelihood of a stroke. Take high blood pressure, which some research suggests is responsible for almost half of strokes. A heart-healthy eating plan may help control it. Also, try to limit sodium to less than 1,500 milligrams a day, maintain a healthy weight and exercise regularly, says Sahil Khera, an interventional cardiologist at the Mount Sinai Hospital in New York. If your blood pressure is high even with the above measures, ask your doctor what levels you should strive for and whether meds are appropriate. Staying out of the hypertensive range can be challenging with age because of the higher potential for medication side effects. While blood pressure below 120/80 can reduce cardiovascular risk, that target should be adjusted if side effects such as dizziness occur, says Hardik Amin, an associate professor of neurology at the Yale School of Medicine in New Haven, Conn. Another important condition to watch for is atrial fibrillation (AFib), an irregular and often rapid heartbeat, which affects at least 10 percent of people over age 80, according to a 2022 study in the Journal of the American College of Cardiology. People with AFib are about five times as likely to have a stroke.

Keyword: Stroke; Drug Abuse
Link ID: 28974 - Posted: 10.28.2023

By Liz Fuller-Wright The latest exploration of music in the natural world is taking place in Mala Murthy’s lab at the Princeton Neuroscience Institute, where Murthy and her research group have used neural imaging, optogenetics, motion capture, modeling and artificial intelligence to pinpoint precisely where and how a fruit fly’s brain toggles between its standard solo and its mating serenade. Their research appears in the current issue of the journal Nature. “For me it is very rewarding that, in a team of exceptional scientists coming from different backgrounds, we joined forces and methodologies to figure out the key characteristics of a neural circuit that can explain a complex behavior — the patterning of courtship song,” said Frederic Römschied, first author on this paper and a former postdoctoral fellow in Murthy’s lab. He is now a group leader at the European Neuroscience Institute in Göttingen, Germany. “It might be a surprise to discover that the fruit flies buzzing around your banana can sing, but it’s more than music, it’s communication,” said Murthy, the Karol and Marnie Marcin ’96 Professor and the director of the Princeton Neuroscience Institute. “It’s a conversation, with a back and forth. He sings, and she slows down, and she turns, and then he sings more. He’s constantly assessing her behavior to decide exactly how to sing. They’re exchanging information in this way. Unlike a songbird, belting out his song from his perch, he tunes everything into what she’s doing. It’s a dialogue.” By studying how these tiny brains work, researchers hope to develop insights that will prove useful in the larger and more complex brains that are millions of times harder to study. In particular, Murthy’s team is trying to determine how the brain decides what behavior is appropriate in which context. © 2023 The Trustees of Princeton University

Keyword: Animal Communication; Sexual Behavior
Link ID: 28959 - Posted: 10.14.2023

By Benjamin Mueller Once their scalpels reach the edge of a brain tumor, surgeons are faced with an agonizing decision: cut away some healthy brain tissue to ensure the entire tumor is removed, or give the healthy tissue a wide berth and risk leaving some of the menacing cells behind. Now scientists in the Netherlands report using artificial intelligence to arm surgeons with knowledge about the tumor that may help them make that choice. The method, described in a study published on Wednesday in the journal Nature, involves a computer scanning segments of a tumor’s DNA and alighting on certain chemical modifications that can yield a detailed diagnosis of the type and even subtype of the brain tumor. That diagnosis, generated during the early stages of an hourslong surgery, can help surgeons decide how aggressively to operate, the researchers said. In the future, the method may also help steer doctors toward treatments tailored for a specific subtype of tumor. “It’s imperative that the tumor subtype is known at the time of surgery,” said Jeroen de Ridder, an associate professor in the Center for Molecular Medicine at UMC Utrecht, a Dutch hospital, who helped lead the study. “What we have now uniquely enabled is to allow this very fine-grained, robust, detailed diagnosis to be performed already during the surgery.” © 2023 The New York Times Company

Keyword: Robotics; Intelligence
Link ID: 28958 - Posted: 10.12.2023

By Sonia Shah Can a mouse learn a new song? Such a question might seem whimsical. Though humans have lived alongside mice for at least 15,000 years, few of us have ever heard mice sing, because they do so in frequencies beyond the range detectable by human hearing. As pups, their high-pitched songs alert their mothers to their whereabouts; as adults, they sing in ultrasound to woo one another. For decades, researchers considered mouse songs instinctual, the fixed tunes of a windup music box, rather than the mutable expressions of individual minds. But no one had tested whether that was really true. In 2012, a team of neurobiologists at Duke University, led by Erich Jarvis, a neuroscientist who studies vocal learning, designed an experiment to find out. The team surgically deafened five mice and recorded their songs in a mouse-size sound studio, tricked out with infrared cameras and microphones. They then compared sonograms of the songs of deafened mice with those of hearing mice. If the mouse songs were innate, as long presumed, the surgical alteration would make no difference at all. Jarvis and his researchers slowed down the tempo and shifted the pitch of the recordings, so that they could hear the songs with their own ears. Those of the intact mice sounded “remarkably similar to some bird songs,” Jarvis wrote in a 2013 paper that described the experiment, with whistlelike syllables similar to those in the songs of canaries and the trills of dolphins. Not so the songs of the deafened mice: Deprived of auditory feedback, their songs became degraded, rendering them nearly unrecognizable. They sounded, the scientists noted, like “squawks and screams.” Not only did the tunes of a mouse depend on its ability to hear itself and others, but also, as the team found in another experiment, a male mouse could alter the pitch of its song to compete with other male mice for female attention. Inside these murine skills lay clues to a puzzle many have called “the hardest problem in science”: the origins of language. In humans, “vocal learning” is understood as a skill critical to spoken language. Researchers had already discovered the capacity for vocal learning in species other than humans, including in songbirds, hummingbirds, parrots, cetaceans such as dolphins and whales, pinnipeds such as seals, elephants and bats. But given the centuries-old idea that a deep chasm separated human language from animal communications, most scientists understood the vocal learning abilities of other species as unrelated to our own — as evolutionarily divergent as the wing of a bat is to that of a bee. The apparent absence of intermediate forms of language — say, a talking animal — left the question of how language evolved resistant to empirical inquiry. © 2023 The New York Times Company

Keyword: Language; Animal Communication
Link ID: 28921 - Posted: 09.21.2023

By Gina Kolata Tucker Marr’s life changed forever last October. He was on his way to a wedding reception when he fell down a steep flight of metal stairs, banging the right side of his head so hard he went into a coma. He’d fractured his skull, and a large blood clot formed on the left side of his head. Surgeons had to remove a large chunk of his skull to relieve pressure on his brain and to remove the clot. “Getting a piece of my skull taken out was crazy to me,” Mr. Marr said. “I almost felt like I’d lost a piece of me.” But what seemed even crazier to him was the way that piece was restored. Mr. Marr, a 27-year-old analyst at Deloitte, became part of a new development in neurosurgery. Instead of remaining without a piece of skull or getting the old bone put back, a procedure that is expensive and has a high rate of infection, he got a prosthetic piece of skull made with a 3-D printer. But it is not the typical prosthesis used in such cases. His prosthesis, which is covered by his skin, is embedded with an acrylic window that would let doctors peer into his brain with ultrasound. A few medical centers are offering such acrylic windows to patients who had to have a piece of skull removed to treat conditions like a brain injury, a tumor, a brain bleed or hydrocephalus. “It’s very cool,” Dr. Michael Lev, director of emergency radiology at Massachusetts General Hospital, said. But, “it is still early days,” he added. Advocates of the technique say that if a patient with such a window has a headache or a seizure or needs a scan to see if a tumor is growing, a doctor can slide an ultrasound probe on the patient’s head and look at the brain in the office. © 2023 The New York Times Company

Keyword: Brain imaging; Brain Injury/Concussion
Link ID: 28914 - Posted: 09.16.2023

By Darren Incorvaia By now, it’s no secret that the phrase “bird brain” should be a compliment, not an insult. Some of our feathered friends are capable of complex cognitive tasks, including tool use (SN: 2/10/23). Among the brainiest feats that birds are capable of is vocal learning, or the ability to learn to mimic sounds and use them to communicate. In birds, this leads to beautiful calls and songs; in humans, it leads to language. The best avian vocal learners, such as crows and parrots, also tend to be considered the most intelligent birds. So it’s natural to think that the two traits could be linked. But studies with smart birds have found conflicting evidence. Although vocal learning may be linked with greater cognitive capacity in some species, the opposite relationship seems to hold true in others. Now, a massive analysis of 214 birds from 23 species shows that there is indeed a link between vocal learning and at least one advanced cognitive ability — problem-solving. The study, described in the Sept. 15 Science, is the first to analyze multiple bird species instead of just one. More than 200 birds from 23 species were given different cognitive tests to gauge their intelligence. One of the problem-solving tasks asked birds to pull a cork lid off a glass flask to access a tasty treat. Comparing these tests with birds’ ability to learn songs and calls showed that the better vocal learners are also better at problem-solving. To compare species, biologist Jean-Nicolas Audet of the Rockefeller University in New York City and colleagues had to devise a way to assess all the birds’ vocal learning and cognitive abilities. © Society for Science & the Public 2000–2023.

Keyword: Intelligence; Evolution
Link ID: 28912 - Posted: 09.16.2023

By Jori Lewis The squat abandoned concrete structure may have been a water tower when this tract of land in the grasslands of Mozambique was a cotton factory. Now it served an entirely different purpose: Housing a bat colony. To climb through the building’s low opening, bat researcher Césaria Huó and I had to battle a swarm of biting tsetse flies and clear away a layer of leaves and vines. My eyes quickly adjusted to the low light, but my nose, even behind a mask, couldn’t adjust to the smell of hundreds of bats and layers of bat guano—a fetid reek of urea with fishy, spicy overtones. But Huó had a different reaction. “I don’t mind the smell now,” she said. After several months of monitoring bat colonies in the Gorongosa National Park area as a master’s student in the park’s conservation biology program, Huó said she almost likes it. “Now, when I smell it, I know there are bats here.” Since we arrived at the tower during the daylight hours, I had expected the nocturnal mammals to be asleep. Instead, they were shaking their wings, flying from one wall or spot on the ceiling to another, swooping sometimes a bit too close to me for my comfort. But the bats didn’t care about me; they were cruising for mates. It was mating season, and we had lucked out to see their mating performances. Huó pointed out that some females were inspecting the males, checking out their wing flapping prowess. But Huó and her adviser, the polymath entomologist Piotr Naskrecki, did not bring me to this colony to view the bats’ seductive dances and their feats of flight, since those behaviors are already known to scientists. We were here to decipher what the bats were saying while doing them. Huó and Naskrecki had set up cameras and audio recorders the night before to learn more about these bats and try to understand the nature of the calls they use, listening for signs of meaning. © 2023 NautilusNext Inc., All rights reserved.

Keyword: Animal Communication; Evolution
Link ID: 28895 - Posted: 09.07.2023

By R. Douglas Fields One day, while threading a needle to sew a button, I noticed that my tongue was sticking out. The same thing happened later, as I carefully cut out a photograph. Then another day, as I perched precariously on a ladder painting the window frame of my house, there it was again! What’s going on here? I’m not deliberately protruding my tongue when I do these things, so why does it keep making appearances? After all, it’s not as if that versatile lingual muscle has anything to do with controlling my hands. Right? Yet as I would learn, our tongue and hand movements are intimately interrelated at an unconscious level. This peculiar interaction’s deep evolutionary roots even help explain how our brain can function without conscious effort. A common explanation for why we stick out our tongue when we perform precision hand movements is something called motor overflow. In theory, it can take so much cognitive effort to thread a needle (or perform other demanding fine motor skills) that our brain circuits get swamped and impinge on adjacent circuits, activating them inappropriately. It’s certainly true that motor overflow can happen after neural injury or in early childhood when we are learning to control our bodies. But I have too much respect for our brains to buy that “limited brain bandwidth” explanation. How, then, does this peculiar hand-mouth cross-talk really occur? Tracing the neural anatomy of tongue and hand control to pinpoint where a short circuit might happen, we find first of all that the two are controlled by completely different nerves. This makes sense: A person who suffers a spinal cord injury that paralyzes their hands does not lose their ability to speak. That’s because the tongue is controlled by a cranial nerve, but the hands are controlled by spinal nerves. Simons Foundation

Keyword: Language; Emotions
Link ID: 28894 - Posted: 08.30.2023

In a study of 152 deceased athletes less than 30 years old who were exposed to repeated head injury through contact sports, brain examination demonstrated that 63 (41%) had chronic traumatic encephalopathy (CTE), a degenerative brain disorder associated with exposure to head trauma. Neuropsychological symptoms were severe in both those with and without evidence of CTE. Suicide was the most common cause of death in both groups, followed by unintentional overdose. Among the brain donors found to have CTE, 71% had played contact sports at a non-professional level (youth, high school, or college competition). Common sports included American football, ice hockey, soccer, rugby, and wrestling. The study, published in JAMA Neurology, confirms that CTE can occur even in young athletes exposed to repetitive head impacts. The research was supported in part by the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health. Because CTE cannot be definitively diagnosed in individuals while living, it is unknown how commonly CTE occurs in such athletes. As in all brain bank studies, donors differ from the general population and no estimates of prevalence can be concluded from this research. Most of the study donors were white, male football players with cognitive, behavioral, and/or mood symptoms. Their families desired neuropathologic examination after their loved one’s early death and donated to the Understanding Neurologic Injury and Traumatic Encephalopathy (UNITE) Brain Bank. There were no differences in cause of death or clinical symptoms between those with CTE and those without.

Keyword: Brain Injury/Concussion
Link ID: 28889 - Posted: 08.30.2023

By Pam Belluck At Ann Johnson’s wedding reception 20 years ago, her gift for speech was vividly evident. In an ebullient 15-minute toast, she joked that she had run down the aisle, wondered if the ceremony program should have said “flutist” or “flautist” and acknowledged that she was “hogging the mic.” Just two years later, Mrs. Johnson — then a 30-year-old teacher, volleyball coach and mother of an infant — had a cataclysmic stroke that paralyzed her and left her unable to talk. On Wednesday, scientists reported a remarkable advance toward helping her, and other patients, speak again. In a milestone of neuroscience and artificial intelligence, implanted electrodes decoded Mrs. Johnson’s brain signals as she silently tried to say sentences. Technology converted her brain signals into written and vocalized language, and enabled an avatar on a computer screen to speak the words and display smiles, pursed lips and other expressions. The research, published in the journal Nature, demonstrates the first time spoken words and facial expressions have been directly synthesized from brain signals, experts say. Mrs. Johnson chose the avatar, a face resembling hers, and researchers used her wedding toast to develop the avatar’s voice. “We’re just trying to restore who people are,” said the team’s leader, Dr. Edward Chang, the chairman of neurological surgery at the University of California, San Francisco. “It let me feel like I was a whole person again,” Mrs. Johnson, now 48, wrote to me. The goal is to help people who cannot speak because of strokes or conditions like cerebral palsy and amyotrophic lateral sclerosis. To work, Mrs. Johnson’s implant must be connected by cable from her head to a computer, but her team and others are developing wireless versions. Eventually, researchers hope, people who have lost speech may converse in real time through computerized pictures of themselves that convey tone, inflection and emotions like joy and anger. “What’s quite exciting is that just from the surface of the brain, the investigators were able to get out pretty good information about these different features of communication,” said Dr. Parag Patil, a neurosurgeon and biomedical engineer at the University of Michigan, who was asked by Nature to review the study before publication. © 2023 The New York Times Company

Keyword: Stroke; Robotics
Link ID: 28882 - Posted: 08.24.2023