Chapter 19. Language and Lateralization


By Will Hobson In 2017, Bennet Omalu traveled the globe to accept a series of honors and promote his autobiography, “Truth Doesn’t Have A Side.” In a visit to an Irish medical school, he told students he was a “nobody” who “discovered a disease in America’s most popular sport.” In an appearance on a religious cable TV show, he said he named the disease chronic traumatic encephalopathy, or CTE, because “it sounded intellectually sophisticated, with a very good acronym.” And since his discovery, Omalu told Sports Illustrated, researchers have uncovered evidence that shows adolescents who participate in football, hockey, wrestling and mixed martial arts are more likely to drop out of school, become addicted to drugs, struggle with mental illness, commit violent crimes and kill themselves. A Nigerian American pathologist portrayed by Will Smith in the 2015 film, “Concussion,” Omalu is partly responsible for the most important sports story of the 21st century. Since 2005, when Omalu first reported finding widespread brain damage in a former NFL player, concerns about CTE have inspired a global revolution in concussion safety and fueled an ongoing existential crisis for America’s most popular sport. Omalu’s discovery — initially ignored and then attacked by NFL-allied doctors — inspired an avalanche of scientific research that forced the league to acknowledge a link between football and brain disease. Nearly 15 years later, Omalu has withdrawn from the CTE research community and remade himself as an evangelist, traveling the world selling his frightening version of what scientists know about CTE and contact sports. In paid speaking engagements, expert witness testimony and in several books he has authored, Omalu portrays CTE as an epidemic and himself as a crusader, fighting against not just the NFL but also the medical science community, which he claims is too corrupted to acknowledge clear-cut evidence that contact sports destroy lives.

Keyword: Brain Injury/Concussion
Link ID: 26986 - Posted: 01.23.2020

Hannah Devlin Science correspondent The death in 2002 of the former England and West Bromwich Albion striker Jeff Astle from degenerative brain disease placed the spotlight firmly on the possibility of a link between heading footballs and the risk of dementia. The coroner at the inquest ruled that Astle, 59, died from an “industrial disease” brought on by the repeated trauma of headers, and a later examination of Astle’s brain appeared to bear out this conclusion. At that time there was sparse scientific data on the issue, but since then the balance of evidence has steadily tipped further in favour of a link. It has been shown that even single episodes of concussion can have lifelong consequences. A 2016 study based on health records of more than 100,000 people in Sweden found that after a single diagnosed concussion people were more likely to have mental health problems and less likely to graduate from high school and college. Other research has shown that people who are in prison or homeless are more likely to have had a past experience of concussion. In 2017, researchers from University College London examined postmortem the brains of six former footballers who had developed dementia. They found signs of brain injury called chronic traumatic encephalopathy (CTE) in four cases. Last year a study by a team at Glasgow University found that former professional footballers were three and a half times more likely to die from dementia and other serious neurological diseases. The study was the largest ever, based on the health records of 7,676 ex-players and 23,000 members of the public, and was possibly the trigger for the Scottish FA’s plan to follow US soccer in banning heading the ball for young players. © 2020 Guardian News & Media Limited

Keyword: Brain Injury/Concussion
Link ID: 26969 - Posted: 01.17.2020

There are differences in the way English and Italian speakers are affected by dementia-related language problems, a small study suggests. While English speakers had trouble pronouncing words, Italian speakers came out with shorter, simpler sentences. The findings could help ensure accurate diagnoses for people from different cultures, the researchers said. Diagnostic criteria are often based on English-speaking patients. In the University of California study of 20 English-speaking patients and 18 Italian-speaking patients, all had primary progressive aphasia - a neurodegenerative disease which affects areas of the brain linked to language. It is a feature of Alzheimer's disease and other dementia disorders. Brain scans and tests showed similar levels of cognitive function in people in both language groups. But when the researchers asked participants to complete a number of linguistic tests, they picked up obvious differences between the two groups in the challenges they faced. "We think this is specifically because the consonant clusters that are so common in English pose a challenge for a degenerating speech-planning system," said study author Maria Luisa Gorno-Tempini, professor of neurology and psychiatry. "In contrast, Italian is easier to pronounce, but has much more complex grammar, and this is how Italian speakers with [primary progressive aphasia] tend to run into trouble." As a result, the English speakers tended to speak less while the Italian speakers had fewer pronunciation problems, but simplified what they did say. English is a Germanic language while Italian is a Romance language, derived from Latin along with French, Spanish and Portuguese. The researchers, writing in Neurology, are concerned that many non-native English speakers may not be getting the right diagnosis "because their symptoms don't match what is described in clinical manuals based on studies of native English speakers". The San Francisco research team says it now wants to repeat the research in larger groups of patients, and look for differences between speakers of other languages, such as Chinese and Arabic. © 2020 BBC

Keyword: Alzheimers; Language
Link ID: 26954 - Posted: 01.13.2020

By Kelly Servick The dark, thumping cavern of an MRI scanner can be a lonely place. How can scientists interested in the neural activity underlying social interactions capture an engaged, conversing brain while its owner is so isolated? Two research teams are advancing a curious solution: squeezing two people into one scanner. One such MRI setup is under development with new funding from the U.S. National Science Foundation (NSF), and another has undergone initial testing described in a preprint last month. These designs have yet to prove that their scientific payoff justifies their cost and complexity, plus the requirement that two people endure a constricted almost-hug, in some cases for 1 hour or more. But the two groups hope to open up new ways to study how brains exchange subtle social and emotional cues bound up in facial expressions, eye contact, and physical touch. The tool could “greatly expand the range of investigations possible,” says Winrich Freiwald, a neuroscientist at Rockefeller University. “This is really exciting.” Functional magnetic resonance imaging (fMRI), which measures blood oxygenation to estimate neural activity, is already a common tool for studying social processes. But compared with real social interaction, these experiments are “reduced and artificial,” says Lauri Nummenmaa, a neuroscientist at the University of Turku in Finland. Participants often look at static photos of faces or listen to recordings of speech while lying in a scanner. But photos can’t show the subtle flow of emotions across people’s faces, and recordings don’t allow the give and take of real conversation. © 2019 American Association for the Advancement of Science

Keyword: Brain imaging
Link ID: 26949 - Posted: 01.10.2020

By Lisa Sanders, M.D. The 67-year-old woman had just flown back to her old hometown, Eugene, Ore., to pick up one more load of boxes to move them to her new hometown, Homer, Alaska. As usual, the shuttle to long-term parking was nowhere in sight, so she pulled out the handles of her bags and wheeled them down the now-familiar airport road. It was a long walk — maybe half a mile — but it was a beautiful afternoon for it. A lone woman walking down this rarely used road in the airport caught the attention of Diana Chappell, an off-duty emergency medical technician, on her way to catch her own flight. She watched as the woman approached a building where some airport E.M.T.s were stationed. Suddenly the woman stopped. She rose to her toes and turned gracefully, then toppled over like a felled tree and just lay there. Chappell jumped out of the car and ran to the woman. She was awake but couldn’t sit up. Chappell helped her move to the side of the road and took a quick visual survey. The woman had a scrape over her left eye where her glasses had smashed into her face. Her left knee was bleeding, and her left wrist was swelling. She’d dropped the handle of one of her rolling bags, the woman explained. When she tried to pick it up, she fell. But she felt fine now. As she spoke, Chappell noticed that her speech was slightly slurred and that the left side of her mouth wasn’t moving normally. “I don’t know you, but your speech sounds a little slurred,” she said. “Have you been drinking?” Not at all, the woman answered — surprised by the question. Chappell introduced herself, then asked the woman if she could do a few quick tests to make sure she was O.K. Chappell asked her to smile, but the left side of the patient’s mouth did not cooperate; she asked her to shrug her shoulders, and the left side wouldn’t stay up. You need to go to the hospital, she told the woman. The woman protested; she felt fine. At least let me call my E.M.T. pals to check your blood pressure, Chappell insisted. After a fall like that, it could be high. The woman reluctantly agreed, and Chappell called her colleagues. The woman on the ground was embarrassed by the flashing lights of the emergency vehicle but allowed her blood pressure to be taken. It was sky-high. She really did need to go to the hospital. © 2020 The New York Times Company

Keyword: Stroke
Link ID: 26927 - Posted: 01.02.2020

By Catherine Matacic Falling in love is never easy. But do it in a foreign language, and complications pile up quickly, from your first fumbling attempts at deep expression to the inevitable quarrel to the family visit punctuated by remarks that mean so much more than you realize. Now, a study of two dozen terms related to emotion in nearly 2500 languages suggests those misunderstandings aren’t all in your head. Instead, emotional concepts like love, shame, and anger vary in meaning from culture to culture, even when we translate them into the same words. “I wish I had thought of this,” says Lisa Feldman Barrett, a neuroscientist and psychologist at Northeastern University in Boston. “It’s a very, very well-reasoned, clever approach.” People have argued about emotions since the ancient Greeks. Aristotle suggested they were essential to virtue. The Stoics called them antithetical to reason. And in his “forgotten” masterpiece, The Expression of the Emotions in Man and Animals, Charles Darwin wrote that they likely had a single origin. He thought every culture the world over shared six basic emotions: happiness, sadness, fear, anger, surprise, and disgust. Since then, psychologists have looked for traces of these emotions in scores of languages. And although one common experiment, which asks participants to identify emotions from photographs of facial expressions, has led to many claims of universality, critics say an overreliance on concepts from Western, industrialized societies dooms such attempts from the start. © 2019 American Association for the Advancement of Science.

Keyword: Emotions; Language
Link ID: 26907 - Posted: 12.21.2019

Thomas R. Sawallis and Louis-Jean Boë Sound doesn’t fossilize. Language doesn’t either. Even when writing systems have developed, they’ve represented full-fledged and functional languages. Rather than preserving the first baby steps toward language, they’re fully formed, made up of words, sentences and grammar carried from one person to another by speech sounds, like any of the perhaps 6,000 languages spoken today. So if you believe, as we linguists do, that language is the foundational distinction between humans and other intelligent animals, how can we study its emergence in our ancestors? Happily, researchers do know a lot about language – words, sentences and grammar – and speech – the vocal sounds that carry language to the next person’s ear – in living people. So we should be able to compare language with less complex animal communication. And that’s what we and our colleagues have spent decades investigating: How do apes and monkeys use their mouth and throat to produce the vowel sounds in speech? Spoken language in humans is an intricately woven string of syllables with consonants appended to the syllables’ core vowels, so mastering vowels was a key to speech emergence. We believe that our multidisciplinary findings push back the date for that crucial step in language evolution by as much as 27 million years. The sounds of speech Say “but.” Now say “bet,” “bat,” “bought,” “boot.” The words all begin and end the same. It’s the differences among the vowel sounds that keep them distinct in speech. © 2010–2019, The Conversation US, Inc.

Keyword: Language; Evolution
Link ID: 26893 - Posted: 12.12.2019

By Nicholas Bakalar Sleeping a lot may increase the risk for stroke, a new study has found. Chinese researchers followed 31,750 men and women, whose average age was 62, for an average of six years, using physical examinations and self-reported data on sleep. They found that compared with sleeping (or being in bed trying to sleep) seven to eight hours a night, sleeping nine or more hours increased the relative risk for stroke by 23 percent. Sleeping less than six hours a night had no effect on stroke incidence. The study, in Neurology, also found that midday napping for more than 90 minutes a day was associated with a 25 percent increased risk for stroke compared with napping 30 minutes or less. And people who both slept more than nine hours and napped more than 90 minutes were 85 percent more likely to have a stroke. The study controlled for smoking, drinking, exercise, family history of stroke, body mass index and other health and behavioral characteristics. The reason for the association is unclear, but long sleep duration is associated with increased inflammation, unfavorable lipid profiles and increased waist circumference, factors known to increase cardiovascular risk. © 2019 The New York Times Company
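
The figures above are relative risks: the incidence of stroke in one group divided by the incidence in a reference group, with "a 23 percent increased risk" meaning a ratio of 1.23. A minimal sketch of that arithmetic in Python, with made-up counts chosen purely to reproduce the reported ratios (none of these numbers come from the study):

```python
def relative_risk(cases_exposed: int, n_exposed: int,
                  cases_ref: int, n_ref: int) -> float:
    """Incidence in the exposed group divided by incidence in the
    reference group; 1.23 reads as a 23 percent increased risk."""
    return (cases_exposed / n_exposed) / (cases_ref / n_ref)

# Hypothetical counts per 10,000 people, for illustration only:
print(relative_risk(123, 10_000, 100, 10_000))  # 1.23 -> +23% (long sleep)
print(relative_risk(125, 10_000, 100, 10_000))  # 1.25 -> +25% (long naps)
print(relative_risk(185, 10_000, 100, 10_000))  # 1.85 -> +85% (both)
```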

Keyword: Stroke; Sleep
Link ID: 26890 - Posted: 12.12.2019

By Viorica Marian Psycholinguistics is a field at the intersection of psychology and linguistics, and one of its recent discoveries is that the languages we speak influence our eye movements. For example, English speakers who hear candle often look at a candy because the two words share their first syllable. Research with speakers of different languages revealed that bilingual speakers not only look at words that share sounds in one language but also at words that share sounds across their two languages. When Russian-English bilinguals hear the English word marker, they also look at a stamp, because the Russian word for stamp is marka. Even more stunning, speakers of different languages differ in their patterns of eye movements when no language is used at all. In a simple visual search task in which people had to find a previously seen object among other objects, their eyes moved differently depending on what languages they knew. For example, when looking for a clock, English speakers also looked at a cloud. Spanish speakers, on the other hand, when looking for the same clock, looked at a present, because the Spanish names for clock and present—reloj and regalo—overlap at their onset. The story doesn’t end there. Not only do the words we hear activate other, similar-sounding words—and not only do we look at objects whose names share sounds or letters even when no language is heard—but the translations of those names in other languages become activated as well in speakers of more than one language. For example, when Spanish-English bilinguals hear the word duck in English, they also look at a shovel, because the translations of duck and shovel—pato and pala, respectively—overlap in Spanish. © 2019 Scientific American
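
Every effect described above hinges on pairs of words whose spoken forms overlap at onset, within or across languages. As a rough illustration (using spelling as a crude stand-in for sound, and a toy lexicon containing only the article's examples), a function like the following picks out the competitor objects a listener might glance at:

```python
def onset_competitors(target: str, lexicon: dict[str, str],
                      n: int = 2) -> list[str]:
    """Objects whose name shares its first n letters with the target word.

    Letters are only a crude proxy for phonemes, but they capture the
    reloj/regalo and pato/pala overlaps described above.
    """
    return [obj for obj, word in lexicon.items()
            if word != target and word[:n] == target[:n]]

# Toy Spanish lexicon, limited to the article's examples.
spanish = {"clock": "reloj", "present": "regalo",
           "duck": "pato", "shovel": "pala"}

print(onset_competitors("reloj", spanish))  # ['present'] (regalo)
print(onset_competitors("pato", spanish))   # ['shovel'] (pala)
```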

Keyword: Language; Attention
Link ID: 26875 - Posted: 12.06.2019

By Virginia Morell Say “sit!” to your dog, and—if he’s a good boy—he’ll likely plant his rump on the floor. But would he respond correctly if the word were spoken by a stranger, or someone with a thick accent? A new study shows he will, suggesting dogs perceive spoken words in a sophisticated way long thought unique to humans. “It’s a very solid and interesting finding,” says Tecumseh Fitch, an expert on vertebrate communication at the University of Vienna who was not involved in the research. The way we pronounce words changes depending on our sex, age, and even social rank. Some as-yet-unknown neural mechanism enables us to filter out differences in accent and pronunciation, helping us understand spoken words regardless of the speaker. Animals like zebra finches, chinchillas, and macaques can be trained to do this, but until now only humans were shown to do this spontaneously. In the new study, Holly Root-Gutteridge, a cognitive biologist at the University of Sussex in Brighton, U.K., and her colleagues ran a test that others have used to show dogs can recognize other dogs from their barks. The researchers filmed 42 dogs of different breeds as they sat with their owners near an audio speaker that played six monosyllabic, noncommand words with similar sounds, such as “had,” “hid,” and “who’d.” The words were spoken—not by the dog’s owner—but by several strangers, men and women of different ages and with different accents. © 2019 American Association for the Advancement of Science.

Keyword: Language; Evolution
Link ID: 26866 - Posted: 12.04.2019

Nicola Davis Dolphins, like humans, have a dominant right-hand side, according to research. About 90% of humans are right-handed but we are not the only animals that show such preferences: gorillas tend to be right-handed, kangaroos are generally southpaws, and even cats have preferences for a particular side – although which is favoured appears to depend on their sex. Now researchers have found common bottlenose dolphins appear to have an even stronger right-side bias than humans. “I didn’t expect to find it in that particular behaviour, and I didn’t expect to find such a strong example,” said Dr Daisy Kaplan, co-author of the study from the Dolphin Communication Project, a non-profit organisation in the US. Researchers studying common bottlenose dolphins in the Bahamas say the preference shows up in crater feeding, whereby dolphins swim close to the ocean floor, echolocating for prey, before shoving their beaks into the sand to snaffle a meal. Writing in the journal Royal Society Open Science, Kaplan and colleagues say the animals make a sharp and sudden turn before digging in with their beaks. Crucially, however, they found this turn is almost always to the left, with the same direction taken in more than 99% of the 709 turns recorded between 2012 and 2018. The researchers say the findings indicate a right-side bias, since a left turn keeps a dolphin’s right eye and right side close to the ocean floor. The team found only four turns were made to the right and all of these were made by the same dolphin, which had an oddly shaped right pectoral fin. However, Kaplan said it was unlikely this fin was behind the right turns: two other dolphins had an abnormal or missing right fin yet still turned left.

Keyword: Laterality; Evolution
Link ID: 26861 - Posted: 12.02.2019

By James Gorman TEMPE, Ariz. — Xephos is not the author of “Dog Is Love: Why and How Your Dog Loves You,” one of the latest books to plumb the nature of dogs, but she helped inspire it. And as I scratched behind her ears, it was easy to see why. First, she fixed on me with imploring doggy eyes, asking for my attention. Then, every time I stopped scratching she nudged her nose under my hand and flipped it up. I speak a little dog, but the message would have been clear even if I didn’t: Don’t stop. We were in the home office of Clive Wynne, a psychologist at Arizona State University who specializes in dog behavior. He belongs to Xephos, a mixed breed that the Wynne family found in a shelter in 2012. Dr. Wynne’s book is an extended argument about what makes dogs special — not how smart they are, but how friendly they are. Xephos’ shameless and undiscriminating affection affected both his heart and his thinking. As Xephos nose-nudged me again, Dr. Wynne was describing genetic changes that occurred at some point in dog evolution that he says explain why dogs are so sociable with members of other species. “Hey,” Dr. Wynne said to her as she tilted her head to get the maximum payoff from my efforts, “how long have you had these genes?” No one disputes the sociability of dogs. But Dr. Wynne doesn’t agree with the scientific point of view that dogs have a unique ability to understand and communicate with humans. He thinks they have a unique capacity for interspecies love, a word that he has decided to use, throwing aside decades of immersion in scientific jargon. © 2019 The New York Times Company

Keyword: Sexual Behavior; Language
Link ID: 26848 - Posted: 11.23.2019

Jon Hamilton When we hear a sentence, or a line of poetry, our brains automatically transform the stream of sound into a sequence of syllables. But scientists haven't been sure exactly how the brain does this. Now, researchers from the University of California, San Francisco, think they've figured it out. The key is detecting a rapid increase in volume that occurs at the beginning of a vowel sound, they report Wednesday in Science Advances. "Our brain is basically listening for these time points and responding whenever they occur," says Yulia Oganian, a postdoctoral scholar at UCSF. The finding challenges a popular idea that the brain monitors speech volume continuously to detect syllables. Instead, it suggests that the brain periodically "samples" spoken language looking for specific changes in volume. The finding is "in line" with a computer model designed to simulate the way a human brain decodes speech, says Oded Ghitza, a research professor in the biomedical engineering department at Boston University who was not involved in the study. Detecting each rapid increase in volume associated with a syllable gives the brain, or a computer, an efficient way to deal with the "stream" of sound that is human speech, Ghitza says. And syllables, he adds, are "the basic Lego blocks of language." Oganian's study focused on a part of the brain called the superior temporal gyrus. "It's an area that has been known for about 150 years to be really important for speech comprehension," Oganian says. "So we knew if you can find syllables somewhere, it should be there." The team studied a dozen patients preparing for brain surgery to treat severe epilepsy. As part of the preparation, surgeons had placed electrodes over the area of the brain involved in speech. © 2019 npr
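
The cue the researchers describe, a rapid rise in loudness at each vowel onset, can be approximated from a raw waveform by tracking the amplitude envelope and marking peaks in its rate of increase. The sketch below is our own illustration of that idea, not the study's actual pipeline; the smoothing window, threshold, and minimum gap are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def peak_rate_times(audio: np.ndarray, sr: int,
                    smooth_ms: float = 10.0,
                    min_gap_ms: float = 120.0) -> np.ndarray:
    """Times (seconds) of rapid rises in the speech amplitude envelope,
    roughly one per syllable if the idea described above holds."""
    envelope = np.abs(hilbert(audio))          # instantaneous loudness
    win = max(1, int(sr * smooth_ms / 1000))
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
    rate = np.gradient(envelope) * sr          # change in loudness per second
    rate = np.clip(rate, 0, None)              # keep increases only
    peaks, _ = find_peaks(rate,
                          height=0.25 * rate.max(),  # arbitrary threshold
                          distance=max(1, int(sr * min_gap_ms / 1000)))
    return peaks / sr
```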

Keyword: Language; Hearing
Link ID: 26841 - Posted: 11.21.2019

By Nicholas Bakalar People who never learned to read and write may be at increased risk for dementia. Researchers studied 983 adults 65 and older with four or fewer years of schooling. Ninety percent were immigrants from the Dominican Republic, where there were limited opportunities for schooling. Many had learned to read outside of school, but 237 could not read or write. Over an average of three and a half years, the participants periodically took tests of memory, language and reasoning. Illiterate men and women were 2.65 times as likely as the literate to have dementia at the start of the study, and twice as likely to have developed it by the end. Illiterate people, however, did not show a faster rate of decline in skills than those who could read and write. The analysis, in Neurology, controlled for sex, hypertension, diabetes, heart disease and other dementia risk factors. “Early life exposures and early life social opportunities have an impact on later life,” said the senior author, Jennifer J. Manly, a professor of neuropsychology at Columbia. “That’s the underlying theme here. There’s a life course of exposures and engagements and opportunities that lead to a healthy brain later in life.” “We would like to expand this research to other populations,” she added. “Our hypothesis is that this is relevant and consistent across populations of illiterate adults.” © 2019 The New York Times Company

Keyword: Language; Alzheimers
Link ID: 26838 - Posted: 11.21.2019

By Knvul Sheikh Shortly after the birth of her first son, Monika Jones learned that he had a rare neurological condition that made one side of his brain abnormally large. Her son, Henry, endured hundreds of seizures a day. Despite receiving high doses of medication, his little body seemed like a rag doll as one episode blended into another. He required several surgeries, starting when he was 3 1/2 months old, eventually leading to a complete anatomical hemispherectomy, or the removal of half of his brain, when he turned 3. The procedure was first developed in the 1920s to treat malignant brain tumors. But its success in children who have brain malformations, intractable seizures or diseases where damage is confined to half the brain, has astonished even seasoned scientists. After the procedure, many of the children are able to walk, talk, read and do everyday tasks. Roughly 20 percent of patients who have the procedure go on to find gainful employment as adults. Now, research published Tuesday in the journal Cell Reports suggests that some individuals recover so well from the surgery because of a reorganization in the remaining half of the brain. Scientists identified the variety of networks that pick up the slack for the removed tissue, with some of the brain’s specialists learning to operate like generalists. “The brain is remarkably plastic,” said Dorit Kliemann, a cognitive neuroscientist at the California Institute of Technology, and the first author of the study. “It can compensate for dramatic loss of brain structure, and in some cases the remaining networks can support almost typical cognition.” The study was partially funded by a nonprofit organization that Mrs. Jones and her husband set up to advocate for others who need surgery to stop seizures. The study’s findings could provide encouragement for those seeking hemispherectomies beyond early childhood. © 2019 The New York Times Company

Keyword: Development of the Brain; Laterality
Link ID: 26837 - Posted: 11.20.2019

By Robert Martone We humans have evolved a rich repertoire of communication, from gesture to sophisticated languages. All of these forms of communication link otherwise separate individuals in such a way that they can share and express their singular experiences and work together collaboratively. In a new study, technology replaces language as a means of communicating by directly linking the activity of human brains. Electrical activity from the brains of a pair of human subjects was transmitted to the brain of a third individual in the form of magnetic signals, which conveyed an instruction to perform a task in a particular manner. This study opens the door to extraordinary new means of human collaboration while, at the same time, blurring fundamental notions about individual identity and autonomy in disconcerting ways. Direct brain-to-brain communication has been a subject of intense interest for many years, driven by motives as diverse as futurist enthusiasm and military exigency. In his book Beyond Boundaries one of the leaders in the field, Miguel Nicolelis, described the merging of human brain activity as the future of humanity, the next stage in our species’ evolution. (Nicolelis serves on Scientific American’s board of advisers.) He has already conducted a study in which he linked together the brains of several rats using complex implanted electrodes known as brain-to-brain interfaces. Nicolelis and his co-authors described this achievement as the first “organic computer” with living brains tethered together as if they were so many microprocessors. The animals in this network learned to synchronize the electrical activity of their nerve cells to the same extent as those in a single brain. The networked brains were tested for things such as their ability to discriminate between two different patterns of electrical stimuli, and they routinely outperformed individual animals. © 2019 Scientific American

Keyword: Robotics; Language
Link ID: 26770 - Posted: 10.30.2019

By Owain Clarke BBC Wales health correspondent World-leading research is helping scientists find new ways of trying to help younger people who have had a stroke get back to work. The study, led by Manchester Metropolitan University, found the speed a patient can walk is a major factor in determining how likely they are to be able to return to the workplace. Researchers have been working with physiotherapists and patients in Wales, and the work includes moving rehabilitation outdoors, including to the Brecon Beacons. It is hoped it could lead to new rehabilitation methods being developed to target younger stroke patients. The average age to have a stroke in the UK is 72 for men and 78 for women, but there has been a 40% worldwide rise in people under 65 who have strokes in the last decade, according to the researchers. What does the science say? The study looked at 46 patients across Wales who had a stroke when younger than 65; only 23% were able to return to work. It found walking speed was a key predictor of whether a younger adult who has had a stroke could return to work, and calculated that a walking speed of 0.93 m/s (about 3ft a second) was a good benchmark for the likelihood of returning to work; as a result, this could be a goal set during rehabilitation. As well as looking at the best environment for younger patients to recover in, the team is now using CGI technology to study the joints of stroke patients and find out how they walk. Nikki Tomkinson had a stroke at 53. "The world started shifting" while she was out driving in Cardiff. © 2019 BBC
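
Applying the study's benchmark is simple arithmetic: time a patient over a known distance and compare the average speed against 0.93 m/s. A minimal sketch; the 10-metre distance and the timing below are invented for illustration (a timed 10-metre walk is a common clinical test, but the study's own protocol is not described here):

```python
RETURN_TO_WORK_BENCHMARK = 0.93  # m/s, from the study described above

def walking_speed(distance_m: float, time_s: float) -> float:
    """Average walking speed in metres per second."""
    return distance_m / time_s

speed = walking_speed(10.0, 12.5)  # hypothetical timed 10-metre walk
print(f"{speed:.2f} m/s:",
      "at or above" if speed >= RETURN_TO_WORK_BENCHMARK else "below",
      "the 0.93 m/s benchmark")
```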

Keyword: Stroke
Link ID: 26759 - Posted: 10.28.2019

By Sofie Bates Make some noise for the white bellbirds of the Brazilian Amazon, now the bird species with the loudest known mating call. The birds (Procnias albus) reach about 125 decibels on average at the loudest point in one of their songs, researchers report October 21 in Current Biology. Calls of the previous record-holder — another Amazonian bird called the screaming piha (Lipaugus vociferans) — maxed out around 116 decibels on average. This difference means that bellbirds can generate a soundwave with triple the pressure of that made by pihas, says Jeff Podos, a behavioral ecologist at the University of Massachusetts Amherst, who did the research along with ornithologist Mario Cohn-Haft, of the National Institute of Amazon Research in Manaus, Brazil. The team measured sound intensity from three pihas and eight bellbirds. Each sounded off at different distances from the scientists. So to make an accurate comparison, the researchers used rangefinder binoculars, with lasers to measure distance, to determine how far away each bird was. Then, they calculated how loud the sound would be a meter from each bird to crown a winner. The small white bellbird, which weighs less than 250 grams, appears to be built for creating loud sounds, with thick abdominal muscles and a beak that opens extra wide. “Having this really wide beak helps their anatomy be like a musical instrument,” Podos says. Being the loudest may come with a cost: White bellbirds can’t hold a note for long because they run out of air in their lungs. Their loudest call sounds like two staccato beats of an air horn while the calls of screaming pihas gradually build to the highest point. © Society for Science & the Public 2000–2019
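
The "triple the pressure" figure follows from the decibel scale itself: sound pressure level is logarithmic, with every 20 dB corresponding to a tenfold increase in pressure, so the 9 dB gap between the two species works out to a factor of about 2.8. A one-line check:

```python
def pressure_ratio(db_louder: float, db_quieter: float) -> float:
    """Sound-pressure ratio implied by a decibel difference
    (20 dB per tenfold increase in pressure)."""
    return 10 ** ((db_louder - db_quieter) / 20)

print(round(pressure_ratio(125, 116), 2))  # 2.82, i.e. roughly triple
```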

Keyword: Sexual Behavior; Hearing
Link ID: 26733 - Posted: 10.22.2019

By Nechama Moring The first time my then-partner threw me against a wall, I blamed myself. I was late coming home from work, and I hadn’t even greeted him when I walked through our door. I immediately started complaining about the unwashed dishes and food scraps littering our kitchen. He was angry, shouting at me, and then I felt his arms around me, lifting me slightly. I blacked out when the back of my head hit the kitchen wall. The nature of abuse is that it escalates, and soon my partner was routinely injuring my head, having learned that my hair would effectively hide any bruises or evidence. Over the course of the last year of our relationship, I probably sustained at least three concussions, though none were formally diagnosed. My previously infrequent migraines became almost daily realities, and my work performance tanked, along with my concentration. Simple tasks became overwhelming. Thoughts slipped from my head before I was able to act on them. I lost my ability to form coherent sentences, and I struggled to find words for even mundane items: train, telephone, exit. Exit. I couldn’t plan for shit. I am part of what Eve Valera calls an “invisible public health epidemic” of untreated traumatic brain injuries among survivors of intimate partner violence. Valera, an assistant professor in psychiatry at Harvard Medical School who runs a brain-imaging research lab at Massachusetts General Hospital, estimates that millions of women and people of marginalized genders have suffered from both intimate partner violence and untreated concussions. Yet concussions — a form of traumatic brain injury — are generally viewed as a sports-related problem. Concussion research has focused primarily on the relatively tiny population of men who play professional football. Copyright 2019 Undark

Keyword: Brain Injury/Concussion
Link ID: 26720 - Posted: 10.18.2019

Nicoletta Lanese Cell transplantation therapy offers a promising route to recovery after stroke, but the grafted cells face a harsh environment, with elevated levels of free radicals and proinflammatory cytokines, compromised blood supply, and degraded neural connectivity, says Shan Ping Yu, a neurology researcher at Emory University School of Medicine. He and his colleagues aimed to build a new tool to help stem cells integrate with host neural circuitry after implantation. Scientists have long known that stimulating transplanted neural stem cells encourages them to differentiate into neurons and connect with nearby host cells. Many researchers turn to optogenetics to excite grafted stem cells, but because light travels poorly through dense tissue, the technique requires researchers to stick a laser into their subjects’ brains. So Yu and his coauthors turned instead to a type of enzyme that grants fireflies and jellyfish their glow: luciferase. “These proteins carry their own light, so they do not need a light source,” says Yu. The researchers injected neural progenitor cells that had been derived from induced pluripotent stem cells (iPSCs) into the brains of mouse models of stroke. The cells were genetically engineered to express a fusion protein called luminopsin 3 (LMO3), crafted from the bioluminescent enzyme Gaussia luciferase and the light-sensitive protein VChR1. LMO3 activates in response to either physical light or a molecule called CTZ, which can be delivered noninvasively through the nose into the brain tissue. The fusion protein can be hooked up to either excitatory or inhibitory channels in the neurons to either stimulate or tamp down the cells’ function. Yu and his colleagues dubbed the new technique “optochemogenetics.” © 1986–2019 The Scientist.

Keyword: Stroke
Link ID: 26712 - Posted: 10.17.2019