Links for Keyword: Hearing



Links 1 - 20 of 688

By Mekado Murphy Creating an audioscape for a movie about a musician losing his hearing is more complicated than it may seem. The filmmakers behind the new drama “Sound of Metal” wanted to take audiences into the experience of its lead character, Ruben (Riz Ahmed), a punk-metal drummer who is forced to look at his life differently as he goes deaf. Judging by the overwhelmingly positive reviews, the filmmakers pulled off that difficult feat. In The New York Times, Jeannette Catsoulis raved about “an extraordinarily intricate sound design that allows us to borrow Ruben’s ears.” The film (streaming on Amazon) often places us in Ruben’s aural perspective as he navigates his new reality. (It’s worth watching with headphones or a good sound system.) “I had many conversations with people who have lost their hearing and no two people’s experience is the same,” said Darius Marder, the film’s co-writer and director. “But one thing that’s pretty much true for all people who are deaf is that they don’t lose sound entirely. It isn’t silence.” Instead, Marder and his sound designer, Nicolas Becker, wanted to capture those low-frequency vibrations and other tones. The approach was adjusted for different moments in Ruben’s experience. In separate Zoom interviews, Marder and Becker focused on three scenes as they spoke about some of the techniques and ideas they used to tap into Ruben’s aural experience, including putting microphones inside skulls and mouths. One of the first times there’s a notable change in Ruben’s hearing comes before a show, as he is setting up the merchandise table with his bandmate and girlfriend, Lou (Olivia Cooke). At one point, he experiences a high-pitched ringing, then voices are muffled. Ahmed’s response in that moment isn’t just acting. The filmmakers had custom-fit earpieces made for the actor so they could feed him a high-frequency sound they had created. © 2020 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27606 - Posted: 12.05.2020

By Lucy Hicks Ogre-faced spiders might be an arachnophobe’s worst nightmare. The enormous eyes that give them their name allow them to see 2000 times better than we can at night. And these creepy crawlers are lightning-fast predators, snatching prey in a fraction of a second with mini, mobile nets. Now, new research suggests these arachnids use their legs not only to scuttle around, but also to hear. In light of their excellent eyesight, this auditory skill “is a surprise,” says George Uetz, who studies the behavioral ecology of spiders at the University of Cincinnati and wasn’t involved in the new research. Spiders don’t have ears—generally a prerequisite for hearing. So, despite the vibration-sensing hairs and receptors on most arachnids’ legs, scientists long thought spiders couldn’t hear sound as it traveled through the air, but instead felt vibrations through surfaces. The first clue they might be wrong was a 2016 study that found that a species of jumping spider can sense vibrations in the air from sound waves. Enter the ogre-faced spider. Rather than build a web and wait for their prey, these fearsome hunters “take a much more active role,” says Jay Stafstrom, a sensory ecologist at Cornell University. The palm-size spiders hang upside down from small plants on a silk line and create a miniweb across their four front legs, which they use as a net to catch their next meal. The spiders either lunge at bugs wandering below or flip backward to ensnare flying insects in midair. © 2020 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27559 - Posted: 10.31.2020

Frank R. Lin, M.D., Ph.D. When I was going through my otolaryngology residency at Johns Hopkins in the early 2000s, I was struck by the disparity between how hearing loss was managed in children and in older adults. In the case of the child, it was a medical priority to ensure access to a hearing aid so he or she could communicate optimally at home and in school, and such devices were covered by insurance. This approach was justified based on extensive research demonstrating that hearing loss could have a substantial impact on a child’s cognitive and brain development, with lifetime consequences for educational and vocational achievement. For the older adult, the approach was radically different, even if the degree of hearing impairment was the same as in the child. The adult would be reassured that the deficit was to be expected, based on his or her age, and told that a hearing aid, if desired, would represent an out-of-pocket expense averaging about $4,000. Medicare provided no coverage for hearing aids. There was no robust research demonstrating meaningful consequences of hearing loss for older adults, as there was for children, and the clinical approach was typically guided by the notion that it was a very common, and hence inconsequential, aspect of aging. But this approach didn’t make sense, given what I had observed clinically. Older adults with hearing loss recounted to me their sense of isolation and loneliness, and the mental fatigue of constantly concentrating to follow conversations. Family members would often describe a decline in patients’ general well-being and mental acuity as they struggled to hear. For those who obtained effective treatment for their hearing loss with hearing aids or a cochlear implant, the effects were often equally dramatic. Patients spoke of reengaging with family, no longer getting fatigued from straining to listen, and becoming their “old selves” again. 
If hearing was fundamentally important for children and represented a critical sensory input that could affect brain function, wouldn’t loss of hearing have corresponding implications for the aging brain and its function? © 2020 The Dana Foundation.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory and Learning
Link ID: 27525 - Posted: 10.16.2020

By Cathleen O’Grady Tinnitus—a constant ringing or buzzing in the ears that affects about 15% of people—is difficult to understand and even harder to treat. Now, scientists have shown that shocking the tongue—combined with a carefully designed sound program—can reduce symptoms of the disorder, not just while patients are being treated, but up to 1 year later. It’s “really important” work, says Christopher Cederroth, a neurobiologist at the University of Nottingham, University Park, who was not involved with the study. The finding, he says, joins other research that has shown “bimodal” stimulation—which uses sound alongside some kind of gentle electrical shock—can help the brain discipline misbehaving neurons. Hubert Lim, a biomedical engineer at the University of Minnesota, Twin Cities, hit on the role of the tongue in tinnitus by accident. A few years ago, he experimented with using a technique called deep brain stimulation to restore his patients’ hearing. When he inserted a pencil-size rod covered in electrodes directly into the brains of five patients, some of those electrodes landed slightly outside the target zone—a common problem with deep brain stimulation, Lim says. Later, when he started up the device to map out its effects on the brain, a patient who had been bothered by ringing ears for many years said, “Oh, my tinnitus! I can’t hear my tinnitus,” Lim recalls. With certain kinds of tinnitus, people hear real sounds. For instance, there might be repeated muscular contractions in the ear, Lim says. But for many people, it’s the brain that’s to blame, perceiving sounds that aren’t there. One potential explanation for the effect is that hearing loss causes the brain to overcompensate for the frequencies it can no longer hear. © 2020 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Higher Cognition
Link ID: 27517 - Posted: 10.10.2020

By Robert Martone When a concert opens with a refrain from your favorite song, you are swept up in the music, happily tapping to the beat and swaying with the melody. All around you, people revel in the same familiar music. You can see that many of them are singing, the lights flashing to the rhythm, while other fans are clapping in time. Some wave their arms over their head, and others dance in place. The performers and audience seem to be moving as one, as synchronized to one another as the light show is to the beat. A new paper in the journal NeuroImage has shown that this synchrony can be seen in the brain activities of the audience and performer. And the greater the degree of synchrony, the study found, the more the audience enjoys the performance. This result offers insight into the nature of musical exchanges and demonstrates that the musical experience runs deep: we dance and feel the same emotions together, and our neurons fire together as well. In the study, a violinist performed brief excerpts from a dozen different compositions, which were videotaped and later played back to a listener. Researchers tracked changes in local brain activity by measuring levels of oxygenated blood. (More oxygen suggests greater activity, because the body works to keep active neurons supplied with it.) Musical performances caused increases in oxygenated blood flow to areas of the brain related to understanding patterns, interpersonal intentions and expression. Data for the musician, collected during a performance, were compared with those for the listener during playback. In all, there were 12 selections of familiar musical works, including “Edelweiss,” Franz Schubert’s “Ave Maria,” “Auld Lang Syne” and Ludwig van Beethoven’s “Ode to Joy.” The brain activities of 16 listeners were compared with those of a single violinist. © 2020 Scientific American,

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 27277 - Posted: 06.03.2020

A team of researchers has generated a developmental map of a key sound-sensing structure in the mouse inner ear. Scientists at the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health, and their collaborators analyzed data from 30,000 cells from mouse cochlea, the snail-shaped structure of the inner ear. The results provide insights into the genetic programs that drive the formation of cells important for detecting sounds. The study also sheds light specifically on the underlying cause of hearing loss linked to Ehlers-Danlos syndrome and Loeys-Dietz syndrome. The study data is shared on a unique platform open to any researcher, creating an unprecedented resource that could catalyze future research on hearing loss. Led by Matthew W. Kelley, Ph.D., chief of the Section on Developmental Neuroscience at the NIDCD, the study appeared online in Nature Communications. The research team includes investigators at the University of Maryland School of Medicine, Baltimore; Decibel Therapeutics, Boston; and King’s College London. “Unlike many other types of cells in the body, the sensory cells that enable us to hear do not have the capacity to regenerate when they become damaged or diseased,” said NIDCD Director Debara L. Tucci, M.D., who is also an otolaryngology-head and neck surgeon. “By clarifying our understanding of how these cells are formed in the developing inner ear, this work is an important asset for scientists working on stem cell-based therapeutics that may treat or reverse some forms of inner ear hearing loss.”

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory and Learning
Link ID: 27268 - Posted: 05.29.2020

Oliver Wainwright Some whisper gently into the microphone, while tapping their nails along the spine of a book. Others take a bar of soap and slice it methodically into tiny cubes, letting the pieces clatter into a plastic tray. There are those who dress up as doctors and pretend to perform a cranial nerve exam, and the ones who eat food as noisily as they can, recording every crunch and slurp in 3D stereo sound. To an outsider, the world of ASMR videos can be a baffling, kooky place. In a fast-growing corner of the internet, millions of people are watching each other tap, rattle, stroke and whisper their way through hours of homemade videos, with the aim of being lulled to sleep, or in the hope of experiencing “the tingles” – AKA, the autonomous sensory meridian response. “It feels like a rush of champagne bubbles at the top of your head,” says curator James Taylor-Foster. “There’s a mild sense of euphoria and a feeling of deep calm.” Taylor-Foster has spent many hours trawling the weirdest depths of YouTube in preparation for a new exhibition, Weird Sensation Feels Good, at ArkDes, Sweden’s national centre for architecture and design, on what he sees as one of the most important creative movements to emerge from the internet. (Though the museum has been closed due to the coronavirus pandemic, the show will be available to view online.) It will be the first major exhibition about ASMR, a term that was coined a decade ago when cybersecurity expert Jennifer Allen was looking for a word to describe the warm effervescence she felt in response to certain triggers. She had tried searching the internet for things like “tingling head and spine” or “brain orgasm”. In 2009, she hit upon a post on a health message board titled WEIRD SENSATION FEELS GOOD. © 2020 Guardian News & Media Limited

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Higher Cognition
Link ID: 27169 - Posted: 04.04.2020

Jon Hamilton A song fuses words and music. Yet the human brain can instantly separate a song's lyrics from its melody. And now scientists think they know how this happens. A team led by researchers at McGill University reported in Science Thursday that song sounds are processed simultaneously by two separate brain areas – one in the left hemisphere and one in the right. "On the left side you can decode the speech content but not the melodic content, and on the right side you can decode the melodic content but not the speech content," says Robert Zatorre, a professor at McGill University's Montreal Neurological Institute. The finding explains something doctors have observed in stroke patients for decades, says Daniela Sammler, a researcher at the Max Planck Institute for Cognition and Neurosciences in Leipzig, Germany, who was not involved in the study. "If you have a stroke in the left hemisphere you are much more likely to have a language impairment than if you have a stroke in the right hemisphere," Sammler says. Moreover, brain damage to certain areas of the right hemisphere can affect a person's ability to perceive music. The study was inspired by songbirds, Zatorre says. Studies show that their brains decode sounds using two separate measures. One assesses how quickly a sound fluctuates over time. The other detects the frequencies in a sound. © 2020 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27082 - Posted: 02.28.2020

By Jillian Kramer Scientists often test auditory processing in artificial, silent settings, but real life usually comes with a background of sounds like clacking keyboards, chattering voices and car horns. Recently researchers set out to study such processing in the presence of ambient sound—specifically the even, staticlike hiss of white noise. Their result is counterintuitive, says Tania Rinaldi Barkat, a neuroscientist at the University of Basel: instead of impairing hearing, a background of white noise made it easier for mice to differentiate between similar tones. Barkat is senior author of the new study, published last November in Cell Reports. It is easy to distinguish notes on opposite ends of a piano keyboard. But play two side by side, and even the sharpest ears might have trouble telling them apart. This is because of how the auditory pathway processes the simplest sounds, called pure frequency tones: neurons close together respond to similar tones, but each neuron responds better to one particular frequency. The degree to which a neuron responds to a certain frequency is called its tuning curve. The researchers found that playing white noise narrowed neurons’ frequency tuning curves in mouse brains. “In a simplified way, white noise background—played continuously and at a certain sound level—decreases the response of neurons to a tone played on top of that white noise,” Barkat says. And by reducing the number of neurons responding to the same frequency at the same time, the brain can better distinguish between similar sounds. © 2020 Scientific American,

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27074 - Posted: 02.26.2020

By Kim Tingley Hearing loss has long been considered a normal, and thus acceptable, part of aging. It is common: Estimates suggest that it affects two out of three adults age 70 and older. It is also rarely treated. In the U.S., only about 14 percent of adults who have hearing loss wear hearing aids. An emerging body of research, however, suggests that diminished hearing may be a significant risk factor for Alzheimer’s disease and other forms of dementia — and that the association between hearing loss and cognitive decline potentially begins at very low levels of impairment. In November, a study published in the journal JAMA Otolaryngology — Head and Neck Surgery examined data on hearing and cognitive performance from more than 6,400 people 50 and older. Traditionally, doctors diagnose impairment when someone experiences a loss in hearing of at least 25 decibels, a somewhat arbitrary threshold. But for the JAMA study, researchers included hearing loss down to around zero decibels in their analysis and found that even these milder deficits predicted correspondingly lower scores on cognitive tests. “It seemed like the relationship starts the moment you have imperfect hearing,” says Justin Golub, the study’s lead author and an ear, nose and throat doctor at the Columbia University Medical Center and NewYork-Presbyterian. Now, he says, the question is: Does hearing loss actually cause the cognitive problems it has been associated with and if so, how? Preliminary evidence linking dementia and hearing loss was published in 1989 by doctors at the University of Washington, Seattle, who compared 100 patients with Alzheimer’s-like dementia with 100 demographically similar people without it and found that those who had dementia were more likely to have hearing loss, and that the extent of that loss seemed to correspond with the degree of cognitive impairment. 
But that possible connection wasn’t rigorously investigated until 2011, when Frank Lin, an ear, nose and throat doctor at Johns Hopkins School of Medicine, and colleagues published the results of a longitudinal study that tested the hearing of 639 older adults who were dementia-free and then tracked them for an average of nearly 12 years, during which time 58 had developed Alzheimer’s or another cognitive impairment. They discovered that a subject’s likelihood of developing dementia increased in direct proportion to the severity of his or her hearing loss at the time of the initial test. The relationship seems to be “very, very linear,” Lin says, meaning that the greater the hearing deficit, the greater the risk a person will develop the condition. © 2020 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory and Learning
Link ID: 27057 - Posted: 02.20.2020

By Jane E. Brody Every now and then I write a column as much to push myself to act as to inform and motivate my readers. What follows is a prime example. Last year in a column entitled “Hearing Loss Threatens Mind, Life and Limb,” I summarized the current state of knowledge about the myriad health-damaging effects linked to untreated hearing loss, a problem that afflicts nearly 38 million Americans and, according to two huge recent studies, increases the risk of dementia, depression, falls and even cardiovascular diseases. Knowing that my own hearing leaves something to be desired, the research I did for that column motivated me to get a proper audiology exam. The results indicated that a well-fitted hearing aid could help me hear significantly better in the movies, theater, restaurants, social gatherings, lecture halls, even in the locker room where the noise of hair dryers, hand dryers and swimsuit wringers often challenges my ability to converse with my soft-spoken friends. That was six months ago, and I’ve yet to go back to get that recommended hearing aid. Now, though, I have a new source of motivation. A large study has documented that even among people with so-called normal hearing, those with only slightly poorer hearing than perfect can experience cognitive deficits. That means a diminished ability to get top scores on standardized tests of brain function, like matching numbers with symbols within a specified time period. But while you may never need or want to do that, you most likely do want to maximize and maintain cognitive function: your ability to think clearly, plan rationally and remember accurately, especially as you get older. While under normal circumstances, cognitive losses occur gradually as people age, the wisest course may well be to minimize and delay them as long as possible and in doing so, reduce the risk of dementia. 
Hearing loss is now known to be the largest modifiable risk factor for developing dementia, exceeding that of smoking, high blood pressure, lack of exercise and social isolation, according to an international analysis published in The Lancet in 2017. © 2019 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26923 - Posted: 12.30.2019

By Carolyn Gramling Exceptionally preserved skulls of a mammal that lived alongside the dinosaurs may be offering scientists a glimpse into the evolution of the middle ear. The separation of the three tiny middle ear bones — known popularly as the hammer, anvil and stirrup — from the jaw is a defining characteristic of mammals. The evolutionary shift of those tiny bones, which started out as joints in ancient reptilian jaws and ultimately split from the jaw completely, gave mammals greater sensitivity to sound, particularly at higher frequencies (SN: 3/20/07). But finding well-preserved skulls from ancient mammals that can help reveal the timing of this separation is a challenge. Now, scientists have six specimens — four nearly complete skeletons and two fragmented specimens — of a newly described, shrew-sized critter dubbed Origolestes lii that lived about 123 million years ago. O. lii was part of the Jehol Biota, an ecosystem of ancient wetlands-dwellers that thrived between 133 million and 120 million years ago in what’s now northeastern China. The skulls on the nearly complete skeletons were so well-preserved that they were able to be examined in 3-D, say paleontologist Fangyuan Mao of the Chinese Academy of Sciences in Beijing and colleagues. That analysis suggests that O. lii’s middle ear bones were fully separated from its jaw, the team reports online December 5 in Science. Fossils from an older, extinct line of mammals have shown separated middle ear bones, but this newfound species would be the first of a more recent lineage to exhibit this evolutionary advance. © Society for Science & the Public 2000–2019

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26880 - Posted: 12.07.2019

By Jade Wu What do the sounds of whispering, crinkling paper, and tapping fingernails have in common? What about the sight of soft paint brushes on skin, soap being gently cut to pieces, and hand movements like turning the pages of a book? Well, if you are someone who experiences the autonomous sensory meridian response—or ASMR, for short—you may recognize these seemingly ordinary sounds and sights as “triggers” for the ASMR experience. No idea what I’m talking about? Don’t worry, you’re actually in the majority. Most people, myself included, aren’t affected by these triggers. But what happens to those who are? What is the ASMR experience? It’s described as a pleasantly warm and tingling sensation that starts on the scalp and moves down the neck and spine. ASMR burst onto the Internet scene in 2007, according to Wikipedia, when a woman with the username “okaywhatever” described her experience of ASMR sensations in an online health discussion forum. At the time, there was no name for this weird phenomenon. But by 2010, someone called Jennifer Allen had named the experience, and from there, ASMR became an Internet sensation. Today, there are hundreds of ASMR YouTubers who collectively post over 200 videos of ASMR triggers per day, as reported by a New York Times article in April 2019. Some ASMR YouTubers have become bona fide celebrities with ballooning bank accounts, millions of fans, and enough fame to be stopped on the street for selfies. There’s been some controversy. Some people doubt whether this ASMR experience is “real,” or just the result of recreational drugs or imagined sensations. Some have chalked the phenomenon up to a symptom of loneliness among Generation Z, who get their dose of intimacy from watching strangers pretend to do their makeup without having to interact with real people. Some people are even actively put off by ASMR triggers. One of my listeners, Katie, said that most ASMR videos just make her feel agitated. 
But another listener, Candace, shared that she has been unknowingly chasing ASMR since she was a child watching BBC. © 2019 Scientific American

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 26873 - Posted: 12.05.2019

Adam Miller · CBC News New research is shedding light on how the brain interacts with music. It also highlights how challenging it is to study the issue effectively due to the highly personalized nature of how we interpret it. "Music is very subjective," says Dr. Daniel Levitin, a professor of neuroscience and music at McGill University in Montreal and author of the bestselling book This is Your Brain on Music. "People have their own preferences and their own experience and to some extent baggage that they bring to all of this — it is challenging." Levitin says there are more researchers studying the neurological effects of music now than ever before. From 1998 to 2008 there were only four media reports of evidence-based uses of music in research, while from 2009 to 2019 there were 185, Levitin said in a recent paper for the journal Music and Medicine. It's a "great time for music and brain research" because more people are well-trained and skilled at conducting rigorous experiments, according to Levitin. Emerging research reveals challenges A new study by researchers in Germany and Norway used artificial intelligence to analyze levels of "uncertainty" and "surprise" in 80,000 chords from 745 commercially successful pop songs on the U.S. Billboard charts. The research, published Thursday in Current Biology, found that chords provided more pleasure to the listener both when there is uncertainty in anticipating what comes next, and from the surprise the music elicits when the chords deviate from expectations. ©2019 CBC/Radio-Canada

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26807 - Posted: 11.09.2019

By Jon Cohen On a lightly snowing Sunday evening, a potential participant in Denis Rebrikov’s controversial plans to create gene-edited babies meets with me at a restaurant in a Moscow suburb. She does not want to be identified beyond her patronymic, Yevgenievna. We sit at a corner table in an empty upstairs section of the restaurant while live Georgian music plays downstairs. Yevgenievna, in her late 20s, cannot hear it—or any music. She has been deaf since birth. But with the help of a hearing aid that’s linked to a wireless microphone, which she places on the table, she can hear some sounds, and she is adept at reading lips. She speaks to me primarily in Russian, through a translator, but she is also conversant in English. Yevgenievna and her husband, who is partially deaf, want to have children who will not inherit hearing problems. There is nothing illicit about our discussion: Russia has no clear regulations prohibiting Rebrikov’s plan to correct the deafness mutation in an in vitro fertilization (IVF) embryo. But Yevgenievna is uneasy about publicity. “We were told if we become the first couple to do this experiment we’ll become famous, and HBO already tried to reach me,” Yevgenievna says. “I don’t want to be well known like an actor and have people bother me.” She is also deeply ambivalent about the procedure itself, a pioneering and potentially risky use of the CRISPR genome editor. The couple met on vk.com, a Russian Facebook of sorts, in a chat room for people who are hearing impaired. Her husband could hear until he was 15 years old, and still gets by with hearing aids. They have a daughter—Yevgenievna asks me not to reveal her age—who failed a hearing test at birth. Doctors initially believed it was likely a temporary problem produced by having a cesarean section, but 1 month later, her parents took her to a specialized hearing clinic. “We were told our daughter had zero hearing,” Yevgenievna says. 
“I was shocked, and we cried.” © 2019 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26732 - Posted: 10.22.2019

By Kelly Servick The brain has a way of repurposing unused real estate. When a sense like sight is missing, corresponding brain regions can adapt to process new input, including sound or touch. Now, a study of blind people who use echolocation—making clicks with their mouths to judge the location of objects when sound bounces back—reveals a degree of neural repurposing never before documented. The research shows that a brain area normally devoted to the earliest stages of visual processing can use the same organizing principles to interpret echoes as it would to interpret signals from the eye. In sighted people, messages from the retina are relayed to a region at the back of the brain called the primary visual cortex. We know the layout of this brain region corresponds to the layout of physical space around us: Points that are next to each other in our environment project onto neighboring points on the retina and activate neighboring points in the primary visual cortex. In the new study, researchers wanted to know whether blind echolocators used this same type of spatial mapping in the primary visual cortex to process echoes. The researchers asked blind and sighted people to listen to recordings of a clicking sound bouncing off an object placed at different locations in a room while they lay in a functional magnetic resonance imaging scanner. The researchers found that expert echolocators—unlike sighted people and blind people who don’t use echolocation—showed activation in the primary visual cortex similar to that of sighted people looking at visual stimuli. © 2019 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory and Learning
Link ID: 26663 - Posted: 10.02.2019

Ian Sample Science editor When Snowball the sulphur-crested cockatoo revealed his first dance moves a decade ago he became an instant sensation. The foot-tapping, head-bobbing bird boogied his way on to TV talkshows and commercials and won an impressive internet audience. But that was merely the start. A new study of the prancing parrot points to a bird at the peak of his creative powers. In performances conducted from the back of an armchair, Snowball pulled 14 distinct moves – a repertoire that would put many humans to shame. Footage of Snowball in action shows him smashing Another One Bites the Dust by Queen and Cyndi Lauper’s Girls Just Wanna Have Fun with a dazzling routine of head-bobs, foot-lifts, body-rolls, poses and headbanging. In one move, named the Vogue, Snowball moves his head from one side of a lifted foot to another. “We were amazed,” said Aniruddh Patel, a psychology professor at Tufts University in Medford, Massachusetts. “There are moves in there, like the Madonna Vogue move, that I just can’t believe.” “It seems that dancing to music isn’t purely a product of human culture. The fact that we see this in another animal suggests that if you have a brain with certain cognitive and neural capacities, you are predisposed to dance,” he added. It all started, as some things must, with the Backstreet Boys. In 2008, Patel, who has long studied the origins of musicality, watched a video on the internet of Snowball dancing in time to the band’s track Everybody. He contacted Irena Schulz, who owned the bird shelter where Snowball lived, and with her soon launched a study of Snowball’s dancing prowess. © 2019 Guardian News & Media Limited

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26400 - Posted: 07.09.2019

By Matthew Hutson LONG BEACH, CALIFORNIA—Spies may soon have another tool to carry out their shadowy missions: a new device that uses sound to “see” around corners. Previously, researchers developed gadgets that bounced light waves around corners to catch reflections and see things out of the line of sight. To see whether they could do something similar with sound, another group of scientists built a hardware prototype—a vertical pole adorned with off-the-shelf microphones and small car speakers. The speakers emitted a series of chirps, which bounced off a nearby wall at an angle before hitting a hidden object on another wall—a poster board cutout of the letter H. Scientists then moved their rig bit by bit, each time making more chirps, which bounced back the way they came, into the microphones. Using algorithms from seismic imaging, the system reconstructed a rough image of the letter H. The researchers also imaged a setup with the letters L and T and compared their acoustic results with an optical method. The optical method, which requires expensive equipment, failed to reproduce the more-distant L, and it took more than an hour, compared with just 4.5 minutes for the acoustic method. The researchers will present the work here Wednesday at the Computer Vision and Pattern Recognition conference. © 2019 American Association for the Advancement of Science
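The seismic-imaging idea the article alludes to can be illustrated with a toy delay-and-sum (backprojection) reconstruction: for each candidate point in space, sum the recorded echo amplitude at that point's round-trip time of flight from each measurement position. Everything below — the function names, the co-located speaker-and-microphone model, the sampling grid — is an illustrative sketch under simplifying assumptions, not the researchers' actual algorithm.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def backproject(recordings, mic_positions, grid_points, fs):
    """Toy delay-and-sum reconstruction.

    For each candidate point, look up each recording at the sample index
    corresponding to the round-trip travel time (emitter -> point -> mic,
    with emitter and mic assumed co-located) and accumulate the amplitudes.
    Points where echoes line up across measurement positions sum coherently.
    """
    image = np.zeros(len(grid_points))
    for rec, pos in zip(recordings, mic_positions):
        dists = np.linalg.norm(grid_points - pos, axis=1)     # one-way distance
        delays = np.round(2 * dists / C * fs).astype(int)     # round-trip samples
        valid = delays < len(rec)                             # ignore out-of-range
        image[valid] += rec[delays[valid]]
    return image
```

Moving the rig "bit by bit," as the researchers did, corresponds to adding more `(recording, position)` pairs: each extra position sharpens the image, because only true scatterer locations stay consistent across all of them.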

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26333 - Posted: 06.18.2019

Hannah Devlin Science correspondent A mind-controlled hearing aid that allows the wearer to focus on particular voices has been created by scientists, who say it could transform the ability of those with hearing impairments to cope with noisy environments. The device mimics the brain’s natural ability to single out and amplify one voice against background conversation. Until now, even the most advanced hearing aids have worked by boosting all voices at once, which can be experienced as a cacophony of sound for the wearer, especially in crowded environments. Nima Mesgarani, who led the latest advance at Columbia University in New York, said: “The brain area that processes sound is extraordinarily sensitive and powerful. It can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison.” This can severely hinder a wearer’s ability to join in conversations, making busy social occasions particularly challenging. Scientists have been working for years to resolve this problem, known as the cocktail party effect. The brain-controlled hearing aid appears to have cracked the problem using a combination of artificial intelligence and sensors designed to monitor the listener’s brain activity. The hearing aid first uses an algorithm to automatically separate the voices of multiple speakers. It then compares these audio tracks to the brain activity of the listener. Previous work by Mesgarani’s lab found that it is possible to identify which person someone is paying attention to, as their brain activity tracks the sound waves of that voice most closely. © 2019 Guardian News & Media Limited
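The attention-decoding step described above — separate the voices, then find the one whose sound envelope the listener's brain activity tracks most closely, and boost it — can be sketched as a simple correlation pick. The function names and the use of Pearson correlation here are illustrative assumptions, not Mesgarani's published pipeline.

```python
import numpy as np

def select_attended_source(source_envelopes, neural_envelope):
    """Pick the separated voice whose amplitude envelope best matches the
    listener's neural activity, using Pearson correlation as the criterion.
    Returns the index of the winning source."""
    correlations = [np.corrcoef(env, neural_envelope)[0, 1]
                    for env in source_envelopes]
    return int(np.argmax(correlations))

def remix(sources, attended_idx, boost=4.0):
    """Amplify the attended source relative to the others and renormalize,
    mimicking the hearing aid's selective boosting of one voice."""
    gains = [boost if i == attended_idx else 1.0 for i in range(len(sources))]
    mix = sum(g * s for g, s in zip(gains, sources))
    return mix / np.max(np.abs(mix))  # normalize peak to avoid clipping
```

In a real device the neural envelope would come from electrodes and the separated sources from a neural-network speech separator; this sketch only shows why tracking "the sound waves of that voice most closely" is enough to identify the attended speaker.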

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26247 - Posted: 05.18.2019

By Maggie Koerth-Baker Where is the loudest place in America? You might think New York City, or a major airport hub, or a concert you have suddenly become too old to appreciate. But that depends on what kind of noise you’re measuring. Sound is actually a physical thing. What we perceive as noise is the result of air molecules bumping into one another, like a Newton’s cradle toy. That movement eventually reaches our eardrums, which turn that tiny wiggle into an audible signal. But human ears can’t convert all molecular motion to sound. Sometimes the particles are jostling one another too fast. Sometimes they’re too slow. Sometimes, the motion is just happening in the wrong medium — through the Earth, say, instead of through the air. And when you start listening for the sounds we can’t hear, the loudest place in America can end up being right under your feet. Scientists have tools that can detect these “silent” waves, and they’ve found a lot of noise happening all over the U.S. Those noises are made by the cracking of rocks deep in the Earth along natural fault lines and the splashing of whitecaps on the ocean. But they’re also made by our factories, power plants, mines and military. “Any kind of mechanical process is going to generate energetic waves,” said Omar Marcillo, staff scientist at Los Alamos National Laboratory. “Some of that goes through the atmosphere as acoustic waves, and some goes through the ground as seismic waves.” Marcillo’s work focuses on the seismic. © 2019 ABC News Internet Ventures.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26227 - Posted: 05.11.2019