Links for Keyword: Hearing

Follow us on Facebook or subscribe to our mailing list to receive news updates. Learn more.


Links 41 - 60 of 735

Bill Chappell | People with mild or moderate hearing loss could soon be able to buy hearing aids without a medical exam or special fitting, under a new rule being proposed by the Food and Drug Administration. The agency says 37.5 million American adults have difficulty hearing. "Today's move by FDA takes us one step closer to the goal of making hearing aids more accessible and affordable for the tens of millions of people who experience mild to moderate hearing loss," Health and Human Services Secretary Xavier Becerra said as he announced the proposed rule on Tuesday. There is no timeline yet for when consumers might be able to buy an FDA-regulated over-the-counter (OTC) hearing aid. The proposed rule is now up for 90 days of public comment. The Hearing Loss Association of America, a consumer advocacy group, welcomed the proposal. "This is one step closer to seeing OTC hearing devices on the market," Barbara Kelley, the group's executive director, said in an email to NPR. "We hope adults will be encouraged to take that important first step toward good hearing health." Advocates and lawmakers have been calling for OTC hearing aids for years, including in a big push in 2017, when Sen. Elizabeth Warren, D-Mass., and co-sponsor Sen. Chuck Grassley, R-Iowa, introduced the bipartisan Over-the-Counter Hearing Aid Act. The legislators are now praising the FDA's move. © 2021 npr

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 28042 - Posted: 10.20.2021

By Christiane Gelitz, Maddie Bender | To a chef, the sounds of lip smacking, slurping and swallowing are the highest form of flattery. But to someone with a certain type of misophonia, these same sounds can be torturous. Brain scans are now helping scientists start to understand why. People with misophonia experience strong discomfort, annoyance or disgust when they hear particular triggers. These can include chewing, swallowing, slurping, throat clearing, coughing and even audible breathing. Researchers previously thought this reaction might be caused by the brain overactively processing certain sounds. Now, however, a new study published in the Journal of Neuroscience has linked some forms of misophonia to heightened “mirroring” behavior in the brain: those affected feel distress while their brains act as if they are mimicking the triggering mouth movements. “This is the first breakthrough in misophonia research in 25 years,” says psychologist Jennifer J. Brout, who directs the International Misophonia Research Network and was not involved in the new study. The research team, led by Newcastle University neuroscientist Sukhbinder Kumar, analyzed brain activity in people with and without misophonia when they were at rest and while they listened to sounds. These included misophonia triggers (such as chewing), generally unpleasant sounds (like a crying baby), and neutral sounds. The brain's auditory cortex, which processes sound, reacted similarly in subjects with and without misophonia. But in both the resting state and listening trials, people with misophonia showed stronger connections between the auditory cortex and brain regions that control movements of the face, mouth and throat. Kumar found this connection became most active in participants with misophonia when they heard triggers specific to the condition. © 2021 Scientific American,

Related chapters from BN: Chapter 8: General Principles of Sensory Processing, Touch, and Pain; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 14: Attention and Higher Cognition
Link ID: 27955 - Posted: 08.21.2021

Allison Aubrey | Imagine a sound that travels with you no matter where you go. Whether it's a ring, a whoosh or a cricket-like buzz, you can't escape it. "Mine was like this high-pitched sonic sound," says Elizabeth Fraser, who developed tinnitus last fall. It came on suddenly at a time when many people delayed doctor visits due to the coronavirus pandemic. "It just felt like an invasion in my head, so I was really distressed," Fraser recalls. Tinnitus is the perception of ringing when, in fact, no external sound is being produced. "You can equate it to a phantom sound," explains Sarah Sydlowski, a doctor of audiology at Cleveland Clinic. The Centers for Disease Control and Prevention estimates that 20 million Americans have chronic tinnitus. And studies show the pandemic ushered in both new cases and a worsening of the condition among people who already had it. The British Tinnitus Association reported a surge in the number of people accessing its services, including a 256% increase in the number of web chats amid the pandemic. © 2021 npr

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27877 - Posted: 06.26.2021

By Lisa Sanders, M.D. The dental hygienist greeted her longtime patient enthusiastically. Unexpectedly, the 68-year-old woman burst into tears. “I feel so bad,” she said, her voice cracking with emotion. “I’m worried I might be dying.” She was always tired, as if all her energy had been sucked out. And she felt a strange dread that something awful was happening to her. And if that weren’t enough, for the past couple of weeks she had lost much of her hearing in her right ear. She was sure she had a brain tumor — though none of her doctors thought so. After offering sympathy, the dental assistant realized she had something more to offer: “We have a dental CT scanner. Should we get a CT of your head?” The patient was amazed. Yes — she would very much like a CT scan of her head. It would cost her $150, the technician told her. At that point, it seemed like a bargain. And, just like that, it was done. And there was a mass. It wasn’t on the right side, where she thought her trouble lay. It was on the left. And it wasn’t in her ear, but in the sinus behind her cheek. That was confusing. She thanked the tech for the scan. She had an ENT and would send the images to him to see what he thought. That right ear had been giving the patient trouble for more than 20 years, she reminded her ear, nose and throat doctor in Prescott, Ariz., when she spoke with him. In her 40s she developed terrible vertigo. She was living in Atlanta then and saw an ENT there who told her she probably had Ménière’s disease, a disorder induced by increased pressure in the inner ear. The cause is unknown, though in some cases it appears to run in families. And it’s characterized by intermittent episodes of vertigo usually accompanied by a sensation of fullness in the ear, as well as tinnitus and hearing loss. These symptoms can be present from the start, but often develop over time. There’s no definitive test for the disease, though evidence of the increased pressure is sometimes visible on an M.R.I. 
© 2021 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27753 - Posted: 03.31.2021

By Carolyn Gramling The fin whale’s call is among the loudest in the ocean: It can even penetrate into Earth’s crust, a new study finds. Echoes in whale songs recorded by seismic instruments on the ocean floor reveal that the sound waves pass through layers of sediment and underlying rock. These songs can help probe the structure of the crust when more conventional survey methods are not available, researchers report in the Feb. 12 Science. Six songs, all from a single whale that sang as it swam, were analyzed by seismologists Václav Kuna of the Czech Academy of Sciences in Prague and John Nábělek of Oregon State University in Corvallis. They recorded the songs, lasting from 2.5 to 4.9 hours, in 2012 and 2013 with a network of 54 ocean-bottom seismometers in the northeast Pacific Ocean. The songs of fin whales (Balaenoptera physalus) can be up to 189 decibels, as noisy as a large ship. Seismic instruments detect the sound waves of the song, just like they pick up pulses from earthquakes or from air guns used for ship-based surveys. The underwater sounds can also produce seismic echoes: When sound waves traveling through the water meet the ground, some of the waves’ energy converts into a seismic wave (SN: 9/17/20). Those seismic waves can help scientists “see” underground: As the penetrating waves bounce off different rock layers, researchers can estimate the thickness of the layers. Changes in the waves’ speed can also reveal what types of rocks the waves traveled through. © Society for Science & the Public 2000–2021.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Language and Lateralization
Link ID: 27686 - Posted: 02.13.2021
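The crust-probing step described above rests on simple two-way travel time: an echo reflecting off the base of a layer travels down and back, so layer thickness is the wave speed times half the round-trip delay. A minimal sketch of that arithmetic (the function name and the speed and delay values are illustrative, not figures from the paper):

```python
def layer_thickness(two_way_time_s, wave_speed_m_s):
    """Thickness of a layer from an echo's round-trip delay:
    the wave crosses the layer twice (down and back)."""
    return wave_speed_m_s * two_way_time_s / 2.0

# e.g. a 0.4 s echo delay through sediment carrying waves at ~1,800 m/s:
print(layer_thickness(0.4, 1800.0))  # 360.0 (meters)
```

Changes in wave speed from layer to layer are what additionally let researchers infer rock type, as the article notes.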

By Matthew Hutson Somehow, even in a room full of loud conversations, our brains can focus on a single voice in something called the cocktail party effect. But the louder it gets—or the older you are—the harder it is to do. Now, researchers may have figured out how to fix that—with a machine learning technique called the cone of silence. Computer scientists trained a neural network, which roughly mimics the brain’s wiring, to locate and separate the voices of several people speaking in a room. The network did so in part by measuring how long it took for the sounds to hit a cluster of microphones in the room’s center. When the researchers tested their setup with extremely loud background noise, they found that the cone of silence located two voices to within 3.7° of their sources, they reported this month at the online-only Conference on Neural Information Processing Systems. That compares with a sensitivity of only 11.5° for the previous state-of-the-art technology. When the researchers trained their new system on additional voices, it managed the same trick with eight voices—to a sensitivity of 6.3°—even if it had never heard more than four at once. Such a system could one day be used in hearing aids, surveillance setups, speakerphones, or laptops. The new technology, which can also track moving voices, might even make your Zoom calls easier, by separating out and silencing background noise, from vacuum cleaners to rambunctious children. © 2020 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27628 - Posted: 12.19.2020
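The localization idea in the story above (timing when a sound reaches different microphones) is the classic time-difference-of-arrival calculation. A minimal two-microphone sketch under a far-field assumption; the function and the values are illustrative, not the paper's neural network:

```python
import math

def doa_from_delay(delta_t_s, mic_distance_m, sound_speed_m_s=343.0):
    """Direction of arrival (degrees off the array's broadside axis)
    from the delay between two microphones, far-field assumption."""
    s = sound_speed_m_s * delta_t_s / mic_distance_m
    s = max(-1.0, min(1.0, s))  # clamp tiny numerical overshoot
    return math.degrees(math.asin(s))

# A source 30 degrees off-axis, with mics 20 cm apart, produces this delay:
delay = 0.20 * math.sin(math.radians(30.0)) / 343.0
print(round(doa_from_delay(delay, 0.20), 1))  # 30.0
```

The reported 3.7° accuracy is what the trained network achieves on top of such timing cues in noisy, multi-speaker conditions.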

By Paula Span By now, we were supposed to be swiftly approaching the day when we could walk into a CVS or Walgreens, a Best Buy or Walmart, and walk out with a pair of quality, affordable hearing aids approved by the Food and Drug Administration. Hearing aids, a widely needed but dauntingly expensive investment, cost on average $4,700 a pair. (Most people need two.) So in 2017, Congress passed legislation allowing the devices to be sold directly to consumers, without a prescription from an audiologist. The next step was for the F.D.A. to issue draft regulations to establish safety and effectiveness benchmarks for these over-the-counter devices. Its deadline: August 2020. A public comment period would follow, and then — right about now — the agency would be preparing its final rule, to take effect in May 2021. So by next summer, people with what is known as “perceived mild to moderate hearing loss” might need to spend only one-quarter of today’s price or less, maybe far less. And then we could have turned down the TV volume and stopped making dinner reservations for 5:30 p.m., when restaurants are mostly empty and conversations are still audible. “These regulations are going to help a lot of people,” said Dr. Vinay Rathi, an otolaryngologist at Massachusetts Eye and Ear. “There could be great potential for innovation.” So, where are the new rules? This long-sought alternative to the current state of hearing aid services has been delayed, perhaps one more victim of the pandemic. Of course, the agency has other crucial matters to address just now. Although the office charged with hearing aid regulations is not the one assessing Covid-19 vaccines, an F.D.A. spokesman said via email that it was dealing with “an unprecedented volume of emergency use authorizations” for diagnostics, ventilators and personal protective equipment. © 2020 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27626 - Posted: 12.15.2020

By Mekado Murphy Creating an audioscape for a movie about a musician losing his hearing is more complicated than it may seem. The filmmakers behind the new drama “Sound of Metal” wanted to take audiences into the experience of its lead character, Ruben (Riz Ahmed), a punk-metal drummer who is forced to look at his life differently as he goes deaf. Judging by the overwhelmingly positive reviews, the filmmakers pulled off that difficult feat. In The New York Times, Jeannette Catsoulis raved about “an extraordinarily intricate sound design that allows us to borrow Ruben’s ears.” The film (streaming on Amazon) often places us in Ruben’s aural perspective as he navigates his new reality. (It’s worth watching with headphones or a good sound system.) “I had many conversations with people who have lost their hearing, and no two people’s experience is the same,” said Darius Marder, the film’s co-writer and director. “But one thing that’s pretty much true for all people who are deaf is that they don’t lose sound entirely. It isn’t silence.” Instead, Marder and his sound designer, Nicolas Becker, wanted to capture those low-frequency vibrations and other tones. The approach was adjusted for different moments in Ruben’s experience. In separate Zoom interviews, Marder and Becker focused on three scenes as they spoke about some of the techniques and ideas they used to tap into Ruben’s aural experience, including putting microphones inside skulls and mouths. One of the first times there’s a notable change in Ruben’s hearing comes before a show, as he is setting up the merchandise table with his bandmate and girlfriend, Lou (Olivia Cooke). At one point, he experiences a high-pitched ringing, then voices are muffled. Ahmed’s response in that moment isn’t just acting. The filmmakers had custom-fit earpieces made for the actor so they could feed him a high-frequency sound they had created. © 2020 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27606 - Posted: 12.05.2020

By Lucy Hicks Ogre-faced spiders might be an arachnophobe’s worst nightmare. The enormous eyes that give them their name allow them to see 2000 times better than we can at night. And these creepy crawlers are lightning-fast predators, snatching prey in a fraction of a second with mini, mobile nets. Now, new research suggests these arachnids use their legs not only to scuttle around, but also to hear. In light of their excellent eyesight, this auditory skill “is a surprise,” says George Uetz, who studies the behavioral ecology of spiders at the University of Cincinnati and wasn’t involved in the new research. Spiders don’t have ears—generally a prerequisite for hearing. So, despite the vibration-sensing hairs and receptors on most arachnids’ legs, scientists long thought spiders couldn’t hear sound as it traveled through the air, but instead felt vibrations through surfaces. The first clue they might be wrong was a 2016 study that found that a species of jumping spider can sense vibrations in the air from sound waves. Enter the ogre-faced spider. Rather than build a web and wait for their prey, these fearsome hunters “take a much more active role,” says Jay Stafstrom, a sensory ecologist at Cornell University. The palm-size spiders hang upside down from small plants on a silk line and create a miniweb across their four front legs, which they use as a net to catch their next meal. The spiders either lunge at bugs wandering below or flip backward to ensnare flying insects midair. © 2020 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27559 - Posted: 10.31.2020

Frank R. Lin, M.D., Ph.D. When I was going through my otolaryngology residency at Johns Hopkins in the early 2000s, I was struck by the disparity between how hearing loss was managed in children and in older adults. In the case of the child, it was a medical priority to ensure access to a hearing aid so he or she could communicate optimally at home and in school, and such devices were covered by insurance. This approach was justified based on extensive research demonstrating that hearing loss could have a substantial impact on a child’s cognitive and brain development, with lifetime consequences for educational and vocational achievement. For the older adult, the approach was radically different, even if the degree of hearing impairment was the same as in the child. The adult would be reassured that the deficit was to be expected, based on his or her age, and told that a hearing aid, if desired, would represent an out-of-pocket expense averaging about $4,000. Medicare provided no coverage for hearing aids. There was no robust research demonstrating meaningful consequences of hearing loss for older adults, as there was for children, and the clinical approach was typically guided by the notion that it was a very common, and hence inconsequential, aspect of aging. But this approach didn’t make sense, given what I had observed clinically. Older adults with hearing loss recounted to me their sense of isolation and loneliness, and the mental fatigue of constantly concentrating in trying to follow conversations. Family members would often describe a decline in patients’ general well-being and mental acuity as they struggled to hear. For those who obtained effective treatment for their hearing loss with hearing aids or a cochlear implant, the effects were often equally dramatic. Patients spoke of reengaging with family, no longer getting fatigued from straining to listen, and becoming their “old selves” again. 
If hearing was fundamentally important for children and represented a critical sensory input that could affect brain function, wouldn’t loss of hearing have corresponding implications for the aging brain and its function? © 2020 The Dana Foundation.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: Development of the Brain
Link ID: 27525 - Posted: 10.16.2020

By Cathleen O’Grady Tinnitus—a constant ringing or buzzing in the ears that affects about 15% of people—is difficult to understand and even harder to treat. Now, scientists have shown that shocking the tongue—combined with a carefully designed sound program—can reduce symptoms of the disorder, not just while patients are being treated, but up to 1 year later. It’s “really important” work, says Christopher Cederroth, a neurobiologist at the University of Nottingham, University Park, who was not involved with the study. The finding, he says, joins other research that has shown “bimodal” stimulation—which uses sound alongside some kind of gentle electrical shock—can help the brain discipline misbehaving neurons. Hubert Lim, a biomedical engineer at the University of Minnesota, Twin Cities, hit on the role of the tongue in tinnitus by accident. A few years ago, he experimented with using a technique called deep brain stimulation to restore his patients’ hearing. When he inserted a pencil-size rod covered in electrodes directly into the brains of five patients, some of those electrodes landed slightly outside the target zone—a common problem with deep brain stimulation, Lim says. Later, when he started up the device to map out its effects on the brain, a patient who had been bothered by ringing ears for many years said, “Oh, my tinnitus! I can’t hear my tinnitus,” Lim recalls. With certain kinds of tinnitus, people hear real sounds. For instance, there might be repeated muscular contractions in the ear, Lim says. But for many people, it’s the brain that’s to blame, perceiving sounds that aren’t there. One potential explanation for the effect is that hearing loss causes the brain to overcompensate for the frequencies it can no longer hear. © 2020 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Higher Cognition
Link ID: 27517 - Posted: 10.10.2020

By Robert Martone When a concert opens with a refrain from your favorite song, you are swept up in the music, happily tapping to the beat and swaying with the melody. All around you, people revel in the same familiar music. You can see that many of them are singing, the lights flashing to the rhythm, while other fans are clapping in time. Some wave their arms over their head, and others dance in place. The performers and audience seem to be moving as one, as synchronized to one another as the light show is to the beat. A new paper in the journal NeuroImage has shown that this synchrony can be seen in the brain activities of the audience and performer. And the greater the degree of synchrony, the study found, the more the audience enjoys the performance. This result offers insight into the nature of musical exchanges and demonstrates that the musical experience runs deep: we dance and feel the same emotions together, and our neurons fire together as well. In the study, a violinist performed brief excerpts from a dozen different compositions, which were videotaped and later played back to a listener. Researchers tracked changes in local brain activity by measuring levels of oxygenated blood. (More oxygen suggests greater activity, because the body works to keep active neurons supplied with it.) Musical performances caused increases in oxygenated blood flow to areas of the brain related to understanding patterns, interpersonal intentions and expression. Data for the musician, collected during a performance, were compared with those for the listener during playback. In all, there were 12 selections of familiar musical works, including “Edelweiss,” Franz Schubert’s “Ave Maria,” “Auld Lang Syne” and Ludwig van Beethoven’s “Ode to Joy.” The brain activities of 16 listeners were compared to that of a single violinist. © 2020 Scientific American,

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 27277 - Posted: 06.03.2020

A team of researchers has generated a developmental map of a key sound-sensing structure in the mouse inner ear. Scientists at the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health, and their collaborators analyzed data from 30,000 cells from the mouse cochlea, the snail-shaped structure of the inner ear. The results provide insights into the genetic programs that drive the formation of cells important for detecting sounds. The study also sheds light specifically on the underlying cause of hearing loss linked to Ehlers-Danlos syndrome and Loeys-Dietz syndrome. The study data is shared on a unique platform open to any researcher, creating an unprecedented resource that could catalyze future research on hearing loss. Led by Matthew W. Kelley, Ph.D., chief of the Section on Developmental Neuroscience at the NIDCD, the study appeared online in Nature Communications. The research team includes investigators at the University of Maryland School of Medicine, Baltimore; Decibel Therapeutics, Boston; and King’s College London. “Unlike many other types of cells in the body, the sensory cells that enable us to hear do not have the capacity to regenerate when they become damaged or diseased,” said NIDCD Director Debara L. Tucci, M.D., who is also an otolaryngology-head and neck surgeon. “By clarifying our understanding of how these cells are formed in the developing inner ear, this work is an important asset for scientists working on stem cell-based therapeutics that may treat or reverse some forms of inner ear hearing loss.”

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: Development of the Brain
Link ID: 27268 - Posted: 05.29.2020

Oliver Wainwright | Some whisper gently into the microphone, while tapping their nails along the spine of a book. Others take a bar of soap and slice it methodically into tiny cubes, letting the pieces clatter into a plastic tray. There are those who dress up as doctors and pretend to perform a cranial nerve exam, and the ones who eat food as noisily as they can, recording every crunch and slurp in 3D stereo sound. To an outsider, the world of ASMR videos can be a baffling, kooky place. In a fast-growing corner of the internet, millions of people are watching each other tap, rattle, stroke and whisper their way through hours of homemade videos, with the aim of being lulled to sleep, or in the hope of experiencing “the tingles” – AKA, the autonomous sensory meridian response. “It feels like a rush of champagne bubbles at the top of your head,” says curator James Taylor-Foster. “There’s a mild sense of euphoria and a feeling of deep calm.” Taylor-Foster has spent many hours trawling the weirdest depths of YouTube in preparation for a new exhibition, Weird Sensation Feels Good, at ArkDes, Sweden’s national centre for architecture and design, on what he sees as one of the most important creative movements to emerge from the internet. (Though the museum has been closed due to the coronavirus pandemic, the show will be available to view online.) It will be the first major exhibition about ASMR, a term that was coined a decade ago when cybersecurity expert Jennifer Allen was looking for a word to describe the warm effervescence she felt in response to certain triggers. She had tried searching the internet for things like “tingling head and spine” or “brain orgasm”. In 2009, she hit upon a post on a health message board titled WEIRD SENSATION FEELS GOOD. © 2020 Guardian News & Media Limited

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Higher Cognition
Link ID: 27169 - Posted: 04.04.2020

Jon Hamilton | A song fuses words and music. Yet the human brain can instantly separate a song's lyrics from its melody. And now scientists think they know how this happens. A team led by researchers at McGill University reported in Science Thursday that song sounds are processed simultaneously by two separate brain areas – one in the left hemisphere and one in the right. "On the left side you can decode the speech content but not the melodic content, and on the right side you can decode the melodic content but not the speech content," says Robert Zatorre, a professor at McGill University's Montreal Neurological Institute. The finding explains something doctors have observed in stroke patients for decades, says Daniela Sammler, a researcher at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, who was not involved in the study. "If you have a stroke in the left hemisphere you are much more likely to have a language impairment than if you have a stroke in the right hemisphere," Sammler says. Moreover, brain damage to certain areas of the right hemisphere can affect a person's ability to perceive music. The study was inspired by songbirds, Zatorre says. Studies show that their brains decode sounds using two separate measures. One assesses how quickly a sound fluctuates over time. The other detects the frequencies in a sound. © 2020 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27082 - Posted: 02.28.2020

By Jillian Kramer Scientists often test auditory processing in artificial, silent settings, but real life usually comes with a background of sounds like clacking keyboards, chattering voices and car horns. Recently researchers set out to study such processing in the presence of ambient sound—specifically the even, staticlike hiss of white noise. Their result is counterintuitive, says Tania Rinaldi Barkat, a neuroscientist at the University of Basel: instead of impairing hearing, a background of white noise made it easier for mice to differentiate between similar tones. Barkat is senior author of the new study, published last November in Cell Reports. It is easy to distinguish notes on opposite ends of a piano keyboard. But play two side by side, and even the sharpest ears might have trouble telling them apart. This is because of how the auditory pathway processes the simplest sounds, called pure frequency tones: neurons close together respond to similar tones, but each neuron responds better to one particular frequency. The degree to which a neuron responds to a certain frequency is called its tuning curve. The researchers found that playing white noise narrowed neurons’ frequency tuning curves in mouse brains. “In a simplified way, white noise background—played continuously and at a certain sound level—decreases the response of neurons to a tone played on top of that white noise,” Barkat says. And by reducing the number of neurons responding to the same frequency at the same time, the brain can better distinguish between similar sounds. © 2020 Scientific American,

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27074 - Posted: 02.26.2020
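The narrowing described above can be caricatured with Gaussian tuning curves: the narrower a neuron's curve, the more its response differs between two nearby tones. A toy sketch, where the widths and frequencies are made-up values for illustration, not data from the paper:

```python
import math

def tuning_response(tone, preferred, width):
    """Gaussian tuning curve: a neuron's response peaks at its
    preferred frequency and falls off with distance (in octaves)."""
    return math.exp(-((tone - preferred) ** 2) / (2.0 * width ** 2))

def response_difference(tone_a, tone_b, preferred, width):
    """How differently one neuron answers two nearby tones."""
    return abs(tuning_response(tone_a, preferred, width)
               - tuning_response(tone_b, preferred, width))

# Two tones 0.1 octave apart, neuron preferring the lower one:
broad = response_difference(1.0, 1.1, preferred=1.0, width=0.5)
narrow = response_difference(1.0, 1.1, preferred=1.0, width=0.1)
print(narrow > broad)  # True: narrower tuning separates the tones better
```

At the population level the same logic applies: fewer neurons responding to both tones at once makes the two tones easier to tell apart, which is the effect the white-noise background produced.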

By Kim Tingley Hearing loss has long been considered a normal, and thus acceptable, part of aging. It is common: Estimates suggest that it affects two out of three adults age 70 and older. It is also rarely treated. In the U.S., only about 14 percent of adults who have hearing loss wear hearing aids. An emerging body of research, however, suggests that diminished hearing may be a significant risk factor for Alzheimer’s disease and other forms of dementia — and that the association between hearing loss and cognitive decline potentially begins at very low levels of impairment. In November, a study published in the journal JAMA Otolaryngology — Head and Neck Surgery examined data on hearing and cognitive performance from more than 6,400 people 50 and older. Traditionally, doctors diagnose impairment when someone experiences a loss in hearing of at least 25 decibels, a somewhat arbitrary threshold. But for the JAMA study, researchers included hearing loss down to around zero decibels in their analysis and found that even these slight deficits predicted correspondingly lower scores on cognitive tests. “It seemed like the relationship starts the moment you have imperfect hearing,” says Justin Golub, the study’s lead author and an ear, nose and throat doctor at the Columbia University Medical Center and NewYork-Presbyterian. Now, he says, the question is: Does hearing loss actually cause the cognitive problems it has been associated with, and if so, how? Preliminary evidence linking dementia and hearing loss was published in 1989 by doctors at the University of Washington, Seattle, who compared 100 patients with Alzheimer’s-like dementia with 100 demographically similar people without it and found that those who had dementia were more likely to have hearing loss, and that the extent of that loss seemed to correspond with the degree of cognitive impairment.
But that possible connection wasn’t rigorously investigated until 2011, when Frank Lin, an ear, nose and throat doctor at Johns Hopkins School of Medicine, and colleagues published the results of a longitudinal study that tested the hearing of 639 older adults who were dementia-free and then tracked them for an average of nearly 12 years, during which time 58 had developed Alzheimer’s or another cognitive impairment. They discovered that a subject’s likelihood of developing dementia increased in direct proportion to the severity of his or her hearing loss at the time of the initial test. The relationship seems to be “very, very linear,” Lin says, meaning that the greater the hearing deficit, the greater the risk a person will develop the condition. © 2020 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: Development of the Brain
Link ID: 27057 - Posted: 02.20.2020

By Jane E. Brody Every now and then I write a column as much to push myself to act as to inform and motivate my readers. What follows is a prime example. Last year in a column entitled “Hearing Loss Threatens Mind, Life and Limb,” I summarized the current state of knowledge about the myriad health-damaging effects linked to untreated hearing loss, a problem that afflicts nearly 38 million Americans and, according to two huge recent studies, increases the risk of dementia, depression, falls and even cardiovascular diseases. Knowing that my own hearing leaves something to be desired, the research I did for that column motivated me to get a proper audiology exam. The results indicated that a well-fitted hearing aid could help me hear significantly better in the movies, theater, restaurants, social gatherings, lecture halls, even in the locker room where the noise of hair dryers, hand dryers and swimsuit wringers often challenges my ability to converse with my soft-spoken friends. That was six months ago, and I’ve yet to go back to get that recommended hearing aid. Now, though, I have a new source of motivation. A large study has documented that even among people with so-called normal hearing, those with only slightly poorer hearing than perfect can experience cognitive deficits. That means a diminished ability to get top scores on standardized tests of brain function, like matching numbers with symbols within a specified time period. But while you may never need or want to do that, you most likely do want to maximize and maintain cognitive function: your ability to think clearly, plan rationally and remember accurately, especially as you get older. While under normal circumstances, cognitive losses occur gradually as people age, the wisest course may well be to minimize and delay them as long as possible and in doing so, reduce the risk of dementia. 
Hearing loss is now known to be the largest modifiable risk factor for developing dementia, exceeding that of smoking, high blood pressure, lack of exercise and social isolation, according to an international analysis published in The Lancet in 2017. © 2019 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26923 - Posted: 12.30.2019

By Carolyn Gramling Exceptionally preserved skulls of a mammal that lived alongside the dinosaurs may be offering scientists a glimpse into the evolution of the middle ear. The separation of the three tiny middle ear bones — known popularly as the hammer, anvil and stirrup — from the jaw is a defining characteristic of mammals. The evolutionary shift of those tiny bones, which started out as joints in ancient reptilian jaws and ultimately split from the jaw completely, gave mammals greater sensitivity to sound, particularly at higher frequencies (SN: 3/20/07). But finding well-preserved skulls from ancient mammals that can help reveal the timing of this separation is a challenge. Now, scientists have six specimens — four nearly complete skeletons and two fragmented specimens — of a newly described, shrew-sized critter dubbed Origolestes lii that lived about 123 million years ago. O. lii was part of the Jehol Biota, an ecosystem of ancient wetlands-dwellers that thrived between 133 million and 120 million years ago in what’s now northeastern China. The skulls on the nearly complete skeletons were so well-preserved that they could be examined in 3-D, say paleontologist Fangyuan Mao of the Chinese Academy of Sciences in Beijing and colleagues. That analysis suggests that O. lii’s middle ear bones were fully separated from its jaw, the team reports online December 5 in Science. Fossils from an older, extinct line of mammals have shown separated middle ear bones, but this newfound species would be the first of a more recent lineage to exhibit this evolutionary advance. © Society for Science & the Public 2000–2019

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26880 - Posted: 12.07.2019

By Jade Wu What do the sounds of whispering, crinkling paper, and tapping fingernails have in common? What about the sight of soft paint brushes on skin, soap being gently cut to pieces, and hand movements like turning the pages of a book? Well, if you are someone who experiences the autonomous sensory meridian response—or ASMR, for short—you may recognize these seemingly ordinary sounds and sights as “triggers” for the ASMR experience. No idea what I’m talking about? Don’t worry, you’re actually in the majority. Most people, myself included, aren’t affected by these triggers. But what happens to those who are? What is the ASMR experience? It’s described as a pleasantly warm and tingling sensation that starts on the scalp and moves down the neck and spine. ASMR burst onto the Internet scene in 2007, according to Wikipedia, when a woman with the username “okaywhatever” described her experience of ASMR sensations in an online health discussion forum. At the time, there was no name for this weird phenomenon. But by 2010, someone called Jennifer Allen had named the experience, and from there, ASMR became an Internet sensation. Today, there are hundreds of ASMR YouTubers who collectively post over 200 videos of ASMR triggers per day, as reported by a New York Times article in April 2019. Some ASMR YouTubers have become bona fide celebrities with ballooning bank accounts, millions of fans, and enough fame to be stopped on the street for selfies. There’s been some controversy. Some people doubt whether the ASMR experience is “real,” or just the result of recreational drugs or imagined sensations. Some have chalked the phenomenon up to a symptom of loneliness among Generation Z, who get their dose of intimacy from watching strangers pretend to do their makeup without having to interact with real people. Some people are even actively put off by ASMR triggers. One of my listeners, Katie, said that most ASMR videos just make her feel agitated.
But another listener, Candace, shared that she has been unknowingly chasing ASMR since she was a child watching BBC. © 2019 Scientific American

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 26873 - Posted: 12.05.2019