Links for Keyword: Hearing




Hannah Devlin, science correspondent
A mind-controlled hearing aid that allows the wearer to focus on particular voices has been created by scientists, who say it could transform the ability of those with hearing impairments to cope with noisy environments. The device mimics the brain’s natural ability to single out and amplify one voice against background conversation. Until now, even the most advanced hearing aids have worked by boosting all voices at once, which can be experienced as a cacophony of sound for the wearer, especially in crowded environments. Nima Mesgarani, who led the latest advance at Columbia University in New York, said: “The brain area that processes sound is extraordinarily sensitive and powerful. It can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison.” That shortfall can severely hinder a wearer’s ability to join in conversations, making busy social occasions particularly challenging. Scientists have been working for years to resolve this problem, known as the cocktail party effect. The brain-controlled hearing aid appears to have cracked the problem using a combination of artificial intelligence and sensors designed to monitor the listener’s brain activity. The hearing aid first uses an algorithm to automatically separate the voices of multiple speakers. It then compares these audio tracks to the brain activity of the listener. Previous work by Mesgarani’s lab found that it is possible to identify which person someone is paying attention to, as their brain activity tracks the sound waves of that voice most closely. © 2019 Guardian News & Media Limited

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26247 - Posted: 05.18.2019
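The decoding loop described above (separate the voices, then check which one the listener's brain is tracking) can be sketched in a few lines. This is a minimal illustration rather than Mesgarani's implementation: it assumes the voices are already separated, it reduces the neural recording to a single envelope-like signal, and the plain Pearson correlation and the 9 dB boost are hypothetical choices.

```python
import numpy as np

def attended_speaker(sources, neural_signal):
    """Toy attention decoder: the attended voice is the separated
    source whose amplitude envelope correlates best with the
    listener's neural signal (which, per the article, tracks the
    attended voice most closely)."""
    envelopes = [np.abs(s) for s in sources]               # crude envelopes
    scores = [np.corrcoef(e, neural_signal)[0, 1] for e in envelopes]
    return int(np.argmax(scores))

def remix(sources, idx, boost_db=9.0):
    """Re-mix the acoustic scene with the attended voice amplified."""
    g = 10 ** (boost_db / 20)                              # dB to linear gain
    mix = sum(g * s if i == idx else s for i, s in enumerate(sources))
    return mix / np.max(np.abs(mix))                       # normalize, no clipping
```

In the actual device the separation is itself performed by a neural network and the neural signal comes from sensors monitoring the brain; only the compare-and-boost step is sketched here.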

By Maggie Koerth-Baker
Where is the loudest place in America? You might think New York City, or a major airport hub, or a concert you have suddenly become too old to appreciate. But that depends on what kind of noise you’re measuring. Sound is actually a physical thing. What we perceive as noise is the result of air molecules bumping into one another, like a Newton’s cradle toy. That movement eventually reaches our eardrums, which turn that tiny wiggle into an audible signal. But human ears can’t convert all molecular motion to sound. Sometimes the particles are jostling one another too fast. Sometimes they’re too slow. Sometimes, the motion is just happening in the wrong medium — through the Earth, say, instead of through the air. And when you start listening for the sounds we can’t hear, the loudest place in America can end up being right under your feet. Scientists have tools that can detect these “silent” waves, and they’ve found a lot of noise happening all over the U.S. Those noises are made by the cracking of rocks deep in the Earth along natural fault lines and the splashing of whitecaps on the ocean. But they’re also made by our factories, power plants, mines and military. “Any kind of mechanical process is going to generate energetic waves,” said Omar Marcillo, a staff scientist at Los Alamos National Laboratory. “Some of that goes through the atmosphere as acoustic waves, and some goes through the ground as seismic waves.” Marcillo’s work focuses on the seismic. © 2019 ABC News Internet Ventures.

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26227 - Posted: 05.11.2019

By Jed Gottlieb
In 1983, The New York Times published a bombshell report about President Ronald Reagan: Starkey Laboratories had fitted the president, then 72, with a hearing aid. The news was welcomed by health professionals who reckoned it could help to reduce the stigma associated with hearing loss. At the time, one in three people over the age of 60 was thought to have hearing problems, though only around 20 percent of those who needed hearing aids used them. Indeed, Reagan’s handlers knew too well that the revelation risked making the president look like a feeble old man — and worse, someone ill-equipped to run the most powerful nation on earth. “Among Presidential advisers,” The New York Times noted, “Mr. Reagan’s use of a hearing aid revived speculation on whether his age would be an issue if he seeks re-election next year.” Reagan won re-election, of course, but nearly 40 years later, negative perceptions persist — and health advocates are more concerned than ever. Hearing loss, they say, is not just a functional disability affecting a subset of aging adults. With population growth and a boom in the global elderly population, the World Health Organization (WHO) now estimates that by 2050, more than 900 million people will have disabling hearing loss. A 2018 study of 3,316 children aged nine to 11, meanwhile, found that 14 percent already had signs of hearing loss. While not conclusive, the study linked the loss to the rise of portable music players. Copyright 2019 Undark

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 26124 - Posted: 04.09.2019

Emily Conover
Lasers can send sounds straight to a listener’s ear, like whispering a secret from afar. Using a laser tuned to interact with water vapor in the air, scientists created sounds in a localized spot that were loud enough to be picked up by human hearing if aimed near a listener’s ear. It’s the first time such a technique can be used safely around humans, scientists from MIT Lincoln Laboratory in Lexington, Mass., report in the Feb. 1 Optics Letters. At the wavelengths and intensities used, the laser won’t cause burns if it grazes eyes or skin. The scientists tested out the setup on themselves in the laboratory, putting their ears near the beam to pick up the sound. “You move your head around, and there’s a couple-inch zone where you go ‘Oh, there it is!’… It’s pretty cool,” says physicist Charles Wynn. The researchers also used microphones to capture and analyze the sounds. The work relies on a phenomenon called the photoacoustic effect, in which pulses of light are converted into sound when absorbed by a material, in this case, water vapor. Based on this effect, the researchers used two different techniques to make the sounds. The first technique, which involves rapidly ramping the intensity of the laser beam up and down, can transmit voices and songs. “You can hear the music really well; you can understand what people are saying,” says physicist Ryan Sullenberger, who coauthored the study along with Wynn and physicist Sumanth Kaushik. © Society for Science & the Public 2000 - 2019.

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25925 - Posted: 02.02.2019
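The first technique in the study amounts to amplitude-modulating the laser's intensity with the audio waveform. Here is a minimal sketch of that modulation step; the sample rate, mean intensity I0 and modulation depth m are illustrative values, not numbers from the paper.

```python
import numpy as np

fs = 48_000                              # sample rate, Hz
t = np.arange(fs) / fs                   # one second of samples
audio = np.sin(2 * np.pi * 440 * t)      # test signal: a 440 Hz tone

# Ramp the laser intensity up and down in step with the audio.
# Water vapor absorbs the modulated light and heats the air
# periodically, and that expansion radiates sound near the beam
# (the photoacoustic effect). Keeping m < 1 keeps the commanded
# intensity non-negative.
I0 = 1.0                                 # mean intensity, arbitrary units
m = 0.9                                  # modulation depth
intensity = I0 * (1 + m * audio)
```

The study's second technique is only alluded to in the excerpt and is not modeled here.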

By Jane E. Brody
The earsplitting sound of ambulance sirens in New York City is surely hastening the day when I and many others repeatedly subjected to such noise will be forced to get hearing aids. I just hope this doesn’t happen before 2021 or so, when these devices become available over the counter and are far less expensive and perhaps more effective than they are now. Currently, hearing aids and accompanying services are not covered by medical insurance, Medicare included. Such coverage was specifically excluded when the Medicare law was passed in 1965, a time when hearing loss was not generally recognized as a medical issue and hearing aids were not very effective, said Dr. Frank R. Lin, who heads the Cochlear Center for Hearing and Public Health at the Johns Hopkins Bloomberg School of Public Health. Now a growing body of research by his colleagues and others is linking untreated hearing loss to several costly ills, and the time has come for hearing protection and treatment of hearing loss to be taken much more seriously. Poor hearing is not only annoying and inconvenient for millions of people, especially the elderly. It is also an unmistakable health hazard, threatening mind, life and limb, and its consequences could cost Medicare far more than providing hearing aids and services for every older American with hearing loss would. Currently, 38.2 million Americans aged 12 or older have hearing loss, a problem that becomes increasingly common and more severe with age. More than half of people in their 70s and more than 80 percent in their 80s have mild to moderate hearing loss or worse, according to tests done by the National Health and Nutrition Examination Survey between 2001 and 2010. Two huge new studies have demonstrated a clear association between untreated hearing loss and an increased risk of dementia, depression, falls and even cardiovascular diseases. In a significant number of people, the studies indicate, uncorrected hearing loss itself appears to be the cause of the associated health problem. © 2018 The New York Times Company

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 25834 - Posted: 01.01.2019

By Robert Zatorre, Ph.D.
Human beings seem to have innate musicality. That is, the capacity to understand and derive pleasure from complex musical patterns appears to be culturally universal.1 Musicality is expressed very early in development.2 In this sense, music may be compared to speech—the other cognitively interesting way that we use sound. But whereas speech is most obviously important for communicating propositions or concepts, conveying such knowledge is not the primary function of music. Rather, it is music’s power to communicate emotions, moods, or affective mental states that seems beneficial to our quality of life. Which brings us to the question that forms the title of this article: why do we love music? On its face, there is no apparent reason why a sequence or pattern of sounds that has no specific propositional meaning should elicit any kind of pleasurable response. Yet music is widely considered to be among our greatest joys.3 Where does this phenomenon come from? There are several approaches to this question. A musicologist might have a very different answer than a social scientist. Since I’m a neuroscientist, I would like to address it from that perspective—recognizing that other perspectives may also offer valuable insights. An advantage of neuroscience is that we can relate our answer to established empirical findings and draw from two especially relevant domains: the neuroscience of auditory perception and of the reward system. To give away the punch line of my article, I believe that music derives its power from an interaction between these two systems, the first of which allows us to analyze sound patterns and make predictions about them, and the second of which evaluates the outcomes of these predictions and generates positive (or negative) emotions depending on whether the expectation was met, not met, or exceeded. © 2018 The Dana Foundation

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 25832 - Posted: 01.01.2019
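Zatorre's closing claim has a compact computational reading: one system predicts the unfolding sound pattern, and the reward system scores the outcome of each prediction. The toy functions below illustrate only that scoring logic; the numeric scales are made up, and nothing here is the author's actual model.

```python
import math

def reward_prediction_error(expected, experienced):
    """Positive when a musical expectation is exceeded, zero when it
    is merely met, negative when it is violated."""
    return experienced - expected

def surprisal(p_event):
    """Information-theoretic surprise (in bits) of the note that
    actually occurred, given the listener's predicted probability."""
    return -math.log2(p_event)

# An unexpected but satisfying resolution: improbable (2 bits of
# surprise) yet better than expected (positive prediction error).
print(surprisal(0.25))                        # 2.0
print(reward_prediction_error(0.5, 0.8))      # 0.3
```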

Jennifer Leman
Some moths aren’t so easy for bats to detect. The cabbage tree emperor moth has wings with tiny scales that absorb sound waves sent out by bats searching for food. That absorption reduces the echoes that bounce back to bats, allowing Bunaea alcinoe to avoid being so noticeable to the nocturnal predators, researchers report online November 12 in the Proceedings of the National Academy of Sciences. “They have this stealth coating on their body surfaces which absorbs the sound,” says study coauthor Marc Holderied, a bioacoustician at the University of Bristol in England. “We now understand the mechanism behind it.” Bats sense their surroundings using echolocation, sending out sound waves that bounce off objects and return as echoes picked up by the bats’ supersensitive ears (SN: 9/30/17, p. 22). These moths, without ears that might alert them to an approaching predator, have instead developed scales of a size, shape and thickness suited to absorbing ultrasonic sound frequencies used by bats, the researchers found. The team shot ultrasonic sound waves at a single, microscopic scale and observed it transferring sound wave energy into movement. The scientists then simulated the process with a 3-D computer model that showed the scale absorbing up to 50 percent of the energy from sound waves. What’s more, it isn’t just wings that help such earless moths evade bats. Other moths in the same family as B. alcinoe also have sound-absorbing fur, the same researchers report online October 18 in the Journal of the Acoustical Society of America. © Society for Science & the Public 2000 - 2018

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25679 - Posted: 11.14.2018

By Jane E. Brody
Jane R. Madell, a pediatric audiology consultant and speech-language pathologist in Brooklyn, N.Y., wants every parent with a child who is born hearing-impaired to know that it is now possible for nearly all children with severe hearing loss to learn to listen and speak as if their hearing were completely normal. “Children identified with hearing loss at birth and fitted with technology in the first weeks of life blend in so well with everyone else that people don’t realize there are so many deaf children,” she told me. With the appropriate hearing device and auditory training for children and their caregivers during the preschool years, even those born deaf “will have the ability to learn with their peers when they start school,” Dr. Madell said. “Eighty-five percent of such children are successfully mainstreamed. Parents need to know that listening and spoken language is a possibility for their children.” Determined to get this message out to all who learn their children lack normal hearing, Dr. Madell and Irene Taylor Brodsky produced a documentary, “The Listening Project,” to demonstrate the enormous help available through modern hearing assists and auditory training. Among the “stars” in the film, all of whom grew up deaf or severely hearing-impaired, are Dr. Elizabeth Bonagura, an obstetrician-gynecologist and surgeon; Jake Spinowitz, a musician; Joanna Lippert, a medical social worker; and Amy Pollick, a psychologist. All started out with hearing aids that helped them learn to speak and understand spoken language. But now all have cochlear implants that, as Ms. Lippert put it, “really revolutionized my world” when, at age 11, she became the first preteen to get a cochlear implant at New York University Medical Center. © 2018 The New York Times Company

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25541 - Posted: 10.08.2018

Craig Richard
Have you ever stumbled upon an hourlong online video of someone folding napkins? Or maybe crinkling paper, sorting a thimble collection or pretending to give the viewer an ear exam? They’re called ASMR videos, and millions of people love them and consider watching them a fantastic way to relax. Other viewers count them among the strangest things on the internet. So are they relaxing or strange? I think they are both, which is why I have been fascinated with trying to understand ASMR for the past five years. In researching my new book “Brain Tingles,” I explored the many mysteries about ASMR as well as best practices for incorporating ASMR into various aspects of life, like parenting, spas and health studios. ASMR is short for Autonomous Sensory Meridian Response. Enthusiast Jennifer Allen coined the term in 2010. You may also hear this phenomenon called “head orgasms” or “brain tingles.” It’s distinct from the “aesthetic chills” or frisson some people experience when listening to music, for instance. People watch ASMR videos in hopes of eliciting the response, usually experienced as a deeply relaxing sensation with pleasurable tingles in the head. It can feel like the best massage in the world – but without anyone touching you. Imagine watching an online video while your brain turns into a puddle of bliss. The actions and sounds in ASMR videos mostly recreate moments in real life that people have discovered spark the feeling. These stimuli are called ASMR triggers. They usually involve receiving personal attention from a caring person. Associated sounds are typically gentle and non-threatening. © 2010–2018, The Conversation US, Inc.

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 25498 - Posted: 09.27.2018

By James Gorman
It’s not easy to help ducks. Ask Kate McGrew, a master’s student in wildlife ecology at the University of Delaware. Over two seasons, 2016 and 2017, she spent months raising and working with more than two dozen hatchlings from three different species, all to determine what they hear underwater. This was no frivolous inquiry. Sea ducks, like the ones she trained, dive to catch their prey in oceans around the world and are often caught unintentionally in fish nets and killed. Christopher Williams, a professor at the university who is Ms. McGrew’s adviser, said one estimate puts the number of ducks killed at sea at 400,000 a year, although he said the numbers are hard to pin down. A similar problem plagues marine mammals, like whales, and acoustic devices have been developed to send out pings that warn them away from danger. A similar tactic might work with diving ducks, but first, as Dr. Williams said, it would make sense to answer a question that science hasn’t even asked about diving ducks: “What do they hear?” “There actually is little to no research done on duck hearing in general,” Ms. McGrew said, “and on the underwater aspect of it, there’s even less.” That’s the recipe for a perfect, although demanding, research project. Her goal was to use three common species of sea ducks to study a good range of underwater hearing ability. But while you can lead a duck to water and it will paddle around naturally, teaching it to take a hearing test is another matter entirely. © 2018 The New York Times Company

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25389 - Posted: 08.28.2018

Abby Olena
Scientists have been looking for years for the proteins that convert the mechanical movement of inner ears’ hair cells into an electrical signal that the brain interprets as sound. In a study published today (August 22) in Neuron, researchers have confirmed that transmembrane channel-like protein 1 (TMC1) contributes to the pore of the so-called mechanotransduction channel in the cells’ membrane. “The identification of the channel has been missing for a long time,” says Anthony Peng, a neuroscientist at the University of Colorado Denver who did not participate in the study. This work “settles the debate as to whether or not [TMC1] is a pore-lining component of the mechanotransduction channel.” When a sound wave enters the cochlea, it wiggles protrusions called stereocilia on both outer hair cells, which amplify the signals, and inner hair cells, which convert the mechanical signals to electric ones and send them to the brain. It’s been tricky to figure out what protein the inner hair cells use for this conversion, because their delicate environment is difficult to recreate in vitro in order to test candidate channel proteins. In 2000, researchers reported on a promising candidate in flies, but it turned out not to be conserved in mammals. In a study published in 2011, Jeffrey Holt of Harvard Medical School and Boston Children’s Hospital and colleagues showed that genes for TMC proteins were necessary for mechanotransduction in mice. This evidence—combined with earlier work from another group showing that mutations in these genes could cause deafness in humans—pointed to the idea that TMC1 formed the ion channel in inner ear hair cells. © 1986 - 2018 The Scientist

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25372 - Posted: 08.24.2018

By Matthew Hutson
For millions who can’t hear, lip reading offers a window into conversations that would be lost without it. But the practice is hard—and the results are often inaccurate (as you can see in the Bad Lip Reading videos). Now, researchers are reporting a new artificial intelligence (AI) program that outperformed professional lip readers and the best AI to date, with just half the error rate of the previous best algorithm. If perfected and integrated into smart devices, the approach could put lip reading in the palm of everyone’s hand. “It’s a fantastic piece of work,” says Helen Bear, a computer scientist at Queen Mary University of London who was not involved with the project. Writing computer code that can read lips is maddeningly difficult. So in the new study, scientists turned to a form of AI called machine learning, in which computers learn from data. They fed their system thousands of hours of videos along with transcripts, and had the computer solve the task for itself. The researchers started with 140,000 hours of YouTube videos of people talking in diverse situations. Then, they designed a program that created clips a few seconds long with the mouth movement for each phoneme, or word sound, annotated. The program filtered out non-English speech, nonspeaking faces, low-quality video, and video that wasn’t shot straight ahead. Then, they cropped the videos around the mouth. That yielded nearly 4000 hours of footage, including more than 127,000 English words. © 2018 American Association for the Advancement of Science

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 25280 - Posted: 08.01.2018
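The data-cleaning stage described above (drop non-English speech, nonspeaking faces, low-quality video and off-angle shots) is an ordinary filter pipeline. The sketch below assumes per-clip metadata; the Clip fields and the quality threshold are hypothetical stand-ins for whatever detectors the researchers actually ran.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    language: str       # detected language of the speech
    speaking: bool      # does the visible mouth movement match the audio?
    frontal: bool       # was the face filmed straight ahead?
    quality: float      # video-quality score in [0, 1]

def keep(c: Clip, min_quality: float = 0.5) -> bool:
    """Apply the article's four filters in one pass."""
    return (c.language == "en" and c.speaking
            and c.frontal and c.quality >= min_quality)

raw = [
    Clip("en", True, True, 0.9),    # kept
    Clip("fr", True, True, 0.9),    # dropped: not English
    Clip("en", False, True, 0.8),   # dropped: face not speaking
    Clip("en", True, True, 0.2),    # dropped: low quality
]
dataset = [c for c in raw if keep(c)]
```

Cropping each surviving clip to the mouth region and annotating phonemes would happen downstream of this filter.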

By Juliet Corwin
On the deafness scale of mild, moderate, severe or profound, I am profoundly deaf. With the help of cochlear implants, I am able to “hear” and speak. The devices are complicated to explain, but basically, external sound processors, worn behind the ears, send a digital signal to the implants, which convert the signal to electric impulses that stimulate the hearing nerve and provide sound signals to the brain. The implants allow me to attend my middle school classes with few accommodations, but I’m still quite different from people who hear naturally. When my implant processors are turned off, I don’t hear anything. I regard myself as a deaf person, and I am proud to be among those who live with deafness, yet I often feel rejected by some of these same people. My use of cochlear implants and lack of reliance on American Sign Language (I use it but am not fluent — I primarily speak) are treated like a betrayal by many in the Deaf — capital-D — community. In the view of many who embrace Deaf culture, a movement that began in the 1970s, those who are integrated into the hearing world through technology, such as hearing aids or cochlear implants, myself included, are regarded as “not Deaf enough” to be a part of the community. People deaf from birth or through illness or injury already face discrimination. I wish we didn’t practice exclusion among ourselves. But it happens, and it’s destructive. © 1996-2018 The Washington Post

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25247 - Posted: 07.25.2018

Alison Abbott
On a sun-parched patch of land in Rehovot, Israel, two neuroscientists peer into the darkness of a 200-metre-long tunnel of their own design. The fabric panels of the snaking structure shimmer in the heat, while, inside, a study subject is navigating its dim length. Finally, out of the blackness bursts a bat, which executes a mid-air backflip to land upside down, hanging at the tunnel’s entrance. The vast majority of experiments probing navigation in the brain have been done in the confines of labs, using earthbound rats and mice. Nachum Ulanovsky broke with the convention. He constructed the flight tunnel on a disused plot on the grounds of the Weizmann Institute of Science — the first of several planned arenas — because he wanted to find out how a mammalian brain navigates a more natural environment. In particular, he wanted to know how brains deal with a third dimension. The tunnel, which Ulanovsky built in 2016, has already proved its scientific value. So have the bats. They have helped Ulanovsky to discover new aspects of the complex encoding of navigation — a fundamental brain function essential for survival. He has found a new cell type responsible for the bats’ 3D compass, and other cells that keep track of where other bats are in the environment. It is a hot area of study — navigation researchers won the 2014 Nobel Prize in Physiology or Medicine and the field is an increasingly prominent fixture at every big neuroscience conference. “Nachum’s boldness is impressive,” says Edvard Moser of the Kavli Institute for Systems Neuroscience in Trondheim, Norway, one of the 2014 Nobel laureates. “And it’s paid off — his approach is allowing important new questions to be addressed.” © 2018 Springer Nature Limited.

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25198 - Posted: 07.12.2018

By Elizabeth Pennisi
Bats and their prey are in a constant arms race. Whereas the winged mammals home in on insects with frighteningly accurate sonar, some of their prey—such as the tiger moth—fight back with sonar clicks and even jamming signals. Now, in a series of bat-moth skirmishes, scientists have shown how other moths create an “acoustic illusion,” with long wing-tails that fool bats into striking the wrong place. The finding helps explain why some moths have such showy tails, and it may also provide inspiration for drones of the future. Moth tails vary from species to species: Some have big lobes at the bottom of the hindwing instead of a distinctive tail; others have just a short protrusion. Still others have long tails that are thin strands with twisted cuplike ends. In 2015, sensory ecologist Jesse Barber of Boise State University in Idaho and colleagues discovered that some silk moths use their tails to confuse bat predators. Now, graduate student Juliette Rubin has shown just what makes the tails such effective deterrents. Working with three species of silk moths—luna, African moon, and polyphemus—Rubin shortened or cut off some of their hindwings and glued longer or differently shaped tails to others. She then tied the moths to a string hanging from the top of a large cage and released a big brown bat (Eptesicus fuscus) inside. She used high-speed cameras and microphones to record the ensuing fight. © 2018 American Association for the Advancement of Science.

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25173 - Posted: 07.05.2018

A small-molecule drug is one of the first to preserve hearing in a mouse model of an inherited form of progressive human deafness, report investigators at the University of Iowa, Iowa City, and the National Institutes of Health’s National Institute on Deafness and Other Communication Disorders (NIDCD). The study, which appears online in Cell, sheds light on the molecular mechanism that underlies a form of deafness (DFNA27), and suggests a new treatment strategy. “We were able to partially restore hearing, especially at lower frequencies, and save some sensory hair cells,” said Thomas B. Friedman, Ph.D., chief of the Laboratory of Human Molecular Genetics at the NIDCD, and a coauthor of the study. “If additional studies show that small-molecule-based drugs are effective in treating DFNA27 deafness in people, it’s possible that using similar approaches might work for other inherited forms of progressive hearing loss.” The seed for the advance was planted a decade ago, when NIDCD researchers led by Friedman and Robert J. Morell, Ph.D., another coauthor of the current study, analyzed the genomes of members of an extended family, dubbed LMG2. Deafness is genetically dominant in the LMG2 family, meaning that a child needs to inherit only one copy of the defective gene from a parent to have progressive hearing loss. The investigators localized the deafness-causing mutation to a region on chromosome four called DFNA27, which includes a dozen or so genes. The precise location of the mutation eluded the NIDCD team, however.

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25160 - Posted: 06.29.2018

By David Noonan
Neuroscientist James Hudspeth has basically been living inside the human ear for close to 50 years. In that time Hudspeth, head of the Laboratory of Sensory Neuroscience at The Rockefeller University, has dramatically advanced scientists’ understanding of how the ear and brain work together to process sound. Last week his decades of groundbreaking research were recognized by the Norwegian Academy of Science, which awarded him the million-dollar Kavli Prize in Neuroscience. Hudspeth shared the prize with two other hearing researchers: Robert Fettiplace from the University of Wisconsin–Madison and Christine Petit from the Pasteur Institute in Paris. As Hudspeth explored the neural mechanisms of hearing over the years, he developed a special appreciation for the intricate anatomy of the inner ear—an appreciation that transcends the laboratory. “I think we as scientists tend to underemphasize the aesthetic aspect of science,” he says. “Yes, science is the disinterested investigation into the nature of things. But it is more like art than not. It’s something that one does for the beauty of it, and in the hope of understanding what has heretofore been hidden. Here’s something incredibly beautiful, like the inner ear, performing a really remarkable function. How can that be? How does it do it?” After learning of his Kavli Prize on Thursday, Hudspeth spoke with Scientific American about his work and how the brain transforms physical vibration into the experience of a symphony. © 2018 Scientific American

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25055 - Posted: 06.04.2018

By Abby Olena
Activating or suppressing neuronal activity with ultrasound has shown promise both in the lab and the clinic, based on the ability to focus noninvasive, high-frequency sound waves on specific brain areas. But in mice and guinea pigs, it appears that the technique has effects that scientists didn’t expect. In two studies published today (May 24) in Neuron, researchers demonstrate that ultrasound activates the brains of rodents by stimulating an auditory response—not, as researchers had presumed, only the specific neurons where the ultrasound is focused. “These papers are a very good warning to folks who are trying to use ultrasound as a tool to manipulate brain activity,” says Raag Airan, a neuroradiologist and researcher at Stanford University Medical Center who did not participate in either study, but coauthored an accompanying commentary. “In doing these experiments going forward [the hearing component] is something that every single experimenter is going to have to think about and control,” he adds. Over the past decade, researchers have used ultrasound to elicit electrical responses from cells in culture and motor and sensory responses from the brains of rodents and primates. Clinicians have also used so-called ultrasonic neuromodulation to treat movement disorders. But the mechanism by which high frequency sound waves work to exert their influence is not well understood. © 1986-2018 The Scientist

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25025 - Posted: 05.26.2018

By Maya Salam
Three years ago, the internet melted down over the color of a dress. Now an audio file has friends, family members and office mates questioning one another’s hearing, and their own. Is the robot voice saying “Yanny” or “Laurel”? The clip picked up steam after a debate erupted on Reddit this week, and it has since been circulated widely on social media. One Reddit user said: “I hear Laurel and everyone is a liar.” “They are saying they hear ‘Yanny’ because they want attention,” a tweet read. Others claimed they heard one word for a while, then the other — or even both simultaneously. It didn’t take long for the auditory illusion to be referred to as “black magic.” And more than one person online yearned for that simpler time in 2015, when no one could decide whether the mother of the bride wore white and gold or blue and black. It was a social media frenzy in which internet trends and traffic on the topic spiked so high that Wikipedia itself now has a simple entry, “The dress.” Of course, in the grand tradition of internet reportage, we turned to a scientist to make this article legitimately newsworthy. Dr. Jody Kreiman, a principal investigator at the voice perception laboratory at the University of California, Los Angeles, helpfully guessed on Tuesday afternoon that “the acoustic patterns for the utterance are midway between those for the two words.” “The energy concentrations for Ya are similar to those for La,” she said. “N is similar to r; I is close to l.” She cautioned, though, that more analysis would be required to sort out the discrepancy. That did not stop online sleuths from trying to find the answer by manipulating the bass, pitch or volume. © 2018 The New York Times Company

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Consciousness
Link ID: 24983 - Posted: 05.16.2018

By Roni Dengler
Hoary bats are habitual squawkers. Sporting frosted brown fur à la Guy Fieri, the water balloon–size bats bark high-pitched yips to navigate the dark night sky by echolocation. But a new study reveals that as they fly, those cries often drop to a whisper, or even silence, suggesting the bats may steer themselves through the darkness with some of the quietest sonar on record. To find out how hoary bats navigate, researchers used infrared cameras and ultrasonic microphones to record scores of them flying through a riverside corridor in California on five autumn nights. In about half of the nearly 80 flights, scientists captured a novel type of call. Shorter, faster, and quieter than their usual calls, the new “micro” calls use three orders of magnitude less sound energy than other bats’ yaps did, the researchers report today in the Proceedings of the Royal Society B. As bats approached objects, they would often quickly increase the volume of their calls. But in close to half the flights, researchers did not pick up any calls at all. This stealth flying mode may explain one sad fact of hoary bat life: They suffer more fatal run-ins with wind turbines than other bat species in North America. The microcalls are so quiet that they reduce the distance over which bats can detect large and small objects by more than three times. That also cuts bats’ reaction time by two-thirds, making them too slow to catch their insect prey. © 2018 American Association for the Advancement of Science

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 24928 - Posted: 05.02.2018
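As a rough check on the numbers in this last item: "three orders of magnitude less sound energy" is a 30 dB drop in source level, and under idealized two-way spherical spreading (about 40 dB of extra echo loss per tenfold increase in range, with atmospheric absorption ignored) that drop shortens echolocation detection range by a factor of roughly 5.6. Absorption, which this toy calculation omits, adds further loss at long range and pulls the factor down toward the "more than three times" reduction the article reports. This is a back-of-the-envelope sketch, not the model used in the study.

```python
import math

drop_db = 10 * math.log10(1000)     # three orders of magnitude = 30 dB
spreading_db_per_decade = 40        # idealized two-way spherical spreading loss
range_factor = 10 ** (drop_db / spreading_db_per_decade)
print(f"detection range shrinks ~{range_factor:.1f}x")   # ~5.6x
```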