Links for Keyword: Hearing



Links 61 - 80 of 722

Craig Richard Have you ever stumbled upon an hourlong online video of someone folding napkins? Or maybe crinkling paper, sorting a thimble collection or pretending to give the viewer an ear exam? They’re called ASMR videos and millions of people love them and consider watching them a fantastic way to relax. Other viewers count them among the strangest things on the internet. So are they relaxing or strange? I think they are both, which is why I have been fascinated with trying to understand ASMR for the past five years. In researching my new book “Brain Tingles,” I explored the many mysteries about ASMR as well as best practices for incorporating ASMR into various aspects of life, like parenting, spas and health studios. ASMR is short for Autonomous Sensory Meridian Response. Enthusiast Jennifer Allen coined the term in 2010. You may also hear this phenomenon called “head orgasms” or “brain tingles.” It’s distinct from the “aesthetic chills” or frisson some people experience when listening to music, for instance. People watch ASMR videos in hopes of eliciting the response, usually experienced as a deeply relaxing sensation with pleasurable tingles in the head. It can feel like the best massage in the world – but without anyone touching you. Imagine watching an online video while your brain turns into a puddle of bliss. The actions and sounds in ASMR videos mostly recreate moments in real life that people have discovered spark the feeling. These stimuli are called ASMR triggers. They usually involve receiving personal attention from a caring person. Associated sounds are typically gentle and non-threatening. © 2010–2018, The Conversation US, Inc.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 25498 - Posted: 09.27.2018

By James Gorman It’s not easy to help ducks. Ask Kate McGrew, a master’s student in wildlife ecology at the University of Delaware. Over two seasons, 2016 and 2017, she spent months raising and working with more than two dozen hatchlings from three different species, all to determine what they hear underwater. This was no frivolous inquiry. Sea ducks, like the ones she trained, dive to catch their prey in oceans around the world and are often caught unintentionally in fish nets and killed. Christopher Williams, a professor at the university who is Ms. McGrew’s adviser, said one estimate puts the number of ducks killed at sea at 400,000 a year, although he said the numbers are hard to pin down. A similar problem plagues marine mammals, like whales, and acoustic devices have been developed to send out pings that warn them away from danger. A similar tactic might work with diving ducks, but first, as Dr. Williams said, it would make sense to answer a question that science hasn’t even asked about diving ducks: “What do they hear?” “There actually is little to no research done on duck hearing in general,” Ms. McGrew said, “and on the underwater aspect of it, there’s even less.” That’s the recipe for a perfect, although demanding, research project. Her goal was to use three common species of sea ducks to study a good range of underwater hearing ability. But while you can lead a duck to water and it will paddle around naturally, teaching it to take a hearing test is another matter entirely. © 2018 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25389 - Posted: 08.28.2018

Abby Olena Scientists have been looking for years for the proteins that convert the mechanical movement of inner ears’ hair cells into an electrical signal that the brain interprets as sound. In a study published today (August 22) in Neuron, researchers have confirmed that transmembrane channel-like protein 1 (TMC1) contributes to the pore of the so-called mechanotransduction channel in the cells’ membrane. “The identification of the channel has been missing for a long time,” says Anthony Peng, a neuroscientist at the University of Colorado Denver who did not participate in the study. This work “settles the debate as to whether or not [TMC1] is a pore-lining component of the mechanotransduction channel.” When a sound wave enters the cochlea, it wiggles protrusions called stereocilia on both outer hair cells, which amplify the signals, and inner hair cells, which convert the mechanical signals to electric ones and send them to the brain. It’s been tricky to figure out what protein the inner hair cells use for this conversion, because their delicate environment is difficult to recreate in vitro in order to test candidate channel proteins. In 2000, researchers reported on a promising candidate in flies, but it turned out not to be conserved in mammals. In a study published in 2011, Jeffrey Holt of Harvard Medical School and Boston Children’s Hospital and colleagues showed that genes for TMC proteins were necessary for mechanotransduction in mice. This evidence—combined with earlier work from another group showing that mutations in these genes could cause deafness in humans—pointed to the idea that TMC1 formed the ion channel in inner ear hair cells. © 1986 - 2018 The Scientist

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25372 - Posted: 08.24.2018

By Matthew Hutson For millions who can’t hear, lip reading offers a window into conversations that would be lost without it. But the practice is hard—and the results are often inaccurate (as you can see in these Bad Lip Reading videos). Now, researchers are reporting a new artificial intelligence (AI) program that outperformed professional lip readers and the best AI to date, with just half the error rate of the previous best algorithm. If perfected and integrated into smart devices, the approach could put lip reading in the palm of everyone’s hands. “It’s a fantastic piece of work,” says Helen Bear, a computer scientist at Queen Mary University of London who was not involved with the project. Writing computer code that can read lips is maddeningly difficult. So in the new study scientists turned to a form of AI called machine learning, in which computers learn from data. They fed their system thousands of hours of videos along with transcripts, and had the computer solve the task for itself. The researchers started with 140,000 hours of YouTube videos of people talking in diverse situations. Then, they designed a program that created clips a few seconds long with the mouth movement for each phoneme, or word sound, annotated. The program filtered out non-English speech, nonspeaking faces, low-quality video, and video that wasn’t shot straight ahead. Then, they cropped the videos around the mouth. That yielded nearly 4000 hours of footage, including more than 127,000 English words. © 2018 American Association for the Advancement of Science
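The article does not reproduce the researchers’ pipeline, but the cleaning steps it describes map naturally onto a chain of filters over candidate clips. The Python sketch below is a hypothetical illustration of that logic only; the Clip fields and the 0.5 quality threshold stand in for outputs of the language-identification, audio-visual, and video-quality models a real pipeline would need.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Clip:
    video_path: str
    transcript: str
    language: str         # e.g., from a language-ID model (assumed)
    speaking_face: bool   # e.g., from an audio-visual sync model (assumed)
    quality_score: float  # e.g., blur/resolution metric in [0, 1] (assumed)
    frontal: bool         # head-pose estimate: roughly shot straight ahead?

# Each filter mirrors one exclusion rule described in the article.
FILTERS: List[Callable[[Clip], bool]] = [
    lambda c: c.language == "en",     # drop non-English speech
    lambda c: c.speaking_face,        # drop nonspeaking faces
    lambda c: c.quality_score > 0.5,  # drop low-quality video (illustrative cutoff)
    lambda c: c.frontal,              # drop video not shot straight ahead
]

def clean(clips: List[Clip]) -> List[Clip]:
    """Keep only clips that pass every filter."""
    return [c for c in clips if all(f(c) for f in FILTERS)]
```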

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory and Learning
Link ID: 25280 - Posted: 08.01.2018

by Juliet Corwin On the deafness scale of mild, moderate, severe or profound, I am profoundly deaf. With the help of cochlear implants, I am able to “hear” and speak. The devices are complicated to explain, but basically, external sound processors, worn behind the ears, send a digital signal to the implants, which convert the signal to electric impulses that stimulate the hearing nerve and provide sound signals to the brain. The implants allow me to attend my middle school classes with few accommodations, but I’m still quite different from people who hear naturally. When my implant processors are turned off, I don’t hear anything. I regard myself as a deaf person, and I am proud to be among those who live with deafness, yet I often feel rejected by some of these same people. My use of cochlear implants and lack of reliance on American Sign Language (I use it but am not fluent — I primarily speak) are treated like a betrayal by many in the Deaf — capital-D — community. In the view of many who embrace Deaf culture, a movement that began in the 1970s, those who are integrated into the hearing world through technology, such as hearing aids or cochlear implants, myself included, are regarded as “not Deaf enough” to be a part of the community. People deaf from birth or through illness or injury already face discrimination. I wish we didn’t practice exclusion among ourselves. But it happens, and it’s destructive. © 1996-2018 The Washington Post

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25247 - Posted: 07.25.2018

Alison Abbott On a sun-parched patch of land in Rehovot, Israel, two neuroscientists peer into the darkness of a 200-metre-long tunnel of their own design. The fabric panels of the snaking structure shimmer in the heat, while, inside, a study subject is navigating its dim length. Finally, out of the blackness bursts a bat, which executes a mid-air backflip to land upside down, hanging at the tunnel’s entrance. The vast majority of experiments probing navigation in the brain have been done in the confines of labs, using earthbound rats and mice. Nachum Ulanovsky broke with that convention. He constructed the flight tunnel on a disused plot on the grounds of the Weizmann Institute of Science — the first of several planned arenas — because he wanted to find out how a mammalian brain navigates a more natural environment. In particular, he wanted to know how brains deal with a third dimension. The tunnel, which Ulanovsky built in 2016, has already proved its scientific value. So have the bats. They have helped Ulanovsky to discover new aspects of the complex encoding of navigation — a fundamental brain function essential for survival. He has found a new cell type responsible for the bats’ 3D compass, and other cells that keep track of where other bats are in the environment. It is a hot area of study — navigation researchers won the 2014 Nobel Prize in Physiology or Medicine and the field is an increasingly prominent fixture at every big neuroscience conference. “Nachum’s boldness is impressive,” says Edvard Moser of the Kavli Institute for Systems Neuroscience in Trondheim, Norway, one of the 2014 Nobel laureates. “And it’s paid off — his approach is allowing important new questions to be addressed.” © 2018 Springer Nature Limited.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25198 - Posted: 07.12.2018

By Elizabeth Pennisi Bats and their prey are in a constant arms race. Whereas the winged mammals home in on insects with frighteningly accurate sonar, some of their prey—such as the tiger moth—fight back with sonar clicks and even jamming signals. Now, in a series of bat-moth skirmishes, scientists have shown how other moths create an “acoustic illusion,” with long wing-tails that fool bats into striking the wrong place. The finding helps explain why some moths have such showy tails, and it may also provide inspiration for drones of the future. Moth tails vary from species to species: Some have big lobes at the bottom of the hindwing instead of a distinctive tail; others have just a short protrusion. Still others have long tails that are thin strands with twisted cuplike ends. In 2015, sensory ecologist Jesse Barber of Boise State University in Idaho and colleagues discovered that some silk moths use their tails to confuse bat predators. Now, graduate student Juliette Rubin has shown just what makes the tails such effective deterrents. Working with three species of silk moths—luna, African moon, and polyphemus—Rubin shortened or cut off some of their hindwings and glued longer or differently shaped tails to others. She then tied the moths to a string hanging from the top of a large cage and released a big brown bat (Eptesicus fuscus) inside. She used high-speed cameras and microphones to record the ensuing fight. © 2018 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25173 - Posted: 07.05.2018

A small-molecule drug is one of the first to preserve hearing in a mouse model of an inherited form of progressive human deafness, report investigators at the University of Iowa, Iowa City, and the National Institutes of Health’s National Institute on Deafness and Other Communication Disorders (NIDCD). The study, which appears online in Cell, sheds light on the molecular mechanism that underlies a form of deafness (DFNA27), and suggests a new treatment strategy. “We were able to partially restore hearing, especially at lower frequencies, and save some sensory hair cells,” said Thomas B. Friedman, Ph.D., chief of the Laboratory of Human Molecular Genetics at the NIDCD, and a coauthor of the study. “If additional studies show that small-molecule-based drugs are effective in treating DFNA27 deafness in people, it’s possible that using similar approaches might work for other inherited forms of progressive hearing loss.” The seed for the advance was planted a decade ago, when NIDCD researchers led by Friedman and Robert J. Morell, Ph.D., another coauthor of the current study, analyzed the genomes of members of an extended family, dubbed LMG2. Deafness is genetically dominant in the LMG2 family, meaning that a child needs to inherit only one copy of the defective gene from a parent to have progressive hearing loss. The investigators localized the deafness-causing mutation to a region on chromosome four called DFNA27, which includes a dozen or so genes. The precise location of the mutation eluded the NIDCD team, however.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25160 - Posted: 06.29.2018

By David Noonan Neuroscientist James Hudspeth has basically been living inside the human ear for close to 50 years. In that time Hudspeth, head of the Laboratory of Sensory Neuroscience at The Rockefeller University, has dramatically advanced scientists’ understanding of how the ear and brain work together to process sound. Last week his decades of groundbreaking research were recognized by the Norwegian Academy of Science, which awarded him the million-dollar Kavli Prize in Neuroscience. Hudspeth shared the prize with two other hearing researchers: Robert Fettiplace from the University of Wisconsin–Madison and Christine Petit from the Pasteur Institute in Paris. As Hudspeth explored the neural mechanisms of hearing over the years, he developed a special appreciation for the intricate anatomy of the inner ear—an appreciation that transcends the laboratory. “I think we as scientists tend to underemphasize the aesthetic aspect of science,” he says. “Yes, science is the disinterested investigation into the nature of things. But it is more like art than not. It’s something that one does for the beauty of it, and in the hope of understanding what has heretofore been hidden. Here’s something incredibly beautiful, like the inner ear, performing a really remarkable function. How can that be? How does it do it?” After learning of his Kavli Prize on Thursday, Hudspeth spoke with Scientific American about his work and how the brain transforms physical vibration into the experience of a symphony. © 2018 Scientific American

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25055 - Posted: 06.04.2018

By Abby Olena Activating or suppressing neuronal activity with ultrasound has shown promise both in the lab and the clinic, based on the ability to focus noninvasive, high-frequency sound waves on specific brain areas. But in mice and guinea pigs, it appears that the technique has effects that scientists didn’t expect. In two studies published today (May 24) in Neuron, researchers demonstrate that ultrasound activates the brains of rodents by stimulating an auditory response—not, as researchers had presumed, only the specific neurons where the ultrasound is focused. “These papers are a very good warning to folks who are trying to use ultrasound as a tool to manipulate brain activity,” says Raag Airan, a neuroradiologist and researcher at Stanford University Medical Center who did not participate in either study, but coauthored an accompanying commentary. “In doing these experiments going forward [the hearing component] is something that every single experimenter is going to have to think about and control,” he adds. Over the past decade, researchers have used ultrasound to elicit electrical responses from cells in culture and motor and sensory responses from the brains of rodents and primates. Clinicians have also used so-called ultrasonic neuromodulation to treat movement disorders. But the mechanism by which high frequency sound waves work to exert their influence is not well understood. © 1986-2018 The Scientist

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25025 - Posted: 05.26.2018

By Maya Salam Three years ago, the internet melted down over the color of a dress. Now an audio file has friends, family members and office mates questioning one another’s hearing, and their own. Is the robot voice saying “Yanny” or “Laurel”? The clip picked up steam after a debate erupted on Reddit this week, and it has since been circulated widely on social media. One Reddit user said: “I hear Laurel and everyone is a liar.” “They are saying they hear ‘Yanny’ because they want attention,” a tweet read. Others claimed they heard one word for a while, then the other — or even both simultaneously. It didn’t take long for the auditory illusion to be referred to as “black magic.” And more than one person online yearned for that simpler time in 2015, when no one could decide whether the mother of the bride wore white and gold or blue and black. It was a social media frenzy in which internet trends and traffic on the topic spiked so high that Wikipedia itself now has a simple entry, “The dress.” Of course, in the grand tradition of internet reportage, we turned to a scientist to make this article legitimately newsworthy. Dr. Jody Kreiman, a principal investigator at the voice perception laboratory at the University of California, Los Angeles, helpfully guessed on Tuesday afternoon that “the acoustic patterns for the utterance are midway between those for the two words.” “The energy concentrations for Ya are similar to those for La,” she said. “N is similar to r; I is close to l.” She cautioned, though, that more analysis would be required to sort out the discrepancy. That did not stop online sleuths from trying to find the answer by manipulating the bass, pitch or volume. © 2018 The New York Times Company
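Reproducing what those sleuths did takes only a few lines. Here is a minimal Python sketch, assuming the clip is saved locally as yanny_laurel.wav (a hypothetical filename); it uses the librosa library to write pitch-shifted copies you can compare by ear. The mapping of shifts to percepts in the comment reflects listeners’ informal reports, not a finding of this article.

```python
import librosa
import soundfile as sf

# Load the ambiguous clip (filename is hypothetical).
y, sr = librosa.load("yanny_laurel.wav", sr=None)

# Informally, listeners reported that shifting the recording toward higher
# frequencies favors "Yanny," while shifting lower favors "Laurel," because
# the two words ride on different frequency bands of the same signal.
for steps in (-4, -2, 0, 2, 4):
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=steps)
    sf.write(f"yanny_laurel_{steps:+d}_semitones.wav", shifted, sr)
```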

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Higher Cognition
Link ID: 24983 - Posted: 05.16.2018

By Roni Dengler Hoary bats are habitual squawkers. Sporting frosted brown fur à la Guy Fieri, the water balloon–size bats bark high-pitched yips to navigate the dark night sky by echolocation. But a new study reveals that as they fly, those cries often drop to a whisper, or even silence, suggesting the bats may steer themselves through the darkness with some of the quietest sonar on record. To find out how hoary bats navigate, researchers used infrared cameras and ultrasonic microphones to record scores of them flying through a riverside corridor in California on five autumn nights. In about half of the nearly 80 flights, scientists captured a novel type of call. Shorter, faster, and quieter than their usual calls, the new “micro” calls use three orders of magnitude less sound energy than other bats’ yaps did, the researchers report today in the Proceedings of the Royal Society B. As bats approached objects, they would often quickly increase the volume of their calls. But in close to half the flights, researchers did not pick up any calls at all. This stealth flying mode may explain one sad fact of hoary bat life: They suffer more fatal run-ins with wind turbines than other bat species in North America. The microcalls are so quiet that they reduce the distance over which bats can detect large and small objects by more than three times. That also cuts bats’ reaction time by two-thirds, making them too slow to catch their insect prey. © 2018 American Association for the Advancement of Science
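A quick unit check on that figure: sound level is logarithmic, so “three orders of magnitude less sound energy” corresponds to roughly a 30-decibel reduction. The arithmetic, for readers who want it:

```python
import math

# Level difference in decibels for a given energy ratio:
# dB = 10 * log10(ratio); a thousandfold drop is -30 dB.
print(10 * math.log10(1e-3))  # -30.0
```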

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 24928 - Posted: 05.02.2018

By Abby Olena [Figure caption: At both three and nine weeks after guinea pigs’ cochleae were treated with nanoparticles loaded with Hes1 siRNA, the authors observed what are likely immature hair cells. Modified from X. Du et al., Molecular Therapy, 2018.] Loud sounds, infections, toxins, and aging can all cause hearing loss by damaging so-called hair cells in the cochlea of the inner ear. In a study published today (April 18) in Molecular Therapy, researchers stimulated hair cell renewal with small interfering RNAs (siRNAs) delivered via nanoparticles to the cochlea of adult guinea pigs, restoring some of the animals’ hearing. “There are millions of people suffering from deafness” caused by hair cell loss, says Zheng-Yi Chen, who studies hair cell regeneration at Harvard University and was not involved in the work. “If you can regenerate hair cells, then we really have potential to target treatment for those patients.” Some vertebrates—chickens and zebrafish, for instance—regenerate their hair cells after damage. Hair cells of mammals, on the other hand, don’t sprout anew after being damaged, explaining why injuries can cause life-long hearing impairments. Recent research suggests that there might be a workaround, by manipulating signaling pathways that can lead to hair cell differentiation. That’s where Richard Kopke comes in. © 1986-2018 The Scientist

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 24908 - Posted: 04.27.2018

By Jeremy Rehm Whales can sing, buzz, and even whisper to one another, but one thing has remained unknown about these gregarious giants: how they hear. Given the size of some whales and their ocean home, studying even the basics of these mammals has proved challenging. But two researchers have now developed a way to determine how baleen whales such as humpbacks hear their low-frequency (10- to 200-hertz) chatter, and they found some bone-rattling results. Baleen whales have a maze of ear bones that fuse to their skull, leading scientists to suppose the skull helps whales hear. Under this premise, the researchers used a computerized tomography scanner meant for rockets to scan the preserved bodies of a minke whale calf (Balaenoptera acutorostrata) and a fin whale calf (B. physalus), both of which had stranded themselves along U.S. coasts years before and died during rescue operations. Their preserved bodies were kept as scientific specimens. The researchers used these body scans to produce 3D computer models to study how the skull responded to different sound frequencies. The skull acts like an antenna, the scientists reported today in San Diego, California, at the 2018 Experimental Biology conference, vibrating as sound waves impact it and then transmitting those vibrations to the whale’s ears. For ease of viewing, the scientists amplified the vibrations 20,000 times. Whale skulls were especially sensitive to the low-frequency sounds they speak with, the researchers found, but large shipping vessels also produce these frequencies. This new information could now help large-scale shipping industries and policymakers establish regulations to minimize the effects of humanmade noise on these ocean giants. © 2018 American Association for the Advancement of Science

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 24891 - Posted: 04.24.2018

Rachel Ehrenberg BOSTON — Getting your groove on solo with headphones on might be your jam, but it can’t compare with a live concert. Just ask your brain. When people watch live music together, their brain waves synchronize, and this brain bonding is linked with having a better time. The new findings, reported March 27 at a Cognitive Neuroscience Society meeting, are a reminder that humans are social creatures. In western cultures, performing music is generally reserved for the tunefully talented, but this hasn’t been true through much of human history. “Music is typically linked with ritual and in most cultures is associated with dance,” said neuroscientist Jessica Grahn of Western University in London, Canada. “It’s a way to have social participation.” Study participants were split into groups of 20 and experienced music in one of three ways. Some watched a live concert with a large audience, some watched a recording of the concert with a large audience, and some watched the recording with only a few other people. Each person wore EEG caps, headwear covered with electrodes that measure the collective behavior of the brain’s nerve cells. The musicians played an original song they wrote for the study. The delta brain waves of audience members who watched the music live were more synchronized than those of people in the other two groups. Delta brain waves fall in a frequency range that roughly corresponds to the beat of the music, suggesting that beat drives the synchronicity, neuroscientist Molly Henry, a member of Grahn’s lab, reported. The more synchronized a particular audience member was with others, the more he or she reported feeling connected to the performers and enjoying the show. © Society for Science & the Public 2000 - 2018
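The summary doesn’t specify how synchronization was quantified; a standard choice for band-limited EEG comparisons is the phase-locking value. The Python sketch below computes a delta-band (1-4 Hz) phase-locking value between two signals and is a generic illustration of the idea, not the study’s actual analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_plv(eeg_a: np.ndarray, eeg_b: np.ndarray, fs: float) -> float:
    """Phase-locking value between two EEG channels in the delta band (1-4 Hz).
    Returns 1.0 for perfectly phase-locked signals, near 0.0 for none."""
    b, a = butter(4, [1.0, 4.0], btype="bandpass", fs=fs)
    phase_a = np.angle(hilbert(filtfilt(b, a, eeg_a)))
    phase_b = np.angle(hilbert(filtfilt(b, a, eeg_b)))
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

# Toy usage with synthetic signals sharing a 2 Hz rhythm plus noise:
fs = 250.0
t = np.arange(0, 10, 1 / fs)
sig_a = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.random.randn(t.size)
sig_b = np.sin(2 * np.pi * 2.0 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(delta_plv(sig_a, sig_b, fs))  # close to 1 for the shared rhythm
```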

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 24800 - Posted: 03.30.2018

By VERONIQUE GREENWOOD Ears are a peculiarly individual piece of anatomy. Those little fleshy seashells, whether they stick out or hang low, can be instantly recognizable in family portraits. And they aren’t just for show. Researchers have discovered that filling in an external part of the ear with a small piece of silicone drastically changes people’s ability to tell whether a sound came from above or below. But given time, the scientists show in a paper published Monday in the Journal of Neuroscience, the brain adjusts to the new shape, regaining the ability to pinpoint sounds with almost the same accuracy as before. Scientists already knew that our ability to tell where a sound is coming from arises in part from sound waves arriving at our ears at slightly different times. If a missing cellphone rings from the couch cushions to your right, the sound reaches your right ear first and your left ear slightly later. Then, your brain tells you where to look. But working out whether a sound is emanating from high up on a bookshelf or under the coffee table is not dependent on when the sound reaches your ears. Instead, said Régis Trapeau, a neuroscientist at the University of Montreal and author of the new paper, the determination involves the way the sound waves bounce off outer parts of your ear. Curious to see how the brain processed this information, the researchers set up a series of experiments using a dome of speakers, ear molds made of silicone and an fMRI machine to record brain activity. Before being fitted with the pieces of silicone, volunteers heard a number of sounds played around them and indicated where they thought the noises were coming from. In the next session, the same participants listened to the same sounds with the ear molds in. This time it was clear that something was very different. © 2018 The New York Times Company
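The timing cue described here is tiny but easy to compute. A common textbook approximation, Woodworth’s spherical-head model, estimates the interaural time difference as (r/c)(theta + sin theta) for head radius r, speed of sound c, and source azimuth theta; the sketch below uses it to show the sub-millisecond differences the brain works with. This is standard psychoacoustics background, not something taken from the paper.

```python
import numpy as np

def itd_seconds(azimuth_deg: float, head_radius_m: float = 0.0875,
                speed_of_sound: float = 343.0) -> float:
    """Woodworth's spherical-head approximation of the interaural time
    difference for a source at the given azimuth (0 = straight ahead,
    90 = directly to one side)."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {itd_seconds(az) * 1e6:6.0f} microseconds")
# The maximum, around 650 microseconds at 90 degrees, matches the
# commonly cited upper bound for human interaural time differences.
```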

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 24727 - Posted: 03.07.2018

By DOUGLAS QUENQUA Claudio Mello was conducting research in Brazil’s Atlantic Forest about 20 years ago when he heard a curious sound. It was high-pitched and reedy, like a pin scratching metal. A cricket? A tree frog? No, a hummingbird. At least that’s what Dr. Mello, a behavioral neuroscientist at Oregon Health and Science University, concluded at the time. Despite extensive deforestation, the Atlantic Forest is one of Earth’s great cradles of biological diversity. It is home to about 2,200 species of animals, including about 40 species of hummingbirds. The variety of hummingbirds makes it difficult to isolate specific noises without sophisticated listening or recording devices. In 2015, Dr. Mello returned to the forest with microphones used to record high-frequency bat noises. The recordings he made confirmed that the calls were coming from black jacobin hummingbirds. The species is found in other parts of South America, too, and researchers are unsure whether the sound is emitted by males, females or both, although they have confirmed that juvenile black jacobins do not make them. When Dr. Mello and his team analyzed the noise — a triplet of syllables produced in rapid succession — they discovered it was well above the normal hearing range of birds. Peak hearing sensitivity for most birds is believed to rest between two and three kilohertz. (Humans are most sensitive to noises between one and four kilohertz.) “No one has ever described that a bird can hear even above 8, 9 kilohertz,” said Dr. Mello. But “the fundamental frequency of those calls was above 10 kilohertz,” he said. “That’s what was really amazing.” © 2018 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Language and Lateralization
Link ID: 24725 - Posted: 03.06.2018

By CHRISTOPHER MELE A persistent noise of unknown origin, sometimes compared to a truck idling or distant thunder, has bedeviled a Canadian city for years, damaging people’s health and quality of life, numerous residents say. Those who hear it have compared it to a fleet of diesel engines idling next to your home or the pulsation of a subwoofer at a concert. Others report it rattling their windows and spooking their pets. Known as the Windsor Hum, this sound in Windsor, Ontario, near Detroit, is unpredictable in its duration, timing and intensity, making it all the more maddening for those affected. “You know how you hear of people who have gone out to secluded places to get away from certain sounds or noises and the like?” Sabrina Wiese posted in a private Facebook group dedicated to finding the source of the noise. “I’ve wanted to do that many times in the past year or so because it has gotten so bad,” she wrote. “Imagine having to flee all you know and love just to have a chance to hear nothing humming in your head for hours on end.” Since reports of it surfaced in 2011, the hum has been studied by the Canadian government, the University of Western Ontario and the University of Windsor. Activists have done their own sleuthing. Over six years, Mike Provost of Windsor, who helps run the Facebook page, has amassed more than 4,000 pages of daily observations about the duration, intensity and characteristics of the sound and the weather conditions at the time. © 2018 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Language and Lateralization
Link ID: 24681 - Posted: 02.19.2018

Dan Garisto If you’ve ever felt the urge to tap along to music, this research may strike a chord. Recognizing rhythms doesn’t involve just parts of the brain that process sound — it also relies on a brain region involved with movement, researchers report online January 18 in the Journal of Cognitive Neuroscience. When an area of the brain that plans movement was disabled temporarily, people struggled to detect changes in rhythms. The study is the first to connect humans’ ability to detect rhythms to the posterior parietal cortex, a brain region associated with planning body movements as well as higher-level functions such as paying attention and perceiving three dimensions. “When you’re listening to a rhythm, you’re making predictions about how long the time interval is between the beats and where those sounds will fall,” says coauthor Jessica Ross, a neuroscience graduate student at the University of California, Merced. These predictions are part of a system scientists call relative timing, which helps the brain process repetitive sounds, like a musical rhythm. “Music is basically sounds that have a structure in time,” says Sundeep Teki, a neuroscientist at the University of Oxford who was not involved with the study. Studies like this, which investigate where relative timing takes place in the brain, could be crucial to understanding how the brain deciphers music, he says. © Society for Science & the Public 2000 - 2018.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 5: The Sensorimotor System
Link ID: 24675 - Posted: 02.17.2018

Dana Boebinger Roughly 15 percent of Americans report some sort of hearing difficulty; trouble understanding conversations in noisy environments is one of the most common complaints. Unfortunately, there’s not much doctors or audiologists can do. Hearing aids can amplify things for ears that can’t quite pick up certain sounds, but they don’t distinguish between the voice of a friend at a party and the music in the background. The problem is not only one of technology, but also of brain wiring. Most hearing aid users say that even with their hearing aids, they still have difficulty communicating in noisy environments. As a neuroscientist who studies speech perception, I see this issue throughout my own research, as well as in that of many others. The reason isn’t that they can’t hear the sounds; it’s that their brains can’t pick out the conversation from the background chatter. Harvard neuroscientists Dan Polley and Jonathon Whitton may have found a solution, by harnessing the brain’s incredible ability to learn and change itself. They have discovered that it may be possible for the brain to relearn how to distinguish between speech and noise. And the key to learning that skill could be a video game. People with hearing aids often report being frustrated with how their hearing aids handle noisy situations; it’s a key reason many people with hearing loss don’t wear hearing aids, even if they own them. People with untreated hearing loss – including those who don’t wear their hearing aids – are at increased risk of social isolation, depression and even dementia. © 2010–2018, The Conversation US, Inc.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory and Learning
Link ID: 24618 - Posted: 02.06.2018