Links for Keyword: Hearing




By Lenny Bernstein Forty million American adults have lost some hearing because of noise, and half of them suffered the damage outside the workplace, from everyday exposure to leaf blowers, sirens, rock concerts and other loud sounds, the Centers for Disease Control and Prevention reported Tuesday. A quarter of people ages 20 to 69 were suffering some hearing deficits, the CDC reported in its Morbidity and Mortality Weekly Report, even though the vast majority of the people in the study claimed to have good or excellent hearing. The researchers found that 24 percent of adults had “audiometric notches” — dips in hearing sensitivity at the high frequencies most vulnerable to noise damage — in one or both ears. The data came from 3,583 people who had undergone hearing tests and reported the results in the 2011-2012 National Health and Nutrition Examination Survey (NHANES). The review's more surprising finding — which the CDC had not previously studied — was that 53 percent of those people said they had no regular exposure to loud noise at work. That means the hearing loss was caused by other environmental factors, including listening to music through headphones with the volume turned up too high. “Noise is damaging hearing before anyone notices or diagnoses it,” said Anne Schuchat, the CDC's acting director. “Because of that, the start of hearing loss is underrecognized.” The study revealed that 19 percent of people between the ages of 20 and 29 had some hearing loss, a finding that Schuchat called alarming. © 1996-2017 The Washington Post

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23197 - Posted: 02.08.2017

By JANE E. BRODY Dizziness is not a disease but rather a symptom that can result from a huge variety of underlying disorders or, in some cases, no disorder at all. Readily determining its cause and how best to treat it — or whether to let it resolve on its own — can depend on how well patients are able to describe exactly how they feel during a dizziness episode and the circumstances under which it usually occurs. For example, I recently experienced a rather frightening attack of dizziness, accompanied by nausea, at a food and beverage tasting event where I ate much more than I usually do. Suddenly feeling that I might faint at any moment, I lay down on a concrete balcony for about 10 minutes until the disconcerting sensations passed, after which I felt completely normal. The next morning I checked the internet for my symptom — dizziness after eating — and discovered the condition had a name: Postprandial hypotension, a sudden drop in blood pressure when too much blood is diverted to the digestive tract, leaving the brain relatively deprived. The condition most often affects older adults who may have an associated disorder like diabetes, hypertension or Parkinson’s disease that impedes the body’s ability to maintain a normal blood pressure. Fortunately, I am thus far spared any disorder linked to this symptom, but I’m now careful to avoid overeating lest it happen again. “An essential problem is that almost every disease can cause dizziness,” say two medical experts who wrote a comprehensive new book, “Dizziness: Why You Feel Dizzy and What Will Help You Feel Better.” Although the vast majority of patients seen at dizziness clinics do not have a serious health problem, the authors, Dr. Gregory T. Whitman and Dr. Robert W. Baloh, emphasize that doctors must always “be on the alert for a serious disease presenting as ‘dizziness,’” like “stroke, transient ischemic attacks, multiple sclerosis and brain tumors.” © 2017 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23196 - Posted: 02.07.2017

By James Gallagher Health and science reporter, BBC News website Deaf mice have been able to hear a tiny whisper after being given a "landmark" gene therapy by US scientists. They say restoring near-normal hearing in the animals paves the way for similar treatments for people "in the near future". Studies, published in Nature Biotechnology, corrected errors that led to the sound-sensing hairs in the ear becoming defective. The researchers used a synthetic virus to nip in and correct the defect. "It's unprecedented, this is the first time we've seen this level of hearing restoration," said researcher Dr Jeffrey Holt, from Boston Children's Hospital. About half of all forms of deafness are due to an error in the instructions for life - DNA. In the experiments at Boston Children's Hospital and Harvard Medical School, the mice had a genetic disorder called Usher syndrome. It means there are inaccurate instructions for building microscopic hairs inside the ear. In healthy ears, sets of outer hair cells magnify sound waves and inner hair cells then convert sounds to electrical signals that go to the brain. The hairs normally form these neat V-shaped rows. But in Usher syndrome they become disorganised - severely affecting hearing. The researchers developed a synthetic virus that was able to "infect" the ear with the correct instructions for building hair cells. © 2017 BBC.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23195 - Posted: 02.07.2017

By Tiffany O'Callaghan Imagine feeling angry or upset whenever you hear a certain everyday sound. It’s a condition called misophonia, and we know little about its causes. Now there’s evidence that misophonics show distinctive brain activity whenever they hear their trigger sounds, a finding that could help devise coping strategies and treatments. Olana Tansley-Hancock knows misophonia’s symptoms only too well. From the age of about 7 or 8, she experienced feelings of rage and discomfort whenever she heard the sound of other people eating. By adolescence, she was eating many of her meals alone. As time wore on, many more sounds would trigger her misophonia. Rustling papers and tapping toes on train journeys constantly forced her to change seats and carriages. Clacking keyboards in the office meant she was always making excuses to leave the room. Finally, she went to a doctor for help. “I got laughed at,” she says. “People who suffer from misophonia often have to make adjustments to their lives, just to function,” says Miren Edelstein at the University of California, San Diego. “Misophonia seems so odd that it’s difficult to appreciate how disabling it can be,” says her colleague, V. S. Ramachandran. The condition was first given the name misophonia in 2000, but until 2013, there had only been two case studies published. More recently, clear evidence has emerged that misophonia isn’t a symptom of other conditions, such as obsessive compulsive disorder, nor is it a matter of being oversensitive to other people’s bad manners. Some studies, including work by Ramachandran and Edelstein, have found that trigger sounds spur a full fight-or-flight response in people with misophonia. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Consciousness
Link ID: 23181 - Posted: 02.03.2017

By Marcy Cuttler, CBC News Imagine waking up suddenly deaf in one ear. Musician and composer Richard Einhorn has lived through it. In June 2010, the 64-year-old New Yorker awoke to his ears ringing. "The first thing you think of, of course, is a brain tumour or a stroke," he said. At the time, he was in upstate Massachusetts, far from help. So he called a cab and went to the closest hospital. Doctors eventually told him it was sudden sensorineural hearing loss (SSHL) — a little-known and not well understood condition that affects one in 5,000 people every year, according to the U.S. National Institutes of Health. What doctors do know: that most people diagnosed with it are between the ages of 40 and 60; that men and women are equally afflicted; and that it usually affects only one ear. Einhorn, who couldn't hear well in his other ear due to a pre-existing condition, was left completely deaf. "It was incredibly difficult to communicate with anybody ... we were doing it with notes," he said. "I wouldn't recommend it on my worst enemy. It was really, really terrible." Dr. James Bonaparte says if you wake up with ringing in your ears that continues throughout the day, or if you notice a drop in hearing on one side — and you don't have a cold at the time — get checked. (CBC) ©2017 CBC/Radio-Canada

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23153 - Posted: 01.27.2017

By JONAH ENGEL BROMWICH I have a big, dumb, deep, goofy voice. But I’m reminded of it only when I hear a recording of myself while playing back an interview — or when friends do impressions of me, lowering their voices several octaves. My high school classmate Walter Suskind has one of the deepest voices I’ve ever heard in person. His experience has been similar to mine. “My voice sounds pretty normal in my head,” he said. “It’s when I catch the echo on the back of the phone or when I hear myself when it’s been taped that I realize how deep it is. Also, when people come up to me and, to imitate my voice, go as deep as they possibly can and growl in my face.” He added, “I’ve been told that the one advantage to voices like ours is we make really good hostage negotiators.” (Here’s Walter on an episode of “Radiolab.” His segment starts at about 12:20, and the host immediately comments on his voice.) Many people have heard their recorded voices and reeled in disgust (“Do I really sound like that?”). Others are surprised by how high their voices sound. The indie musician Mitski Miyawaki, who has earned praise for exceptional control over her singing voice, said that she, too, is often unpleasantly surprised by her speaking voice, which she perceives as “lower, more commanding” than it sounds to others. “And then I listen to a radio interview and I’m like ‘uuuch,’ ” she said, making a disgusted noise. “I listen to my voice and I go, ‘Oh it sounds exactly like a young girl.’ ” There’s an easy explanation for experiences like Ms. Miyawaki’s, said William Hartmann, a physics professor at Michigan State University who specializes in acoustics and psychoacoustics. There are two pathways through which we perceive our own voice when we speak, he explained. One is the route through which we perceive most other sounds. Waves travel from the air through the chain of our hearing systems, traversing the outer, middle and inner ear. © 2017 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23107 - Posted: 01.16.2017

Susan Milius NEW ORLEANS — The self-cleaning marvel known as earwax may turn the dust particles it traps into agents of their own disposal. Earwax, secreted in the ear canal, protects ears from building up dunes of debris from particles wafting through the air. The wax creates a sticky particle-trapper inside the canal, explained Zac Zachow January 6 at the annual meeting of the Society for Integrative and Comparative Biology. The goo coats hairs and haphazardly pastes them into a loose net. Then, by a process not yet fully understood, bits of particle-dirtied wax leave the ear, taking their burden of debris with them. Earwax may accomplish such a feat because trapping more and more dust turns it from gooey to crumbly, Zachow said. Working with Alexis Noel in David Hu’s lab at Georgia Tech in Atlanta, he filmed a rough demonstration of this idea: Mixing flour into a gob of pig’s earwax eventually turned the lump from stickier to drier, with crumbs fraying away at the edges. Jaw motions might help shake loose these crumbs, Zachow said. A video inside the ear of someone eating a doughnut showed earwax bucking and shifting. This dust-to-crumb scenario needs more testing, but Noel points out that earwax might someday inspire new ways of reducing dust buildup in machinery such as home air-filtration systems. Z. Zachow, A. Noel and D.L. Hu. Earwax has properties like paint, enabling self-cleaning. Annual meeting of the Society for Integrative and Comparative Biology, New Orleans, January 6, 2017. © Society for Science & the Public 2000 - 2017

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23100 - Posted: 01.14.2017

By Joshua Rapp Learn The Vietnamese pygmy dormouse is as blind as a bat—and it navigates just like one, too. Scientists have found that the small, nimble brown rodent (Typhlomys cinereus chapensis), native to Vietnam and parts of China, uses sound waves to get a grip on its environment. Measurements of dormice at the Moscow Zoo revealed that the species can't see objects because of a folded retina and a low number of neurons capable of collecting visual information, among other things. When researchers recorded the animals, they discovered they make ultrasonic noises similar to those used by some bat species, and videos showed they made the sounds at a much greater pulse rate when moving than while resting. These sound waves bounce off objects, allowing the rodent to sense its surroundings—an ability known as echolocation, or biological sonar. The find makes the dormouse the only tree-climbing mammal known to use ultrasonic echolocation, the team reports in Integrative Zoology. The authors suggest that an extinct ancestor of these dormice was likely a leaf bed–dwelling animal that lost the ability to see in the darkness in which it is active. As the animals began to move up into the trees over time, they likely developed the ultrasonic echolocation abilities to help them deal with a new acrobatic lifestyle. The discovery lends support to the idea that bats may have evolved echolocation before the ability to fly. © 2017 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23080 - Posted: 01.11.2017

By Sara Reardon The din of what sounds like a high-pitched cocktail party fills the lab of neuroscientist Xiaoqin Wang at Johns Hopkins University in Baltimore. But the primates making the racket are dozens of marmosets, squirrel-sized monkeys with patterned coats and white puffs of fur on either side of their heads. The animals chatter to each other, stopping to tilt their heads and consider their visitors with inquisitive expressions. Common marmosets (Callithrix jacchus) are social and communicative in captivity, unlike the macaque that is more commonly used as a model primate. And in January, Wang and his colleagues revealed that marmosets are also the only non-human animals known to hear different pitches, such as those found in music and tonal languages like Chinese, in the same way people can1. This makes the marmoset the closest proxy researchers have to the human brain when it comes to hearing and speech, says Qian-Jie Fu, an auditory researcher at the University of California, Los Angeles, who was not involved with the paper. Until recently, researchers have relied on songbirds for such work, but the birds’ brains are so different from human ones that the insights they provide are limited. Wang hopes that marmosets will improve researchers’ understanding of the evolution of communication and help them refine devices such as cochlear implants for deaf people. © 2016 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23000 - Posted: 12.20.2016

By Catherine Matacic Have you ever wondered why a strange piece of music can feel familiar—how it is, for example, that you can predict the next beat even though you’ve never heard the song before? Music everywhere seems to share some “universals,” from the scales it uses to the rhythms it employs. Now, scientists have shown for the first time that people without any musical training also create songs using predictable musical beats, suggesting that humans are hardwired to respond to—and produce—certain features of music. “This is an excellent and elegant paper,” says Patrick Savage, an ethnomusicologist at the Tokyo University of the Arts who was not involved in the study. “[It] shows that even musical evolution obeys some general rules [similar] to the kind that govern biological evolution.” Last year, Savage and colleagues traced that evolution by addressing a fundamental question: What aspects of music are consistent across cultures? They analyzed hundreds of musical recordings from around the world and identified 18 features that were widely shared across nine regions, including six related to rhythm. These “rhythmic universals” included a steady beat, two- or three-beat rhythms (like those in marches and waltzes), a preference for two-beat rhythms, regular weak and strong beats, a limited number of beat patterns per song, and the use of those patterns to create motifs, or riffs. “That was a really remarkable job they did,” says Andrea Ravignani, a cognitive scientist at the Vrije Universiteit Brussel in Belgium. “[It convinced me that] the time was ripe to investigate this issue of music evolution and music universals in a more empirical way.” © 2016 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22997 - Posted: 12.20.2016

By GINA KOLATA As concern rises over the effect of continuous use of headphones and earbuds on hearing, a new paper by federal researchers has found something unexpected. The prevalence of hearing loss in Americans of working age has declined. The paper, published on Thursday in the journal JAMA Otolaryngology — Head & Neck Surgery, used data from the National Health and Nutrition Examination Survey, which periodically administers health tests to a representative sample of the population. The investigators, led by Howard J. Hoffman, the director of the epidemiology and statistics program at the National Institute on Deafness and Other Communication Disorders, compared data collected between 1999 and 2004 with data from 2011 and 2012, the most recent available. Hearing loss in this study meant that a person could not hear, in at least one ear, a sound about as loud as rustling leaves. The researchers reported that while 15.9 percent of the population studied in the earlier period had problems hearing, just 14.1 percent of the more recent group had hearing loss. The good news is part of a continuing trend — Americans’ hearing has gotten steadily better since 1959. Most surprising to Mr. Hoffman, a statistician, was that even though the total population of 20- to 69-year-olds grew by 20 million over the time period studied — and the greatest growth was in the oldest people, a group most likely to have hearing problems — the total number of people with hearing loss fell, from 28 million to 27.7 million. Hearing experts who were not associated with the study said they were utterly convinced by the results. “It’s a fantastic paper,” said Brian Fligor, an audiologist with Lantos Technologies of Wakefield, Mass., which develops custom earpieces to protect ears from noise. “I totally believe them.” © 2016 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22996 - Posted: 12.17.2016

By CATHERINE SAINT LOUIS These days, even 3-year-olds wear headphones, and as the holidays approach, retailers are well stocked with brands that claim to be “safe for young ears” or to deliver “100 percent safe listening.” The devices limit the volume at which sound can be played; parents rely on them to prevent children from blasting, say, Rihanna at hazardous levels that could lead to hearing loss. But a new analysis by The Wirecutter, a product recommendations website owned by The New York Times, has found that half of 30 sets of children’s headphones tested did not restrict volume to the promised limit. The worst headphones produced sound so loud that it could be hazardous to ears in minutes. “These are terribly important findings,” said Cory Portnuff, a pediatric audiologist at the University of Colorado Hospital, who was not involved in the analysis. “Manufacturers are making claims that aren’t accurate.” The new analysis should be a wake-up call to parents who thought volume-limiting technology offered adequate protection, said Dr. Blake Papsin, the chief otolaryngologist at the Hospital for Sick Children in Toronto. “Headphone manufacturers aren’t interested in the health of your child’s ears,” he said. “They are interested in selling products, and some of them are not good for you.” Half of 8- to 12-year-olds listen to music daily, and nearly two-thirds of teenagers do, according to a 2015 report with more than 2,600 participants. Safe listening is a function of both volume and duration: The louder a sound, the less time you should listen to it. It’s not a linear relationship. Eighty decibels is twice as loud as 70 decibels, and 90 decibels is four times louder. Exposure to 100 decibels, about the volume of noise caused by a power lawn mower, is safe for just 15 minutes; noise at 108 decibels, however, is safe for less than three minutes. © 2016 The New York Times Company
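The volume-for-duration tradeoff quoted above matches the 3-decibel "exchange rate" used in the NIOSH occupational noise criterion: starting from a reference of 85 decibels for 8 hours, the permissible exposure time halves with every 3-decibel increase. As a rough illustration of that arithmetic (the function name and defaults below are ours, not from the article, and actual risk varies by listener and exposure pattern), a minimal Python sketch:

```python
# Safe listening time under the NIOSH-style criterion: 8 hours (480 min)
# at 85 dBA, halving with every 3 dB increase (the "3-dB exchange rate").
# This is a sketch of the arithmetic only, not medical guidance.

def safe_minutes(level_db: float, reference_db: float = 85.0,
                 reference_minutes: float = 480.0,
                 exchange_rate_db: float = 3.0) -> float:
    """Permissible exposure time in minutes at a given sound level."""
    return reference_minutes / 2 ** ((level_db - reference_db) / exchange_rate_db)

for level in (85, 94, 100, 108):
    # 100 dB -> 15.0 minutes, matching the lawn-mower figure quoted above
    print(f"{level} dB: {safe_minutes(level):.1f} min")
```

Under this model, 108 decibels comes out to roughly 2.4 minutes, consistent with the article's "safe for less than three minutes."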

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22953 - Posted: 12.06.2016

Ramin Skibba Bats sing just as birds and humans do. But how they learn their melodies is a mystery — one that scientists will try to solve by sequencing the genomes of more than 1,000 bat species. The project, called Bat 1K, was announced on 14 November at the annual meeting of the Society for Neuroscience in San Diego, California. Its organizers also hope to learn more about the flying mammals’ ability to navigate in the dark through echolocation, their strong immune systems that can shrug off Ebola and their relatively long lifespans. “The genomes of all these other species, like birds and mice, are well-understood,” says Sonja Vernes, a neurogeneticist at the Max Planck Institute for Psycholinguistics in Nijmegen, Netherlands, and co-director of the project. “But we don’t know anything about bat genes yet.” Some bats show babbling behaviour, including barks, chatter, screeches, whistles and trills, says Mirjam Knörnschild, a behavioural ecologist at Free University Berlin, Germany. Young bats learn the songs and sounds from older male tutors. They use these sounds during courtship and mating, when they retrieve food and as they defend their territory against rivals. Scientists have studied the songs of only about 50 bat species so far, Knörnschild says, and they know much less about bat communication than about birds’. Four species of bats have so far been found to learn songs from each other, their fathers and other adult males, just as a child gradually learns how to speak from its parents1. © 2016 Macmillan Publishers Limited,

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 22888 - Posted: 11.19.2016

By ARNAUD COLINART, AMAURY LA BURTHE, PETER MIDDLETON and JAMES SPINNEY “What is the world of sound?” So begins a diary entry from April 1984, recorded on audiocassette, about the nature of acoustic experience. The voice on the tape is that of the writer and theologian John Hull, who at the time of the recording had been totally blind for almost two years. After losing his sight in his mid-40s, Dr. Hull, a newlywed with a young family, had decided that blindness would destroy him if he didn’t learn to understand it. For three years he recorded his experiences of sight loss, documenting “a world beyond sight.” We first met Dr. Hull in 2011, having read his acclaimed 1991 book “Touching The Rock: An Experience of Blindness,” which was transcribed from his audio diaries. We began collaborating with him on a series of films using his original recordings. These included an Emmy-winning Op-Doc in 2014 and culminated in the feature-length documentary “Notes on Blindness.” But we were also interested in how interactive forms of storytelling might further explore Dr. Hull’s vast and detailed account — in particular how new mediums like virtual reality could illuminate his investigations into auditory experience. The diaries describe his evolving appreciation of “the breadth and depth of three-dimensional world that is revealed by sound,” the awakening of an acoustic perception of space. The sound of falling rain, he said, “brings out the contours of what is around you”; wind brings leaves and trees to life; thunder “puts a roof over your head.” This interactive experience is narrated by Dr. Hull, using extracts from his diary recordings to consider the nature of acoustic space. Binaural techniques map the myriad details of everyday life (in this case, the noises that surround Dr. Hull in a park) within a 3-D sound environment, a “panorama of music and information,” rich in color and texture. The real-time animation visualizes this multilayered soundscape in which, Dr. Hull says, “every sound is a point of activity.” © 2016 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 7: Vision: From Eye to Brain
Link ID: 22880 - Posted: 11.17.2016

By Rachel Feltman and Sarah Kaplan Dear Science, I just got a new iPhone and can't decide what kind of headphones I should be using. I read somewhere that ear buds are worse for you than headphones that fit over your ear. Is that true? I don't want to damage my hearing by using the wrong thing. Here's what science has to say: At the end of the day, nothing really matters but volume. No pair of headphones is inherently “good” or “bad” for your hearing. But picking the right headphones can help you listen to your music more responsibly. The louder a sound is, the more quickly it can cause injury to your ears. If you're not careful, a powerful sound wave can actually tear right through your delicate eardrum, but that's unlikely to happen while blasting music. Most hearing loss is the result of nerve damage, and your smartphone is more than capable of wrecking your ears that way. You can be exposed to 85 decibels — the noise of busy city traffic — pretty much all day without causing nerve damage, but things quickly become dangerous once you get louder than that. At 115 decibels, which is about the noise level produced at a rock concert or by a chain saw, nerve damage can happen in less than a minute. You might not immediately notice significant hearing loss as the result of that nerve damage, but it will add up over time. Some smartphones can crank music to 120 decibels. If you listened to an entire album at that volume, you might have noticeable hearing loss by the time you took off your headphones. According to the World Health Organization, 1.1 billion teens and young adults globally are at risk of developing hearing loss because of these “personal audio devices.” You already know the solution, folks: Turn that music down. © 1996-2016 The Washington Post

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22867 - Posted: 11.15.2016

by Helen Thompson Narwhals use highly targeted beams of sound to scan their environment for threats and food. In fact, the so-called unicorns of the sea (for their iconic head tusks) may produce the most refined sonar of any living animal. A team of researchers set up 16 underwater microphones to eavesdrop on narwhal click vocalizations at 11 ice pack sites in Greenland’s Baffin Bay in 2013. The recordings show that narwhal clicks are extremely intense and directional — meaning they can widen and narrow the beam of sound to find prey over long and short distances. It’s the most directional sonar signal measured in a living species, the researchers report November 9 in PLOS ONE. The sound beams are also asymmetrically narrow on top. That minimizes clutter from echoes bouncing off the sea surface or ice pack. Finally, narwhals scan vertically as they dive, which could help them find patches of open water where they can surface and breathe amid sea ice cover. All this means that narwhals employ pretty sophisticated sonar. The audio data could help researchers tell the difference between narwhal vocalizations and those of neighboring beluga whales. It also provides a baseline for assessing the potential impact of noise pollution from increases in shipping traffic made possible by sea ice loss. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22856 - Posted: 11.12.2016

By Jessica Boddy You’d probably never notice a jumping spider across your living room, but it would surely notice you. The arachnids are known for their brilliant eyesight, and a new study shows they have even greater sensory prowess than we thought: Jumping spiders can hear sounds even though they don’t have ears—or even eardrums. To find this out, researchers implanted tiny electrodes in a region of spiders’ brains that would show whether sound was being processed. Then they placed the spiders on a specially designed box to eliminate any vibrations from below—most spiders sense their surroundings through vibrations—and scared the heck out of them with a speaker-produced buzz of one of their predators, the mud dauber wasp. An out-of-earshot, high-frequency buzz and a silent control elicited no response from the spiders. But the 80-hertz wasp buzz made them freeze and look around, startled, just as they would do in the wild. What’s more, data from the electrodes showed a spike in brain activity with each buzz, revealing that spiders actually hear sounds, from a swooping mud dauber wasp to you crunching potato chips on your couch. The researchers, who publish their work today in Current Biology, say further study is needed to see exactly how spiders receive sounds without eardrums, but they believe sensitive hairs on their legs play a part. © 2016 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22755 - Posted: 10.15.2016

Dean Burnett You remember that time a children’s TV presenter, one who has been working in children’s television for decades and is now employed on a channel aimed at under-8-year-olds, decided to risk it all and say one of the worst possible swear words on a show for pre-schoolers that he is famous for co-hosting? Remember how he took a huge risk for no appreciable gain and uttered a context-free profanity to an audience of toddlers? How he must have wanted to swear on children’s TV but paradoxically didn’t want anyone to notice so “snuck it in” as part of a song, where it would be more ambiguous? How all the editors and regulators at the BBC happened to completely miss it and allow it to be aired? Remember this happening? Well you shouldn’t, because it clearly didn’t. No presenter and/or channel would risk their whole livelihood in such a pointless, meaningless way, especially not the ever-pressured BBC. And, yet, an alarming number of people do think it happened. Apparently, there have been some “outraged parents” who are aghast at the whole thing. This seems reasonable in some respects; if your toddler was subjected to extreme cursing then as a parent you probably would object. On the other hand, if your very small child is able to recognise strong expletives, then perhaps misheard lyrics on cheerful TV shows aren’t the most pressing issue in their life. Regardless, a surprising number of people report that they did genuinely “hear” the c-word. This is less likely to be due to a TV presenter having some sort of extremely-fleeting breakdown, and more likely due to the quirks and questionable processing of our senses by our powerful yet imperfect brains. © 2016 Guardian News and Media Limited

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Consciousness
Link ID: 22662 - Posted: 09.17.2016

By Marlene Cimons Former president Jimmy Carter, 91, told the New Yorker recently that 90 percent of the arguments he has with Rosalynn, his wife of 70 years, are about hearing. “When I tell her, ‘Please speak more loudly,’ she absolutely refuses to speak more loudly, or to look at me when she talks,” he told the magazine. In response, the former first lady, 88, declared that having to repeat things “drives me up the wall.” Yet after both went to the doctor, much to her surprise, “I found out it was me!” she said. “I was the one who was deaf.” Hearing loss is like that. It comes on gradually, often without an individual’s realizing it, and it prompts a range of social and health consequences. “You don’t just wake up with a sudden hearing loss,” says Barbara Kelley, executive director of the Hearing Loss Association of America. “It can be insidious. It can creep up on you. You start coping, or your spouse starts doing things for you, like making telephone calls.” An estimated 25 percent of Americans between ages 60 and 69 have some degree of hearing loss, according to the President’s Council of Advisors on Science and Technology. That percentage grows to more than 50 percent for those ages 70 to 79, and to almost 80 percent for individuals older than 80. That’s about 30 million people, a number likely to increase as our population ages. Behind these statistics are disturbing repercussions such as social isolation and the inability to work, travel or be physically active.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22561 - Posted: 08.16.2016

Cassie Martin Understanding sea anemones’ exceptional healing abilities may help scientists figure out how to restore hearing. Proteins that the marine invertebrates use to repair damaged cells can also repair mice’s sound-sensing cells, a new study shows. The findings provide insights into the mechanics of hearing and could lead to future treatments for traumatic hearing loss, researchers report in the Aug. 1 Journal of Experimental Biology. “This is a preliminary step, but it’s a very useful step in looking at restoring the structure and function of these damaged cells,” says Lavinia Sheets, a hearing researcher at Harvard Medical School who was not involved in the study. Tentacles of starlet sea anemones (Nematostella vectensis) are covered in tiny hairlike cells that sense vibrations in the water from prey swimming nearby. The cells are similar to sound-sensing cells found in the ears of humans and other mammals. When loud noises damage or kill these hair cells, the result can range from temporary to permanent hearing loss. Anemones’ repair proteins restore their damaged hairlike cells, but landlubbing creatures aren’t as lucky. Glen Watson, a biologist at the University of Louisiana at Lafayette, wondered if anemones’ proteins — which have previously been shown to mend similar cells in blind cave fish — might also work in mammals. |© Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 22550 - Posted: 08.12.2016