Links for Keyword: Hearing



Links 21 - 40 of 618

By Catherine Matacic Have you ever wondered why a strange piece of music can feel familiar—how it is, for example, that you can predict the next beat even though you’ve never heard the song before? Music everywhere seems to share some “universals,” from the scales it uses to the rhythms it employs. Now, scientists have shown for the first time that people without any musical training also create songs using predictable musical beats, suggesting that humans are hardwired to respond to—and produce—certain features of music. “This is an excellent and elegant paper,” says Patrick Savage, an ethnomusicologist at the Tokyo University of the Arts who was not involved in the study. “[It] shows that even musical evolution obeys some general rules [similar] to the kind that govern biological evolution.” Last year, Savage and colleagues traced that evolution by addressing a fundamental question: What aspects of music are consistent across cultures? They analyzed hundreds of musical recordings from around the world and identified 18 features that were widely shared across nine regions, including six related to rhythm. These “rhythmic universals” included a steady beat, two- or three-beat rhythms (like those in marches and waltzes), a preference for two-beat rhythms, regular weak and strong beats, a limited number of beat patterns per song, and the use of those patterns to create motifs, or riffs. “That was a really remarkable job they did,” says Andrea Ravignani, a cognitive scientist at the Vrije Universiteit Brussel in Belgium. “[It convinced me that] the time was ripe to investigate this issue of music evolution and music universals in a more empirical way.” © 2016 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22997 - Posted: 12.20.2016

By GINA KOLATA As concern rises over the effect of continuous use of headphones and earbuds on hearing, a new paper by federal researchers has found something unexpected. The prevalence of hearing loss in Americans of working age has declined. The paper, published on Thursday in the journal JAMA Otolaryngology — Head & Neck Surgery, used data from the National Health and Nutrition Examination Survey, which periodically administers health tests to a representative sample of the population. The investigators, led by Howard J. Hoffman, the director of the epidemiology and statistics program at the National Institute on Deafness and Other Communication Disorders, compared data collected between 1999 and 2004 with data from 2011 and 2012, the most recent available. Hearing loss in this study meant that a person could not hear, in at least one ear, a sound about as loud as rustling leaves. The researchers reported that while 15.9 percent of the population studied in the earlier period had problems hearing, just 14.1 percent of the more recent group had hearing loss. The good news is part of a continuing trend — Americans’ hearing has gotten steadily better since 1959. Most surprising to Mr. Hoffman, a statistician, was that even though the total population of 20- to 69-year-olds grew by 20 million over the time period studied — and the greatest growth was in the oldest people, a group most likely to have hearing problems — the total number of people with hearing loss fell, from 28 million to 27.7 million. Hearing experts who were not associated with the study said they were utterly convinced by the results. “It’s a fantastic paper,” said Brian Fligor, an audiologist with Lantos Technologies of Wakefield, Mass., which develops custom earpieces to protect ears from noise. “I totally believe them.” © 2016 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22996 - Posted: 12.17.2016

By CATHERINE SAINT LOUIS These days, even 3-year-olds wear headphones, and as the holidays approach, retailers are well stocked with brands that claim to be “safe for young ears” or to deliver “100 percent safe listening.” The devices limit the volume at which sound can be played; parents rely on them to prevent children from blasting, say, Rihanna at hazardous levels that could lead to hearing loss. But a new analysis by The Wirecutter, a product recommendations website owned by The New York Times, has found that half of 30 sets of children’s headphones tested did not restrict volume to the promised limit. The worst headphones produced sound so loud that it could be hazardous to ears in minutes. “These are terribly important findings,” said Cory Portnuff, a pediatric audiologist at the University of Colorado Hospital, who was not involved in the analysis. “Manufacturers are making claims that aren’t accurate.” The new analysis should be a wake-up call to parents who thought volume-limiting technology offered adequate protection, said Dr. Blake Papsin, the chief otolaryngologist at the Hospital for Sick Children in Toronto. “Headphone manufacturers aren’t interested in the health of your child’s ears,” he said. “They are interested in selling products, and some of them are not good for you.” Half of 8- to 12-year-olds listen to music daily, and nearly two-thirds of teenagers do, according to a 2015 report with more than 2,600 participants. Safe listening is a function of both volume and duration: The louder a sound, the less time you should listen to it. It’s not a linear relationship. Eighty decibels is twice as loud as 70 decibels, and 90 decibels is four times louder. Exposure to 100 decibels, about the volume of noise caused by a power lawn mower, is safe for just 15 minutes; noise at 108 decibels, however, is safe for less than three minutes. © 2016 The New York Times Company
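The volume-versus-duration trade-off described above can be modeled with a simple exchange-rate rule. The article does not name its source, but its figures match the NIOSH recommendation (assumed here): 85 dB is considered safe for 8 hours, and every additional 3 dB halves the allowed listening time. A minimal Python sketch of that model:

```python
def safe_exposure_minutes(level_db, ref_db=85.0, ref_minutes=480.0, exchange_db=3.0):
    """Allowed listening time under a simple exchange-rate model:
    each `exchange_db` increase above `ref_db` halves the allowed time.
    Defaults follow the NIOSH-style rule: 85 dB for 8 hours, 3-dB exchange rate.
    """
    return ref_minutes / (2.0 ** ((level_db - ref_db) / exchange_db))

for level in (85, 100, 108):
    print(f"{level} dB: {safe_exposure_minutes(level):.1f} min")
```

With those assumed defaults the model reproduces the article's figures: exactly 15 minutes at 100 dB (the power-mower level) and roughly 2.4 minutes at 108 dB, i.e. "less than three minutes."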

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22953 - Posted: 12.06.2016

Ramin Skibba Bats sing just as birds and humans do. But how they learn their melodies is a mystery — one that scientists will try to solve by sequencing the genomes of more than 1,000 bat species. The project, called Bat 1K, was announced on 14 November at the annual meeting of the Society for Neuroscience in San Diego, California. Its organizers also hope to learn more about the flying mammals’ ability to navigate in the dark through echolocation, their strong immune systems that can shrug off Ebola and their relatively long lifespans. “The genomes of all these other species, like birds and mice, are well-understood,” says Sonja Vernes, a neurogeneticist at the Max Planck Institute for Psycholinguistics in Nijmegen, Netherlands, and co-director of the project. “But we don’t know anything about bat genes yet.” Some bats show babbling behaviour, including barks, chatter, screeches, whistles and trills, says Mirjam Knörnschild, a behavioural ecologist at Free University Berlin, Germany. Young bats learn the songs and sounds from older male tutors. They use these sounds during courtship and mating, when they retrieve food and as they defend their territory against rivals. Scientists have studied the songs of only about 50 bat species so far, Knörnschild says, and they know much less about bat communication than about birds’. Four species of bats have so far been found to learn songs from each other, their fathers and other adult males, just as a child gradually learns how to speak from its parents. © 2016 Macmillan Publishers Limited.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 22888 - Posted: 11.19.2016

By ARNAUD COLINART, AMAURY LA BURTHE, PETER MIDDLETON and JAMES SPINNEY “What is the world of sound?” So begins a diary entry from April 1984, recorded on audiocassette, about the nature of acoustic experience. The voice on the tape is that of the writer and theologian John Hull, who at the time of the recording had been totally blind for almost two years. After losing his sight in his mid-40s, Dr. Hull, a newlywed with a young family, had decided that blindness would destroy him if he didn’t learn to understand it. For three years he recorded his experiences of sight loss, documenting “a world beyond sight.” We first met Dr. Hull in 2011, having read his acclaimed 1991 book “Touching The Rock: An Experience of Blindness,” which was transcribed from his audio diaries. We began collaborating with him on a series of films using his original recordings. These included an Emmy-winning Op-Doc in 2014 and culminated in the feature-length documentary “Notes on Blindness.” But we were also interested in how interactive forms of storytelling might further explore Dr. Hull’s vast and detailed account — in particular how new mediums like virtual reality could illuminate his investigations into auditory experience. The diaries describe his evolving appreciation of “the breadth and depth of three-dimensional world that is revealed by sound,” the awakening of an acoustic perception of space. The sound of falling rain, he said, “brings out the contours of what is around you”; wind brings leaves and trees to life; thunder “puts a roof over your head.” This interactive experience is narrated by Dr. Hull, using extracts from his diary recordings to consider the nature of acoustic space. Binaural techniques map the myriad details of everyday life (in this case, the noises that surround Dr. Hull in a park) within a 3-D sound environment, a “panorama of music and information,” rich in color and texture. The real-time animation visualizes this multilayered soundscape in which, Dr. 
Hull says, “every sound is a point of activity.” © 2016 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 7: Vision: From Eye to Brain
Link ID: 22880 - Posted: 11.17.2016

By Rachel Feltman and Sarah Kaplan Dear Science, I just got a new iPhone and can't decide what kind of headphones I should be using. I read somewhere that ear buds are worse for you than headphones that fit over your ear. Is that true? I don't want to damage my hearing by using the wrong thing. Here's what science has to say: At the end of the day, nothing really matters but volume. No pair of headphones is inherently “good” or “bad” for your hearing. But picking the right headphones can help you listen to your music more responsibly. The louder a sound is, the more quickly it can cause injury to your ears. If you're not careful, a powerful sound wave can actually tear right through your delicate eardrum, but that's unlikely to happen while blasting music. Most hearing loss is the result of nerve damage, and your smartphone is more than capable of wrecking your ears that way. You can be exposed to 85 decibels — the noise of busy city traffic — pretty much all day without causing nerve damage, but things quickly become dangerous once you get louder than that. At 115 decibels, which is about the noise level produced at a rock concert or by a chain saw, nerve damage can happen in less than a minute. You might not immediately notice significant hearing loss as the result of that nerve damage, but it will add up over time. Some smartphones can crank music to 120 decibels. If you listened to an entire album at that volume, you might have noticeable hearing loss by the time you took off your headphones. According to the World Health Organization, 1.1 billion teens and young adults globally are at risk of developing hearing loss because of these “personal audio devices.” You already know the solution, folks: Turn that music down. © 1996-2016 The Washington Post
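Why does louder injure faster? Decibels are logarithmic: each 10 dB step corresponds to a tenfold increase in sound intensity, so the gap between all-day-safe 85 dB and concert-level 115 dB is a factor of 1,000 in acoustic intensity. A quick sketch of the conversion (the specific levels are the article's; the formula is the standard decibel definition):

```python
def intensity_ratio(db_a, db_b):
    """Ratio of acoustic intensities corresponding to two sound levels
    in decibels. By definition dB = 10 * log10(I / I_ref), so the
    intensity ratio between two levels is 10 ** (difference / 10)."""
    return 10.0 ** ((db_a - db_b) / 10.0)

# 115 dB (rock concert) vs. 85 dB (busy city traffic)
print(intensity_ratio(115, 85))   # → 1000.0
# 120 dB (a smartphone at maximum volume) vs. 85 dB: over 3,000x
print(intensity_ratio(120, 85))
```

Note that this is physical intensity, not perceived loudness; loudness roughly doubles per 10 dB, which is why a 1,000-fold intensity jump does not sound 1,000 times louder.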

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22867 - Posted: 11.15.2016

by Helen Thompson Narwhals use highly targeted beams of sound to scan their environment for threats and food. In fact, the so-called unicorns of the sea (for their iconic head tusks) may produce the most refined sonar of any living animal. A team of researchers set up 16 underwater microphones to eavesdrop on narwhal click vocalizations at 11 ice pack sites in Greenland’s Baffin Bay in 2013. The recordings show that narwhal clicks are extremely intense and directional — meaning they can widen and narrow the beam of sound to find prey over long and short distances. It’s the most directional sonar signal measured in a living species, the researchers report November 9 in PLOS ONE. The sound beams are also asymmetrically narrow on top. That minimizes clutter from echoes bouncing off the sea surface or ice pack. Finally, narwhals scan vertically as they dive, which could help them find patches of open water where they can surface and breathe amid sea ice cover. All this means that narwhals employ pretty sophisticated sonar. The audio data could help researchers tell the difference between narwhal vocalizations and those of neighboring beluga whales. It also provides a baseline for assessing the potential impact of noise pollution from increases in shipping traffic made possible by sea ice loss. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22856 - Posted: 11.12.2016

By Jessica Boddy You’d probably never notice a jumping spider across your living room, but it would surely notice you. The arachnids are known for their brilliant eyesight, and a new study shows they have even greater sensory prowess than we thought: Jumping spiders can hear sounds even though they don’t have ears—or even eardrums. To find this out, researchers implanted tiny electrodes in a region of spiders’ brains that would show whether sound was being processed. Then they placed the spiders on a specially designed box to eliminate any vibrations from below—most spiders sense their surroundings through vibrations—and scared the heck out of them with a speaker-produced buzz of one of their predators, the mud dauber wasp. An out-of-earshot, high-frequency buzz and a silent control elicited no response from the spiders. But the 80-hertz wasp buzz made them freeze and look around, startled, just as they would do in the wild. What’s more, data from the electrodes showed a spike in brain activity with each buzz, revealing that spiders actually hear sounds, from a swooping mud dauber wasp to you crunching potato chips on your couch. The researchers, who publish their work today in Current Biology, say further study is needed to see exactly how spiders receive sounds without eardrums, but they believe sensitive hairs on their legs play a part. © 2016 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22755 - Posted: 10.15.2016

Dean Burnett You remember that time a children’s TV presenter, one who has been working in children’s television for decades and is now employed on a channel aimed at under-8-year-olds, decided to risk it all and say one of the worst possible swear words on a show for pre-schoolers that he is famous for co-hosting? Remember how he took a huge risk for no appreciable gain and uttered a context-free profanity to an audience of toddlers? How he must have wanted to swear on children’s TV but paradoxically didn’t want anyone to notice so “snuck it in” as part of a song, where it would be more ambiguous? How all the editors and regulators at the BBC happened to completely miss it and allow it to be aired? Remember this happening? Well you shouldn’t, because it clearly didn’t. No presenter and/or channel would risk their whole livelihood in such a pointless, meaningless way, especially not the ever-pressured BBC. And, yet, an alarming number of people do think it happened. Apparently, there have been some “outraged parents” who are aghast at the whole thing. This seems reasonable in some respects; if your toddler was subjected to extreme cursing then as a parent you probably would object. On the other hand, if your very small child is able to recognise strong expletives, then perhaps misheard lyrics on cheerful TV shows aren’t the most pressing issue in their life. Regardless, a surprising number of people report that they did genuinely “hear” the c-word. This is less likely to be due to a TV presenter having some sort of extremely-fleeting breakdown, and more likely due to the quirks and questionable processing of our senses by our powerful yet imperfect brains. © 2016 Guardian News and Media Limited

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Consciousness
Link ID: 22662 - Posted: 09.17.2016

By Marlene Cimons Former president Jimmy Carter, 91, told the New Yorker recently that 90 percent of the arguments he has with Rosalynn, his wife of 70 years, are about hearing. “When I tell her, ‘Please speak more loudly,’ she absolutely refuses to speak more loudly, or to look at me when she talks,” he told the magazine. In response, the former first lady, 88, declared that having to repeat things “drives me up the wall.” Yet after both went to the doctor, much to her surprise, “I found out it was me!” she said. “I was the one who was deaf.” Hearing loss is like that. It comes on gradually, often without an individual’s realizing it, and it prompts a range of social and health consequences. “You don’t just wake up with a sudden hearing loss,” says Barbara Kelley, executive director of the Hearing Loss Association of America. “It can be insidious. It can creep up on you. You start coping, or your spouse starts doing things for you, like making telephone calls.” An estimated 25 percent of Americans between ages 60 and 69 have some degree of hearing loss, according to the President’s Council of Advisors on Science and Technology. That percentage grows to more than 50 percent for those age 70 to 79, and to almost 80 percent of individuals older than 80. That’s about 30 million people, a number likely to increase as our population ages. Behind these statistics are disturbing repercussions such as social isolation and the inability to work, travel or be physically active.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22561 - Posted: 08.16.2016

Cassie Martin Understanding sea anemones’ exceptional healing abilities may help scientists figure out how to restore hearing. Proteins that the marine invertebrates use to repair damaged cells can also repair mice’s sound-sensing cells, a new study shows. The findings provide insights into the mechanics of hearing and could lead to future treatments for traumatic hearing loss, researchers report in the Aug. 1 Journal of Experimental Biology. “This is a preliminary step, but it’s a very useful step in looking at restoring the structure and function of these damaged cells,” says Lavinia Sheets, a hearing researcher at Harvard Medical School who was not involved in the study. Tentacles of starlet sea anemones (Nematostella vectensis) are covered in tiny hairlike cells that sense vibrations in the water from prey swimming nearby. The cells are similar to sound-sensing cells found in the ears of humans and other mammals. When loud noises damage or kill these hair cells, the result can range from temporary to permanent hearing loss. Anemones’ repair proteins restore their damaged hairlike cells, but landlubbing creatures aren’t as lucky. Glen Watson, a biologist at the University of Louisiana at Lafayette, wondered if anemones’ proteins — which have previously been shown to mend similar cells in blind cave fish — might also work in mammals. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 22550 - Posted: 08.12.2016

Helen Thompson A roughly 27-million-year-old fossilized skull echoes growing evidence that ancient whales could navigate using high-frequency sound. Discovered over a decade ago in a drainage ditch by an amateur fossil hunter on the South Carolina coast, the skull belongs to an early toothed whale. The fossil is so well-preserved that it includes rare inner ear bones similar to those found in modern whales and dolphins. Inspired by the Latin for “echo hunter,” scientists have now named the ancient whale Echovenator sandersi. “It suggests that the earliest toothed whales could hear high-frequency sounds,” which is essential for echolocation, says Morgan Churchill, an anatomist at the New York Institute of Technology in Old Westbury. Churchill and his colleagues describe the specimen online August 4 in Current Biology. Modern whales are divided on the sound spectrum. Toothed whales, such as orcas and porpoises, use high-frequency clicking sounds to sense predators and prey. Filter-feeding baleen whales, on the other hand, use low-frequency sound for long-distance communication. Around 35 million years ago, the two groups split, and E. sandersi emerged soon after. CT scans show that E. sandersi had a few features indicative of ultrasonic hearing in modern whales and dolphins. Most importantly, it had a spiraling inner ear bone with wide curves and a long bony support structure, both of which allow a greater sensitivity to higher-frequency sound. A small nerve canal probably transmitted sound signals to the brain. © Society for Science & the Public 2000 - 2016. All rights reserved.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22523 - Posted: 08.06.2016

By Maggie Koerth-Baker Q: I want to hear what the loudest thing in the world is! — Kara Jo, age 5 No. No, you really don’t. See, there’s this thing about sound that even we grown-ups tend to forget — it’s not some glitter rainbow floating around with no connection to the physical world. Sound is mechanical. A sound is a shove — just a little one, a tap on the tightly stretched membrane of your ear drum. The louder the sound, the heavier the knock. If a sound is loud enough, it can rip a hole in your ear drum. If a sound is loud enough, it can plow into you like a linebacker and knock you flat on your butt. When the shock wave from a bomb levels a house, that’s sound tearing apart bricks and splintering glass. Sound can kill you. Consider this piece of history: On the morning of Aug. 27, 1883, ranchers on a sheep camp outside Alice Springs, Australia, heard a sound like two shots from a rifle. At that very moment, the Indonesian volcanic island of Krakatoa was blowing itself to bits 2,233 miles away. Scientists think this is probably the loudest sound humans have ever accurately measured. Not only are there records of people hearing the sound of Krakatoa thousands of miles away, there is also physical evidence that the sound of the volcano’s explosion traveled all the way around the globe multiple times. Now, nobody heard Krakatoa in England or Toronto. There wasn’t a “boom” audible in St. Petersburg. Instead, what those places recorded were spikes in atmospheric pressure — the very air tensing up and then releasing with a sigh, as the waves of sound from Krakatoa passed through. There are two important lessons about sound in there: One, you don’t have to be able to see the loudest thing in the world in order to hear it. Second, just because you can’t hear a sound doesn’t mean it isn’t there. Sound is powerful and pervasive and it surrounds us all the time, whether we’re aware of it or not.
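One detail the anecdote glosses over: sound travels at a finite speed, so the boom that reached Alice Springs actually left Krakatoa hours before the ranchers heard it, and the globe-circling pressure wave took more than a day per lap. A back-of-the-envelope sketch (assuming ~343 m/s, the speed of sound in air at room temperature, and Earth's equatorial circumference of about 24,901 miles; both figures are my assumptions, not the article's):

```python
SPEED_OF_SOUND_M_PER_S = 343.0   # in air at ~20 °C (assumed)
MILES_TO_METERS = 1609.344

def travel_hours(miles, speed=SPEED_OF_SOUND_M_PER_S):
    """Hours for sound to cover a distance given in miles."""
    return miles * MILES_TO_METERS / speed / 3600.0

print(f"Krakatoa to Alice Springs: {travel_hours(2233):.1f} h")
print(f"Once around the globe:     {travel_hours(24901):.1f} h")
```

At these assumed values the 2,233-mile trip takes about three hours, and one circuit of the globe takes roughly a day and a half, which is consistent with barographs recording the pressure wave's repeated passes over several days.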

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22453 - Posted: 07.19.2016

Paula Span An estimated one zillion older people have a problem like mine. First: We notice age-related hearing loss. A much-anticipated report on hearing health from the National Academies of Sciences, Engineering and Medicine last month put the prevalence at more than 45 percent of those aged 70 to 74, and more than 80 percent among those over 85. Then: We do little or nothing about it. Fewer than 20 percent of those with hearing loss use hearing aids. I’ve written before about the reasons. High prices ($2,500 and up for a decent hearing aid, and most people need two). Lack of Medicare reimbursement, because the original 1965 law creating Medicare prohibits coverage. Time and hassle. Stigma. Both the National Academies and the influential President’s Council of Advisors on Science and Technology have proposed pragmatic steps to make hearing technology more accessible and affordable. But until there’s progress on those, many of us with mild to moderate hearing loss may consider a relatively inexpensive alternative: personal sound amplification products, or P.S.A.P.s. They offer some promise — and some perils, too. Unlike for a hearing aid, you don’t need an audiologist to obtain a P.S.A.P. You see these gizmos advertised on the back pages of magazines or on sale at drugstore chains. You can buy them online. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22449 - Posted: 07.16.2016

Ramin Skibba Is Justin Bieber a musical genius or a talentless hack? What you 'belieb' depends on your cultural experiences. Some people like to listen to the Beatles, while others prefer Gregorian chants. When it comes to music, scientists find that nurture can trump nature. Musical preferences seem to be mainly shaped by a person’s cultural upbringing and experiences rather than biological factors, according to a study published on 13 July in Nature. “Our results show that there is a profound cultural difference” in the way people respond to consonant and dissonant sounds, says Josh McDermott, a cognitive scientist at the Massachusetts Institute of Technology in Cambridge and lead author of the paper. This suggests that other cultures hear the world differently, he adds. The study is one of the first to put an age-old argument to the test. Some scientists believe that the way people respond to music has a biological basis, because pitches that people often like have particular interval ratios. They argue that this would trump any cultural shaping of musical preferences, effectively making them a universal phenomenon. Ethnomusicologists and music composers, by contrast, think that such preferences are more a product of one’s culture. If a person’s upbringing shapes their preferences, then they are not a universal phenomenon. © 2016 Macmillan Publishers Limited.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 22439 - Posted: 07.14.2016

By Michael Price The blind comic book star Daredevil has a highly developed sense of hearing that allows him to “see” his environment with his ears. But you don’t need to be a superhero to pull a similar stunt, according to a new study. Researchers have identified the neural architecture used by the brain to turn subtle sounds into a mind’s-eye map of your surroundings. The study appears to be “very solid work,” says Lore Thaler, a psychologist at Durham University in the United Kingdom who studies echolocation, the ability of bats and other animals to use sound to locate objects. Everyone has an instinctive sense of the world around them—even if they can’t always see it, says Santani Teng, a postdoctoral researcher at the Massachusetts Institute of Technology (MIT) in Cambridge who studies auditory perception in both blind and sighted people. “We all kind of have that intuition,” says Teng over the phone. “For instance, you can tell I’m not in a gymnasium right now. I’m in a smaller space, like an office.” That office belongs to Aude Oliva, principal research scientist for MIT’s Computational Perception & Cognition laboratory. She and Teng, along with two other colleagues, wanted to quantify how well people can use sounds to judge the size of the room around them, and whether that ability could be detected in the brain. © 2016 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22427 - Posted: 07.12.2016

By Brian Platzer It started in 2010 when I smoked pot for the first time since college. It was cheap, gristly weed I’d had in my freezer for nearly six years, but four hours after taking one hit I was still so dizzy I couldn’t stand up without holding on to the furniture. The next day I was still dizzy, and the next, and the next, but it tapered off gradually until about a month later I was mostly fine. Over the following year I got married, started teaching seventh and eighth grade, and began work on a novel. Every week or so the disequilibrium sneaked up on me. The feeling was one of disorientation as much as dizziness, with some cloudy vision, light nausea and the sensation of being overwhelmed by my surroundings. During one eighth-grade English class, when I turned around to write on the blackboard, I stumbled and couldn’t stabilize myself. I fell in front of my students and was too disoriented to stand. My students stared at me slumped on the floor until I mustered enough focus to climb up to a chair and did my best to laugh it off. I was only 29, but my father had had a benign brain tumor around the same age, so I had a brain scan. My brain appeared to be fine. A neurologist recommended I see an ear, nose and throat specialist. A technician flooded my ear canal with water to see if my acoustic nerve reacted properly. The doctor suspected either benign positional vertigo (dizziness caused by a small piece of bonelike calcium stuck in the inner ear) or Ménière’s disease (which leads to dizziness from pressure). Unfortunately, the test showed my inner ear was most likely fine. But just as the marijuana had triggered the dizziness the year before, the test itself catalyzed the dizziness now. In spite of the negative results, doctors still believed I had an inner ear problem. They prescribed exercises to unblock crystals, and salt pills and then prednisone to fight Ménière’s disease. All this took months, and I continued to be dizzy, all day, every day. 
It felt as though I woke up every morning having already drunk a dozen beers — some days, depending on how active and stressful my day was, it felt like much more. Most days ended with me in tears. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 22318 - Posted: 06.14.2016

Meghan Rosen SALT LAKE CITY — In the Indian Ocean off the coast of Sri Lanka, pygmy blue whales are changing their tune — and they might be doing it on purpose. From 2002 to 2012, the frequency of one part of the whales’ calls steadily fell, marine bioacoustician Jennifer Miksis-Olds reported May 25 at a meeting of the Acoustical Society of America. But unexpectedly, another part of the whales’ call stayed the same, she found. “I’ve never seen results like this before,” says marine bioacoustician Leanna Matthews of Syracuse University in New York, who was not involved with the work. Miksis-Olds’ findings add a new twist to current theories about blue whale vocalizations and spark all sorts of questions about what the animals are doing, Matthews said. “It’s a huge mystery.” Over the last 40 to 50 years, the calls of blue whales around the world have been getting deeper. Researchers have reported frequency drops in blue whale populations from the Arctic Ocean to the North Pacific. Some researchers think that blue whales are just getting bigger, said Miksis-Olds, of the University of New Hampshire in Durham. Whaling isn’t as common as it used to be, so whales have been able to grow larger — and larger whales have deeper calls. Another theory blames whales’ changing calls on an increasingly noisy ocean. Whales could be automatically adjusting their calls to be heard better, kind of like a person raising their voice to speak at a party, she said. If the whales were just getting bigger, you’d expect all components of the calls to be deeper, said acoustics researcher Pasquale Bottalico at Michigan State University in East Lansing. But the new data don’t support that, he said. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22280 - Posted: 06.04.2016

Amy McDermott Giant pandas have better ears than people — and polar bears. Pandas can hear surprisingly high frequencies, conservation biologist Megan Owen of the San Diego Zoo and colleagues report in the April Global Ecology and Conservation. The scientists played a range of tones for five zoo pandas trained to nose a target in response to sound. Training, which took three to six months for each animal, demanded serious focus and patience, says Owen, who called the effort “a lot to ask of a bear.” Both males and females heard into the range of a “silent” ultrasonic dog whistle. Polar bears, the only other bears scientists have tested, are less sensitive to sounds at or above 14 kilohertz. Researchers still don’t know why pandas have ultrasonic hearing. The bears are a vocal bunch, but their chirps and other calls have never been recorded at ultrasonic levels, Owen says. Great hearing may be a holdover from the bears’ ancient past.

Citations: M.A. Owen et al. Hearing sensitivity in context: Conservation implications for a highly vocal endangered species. Global Ecology and Conservation. Vol. 6, April 2016, p. 121. doi: 10.1016/j.gecco.2016.02.007. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22269 - Posted: 06.01.2016

By Helen Thompson In hunting down delicious fish, Flipper may have a secret weapon: snot. Dolphins emit a series of quick, high-frequency sounds — probably by forcing air over tissues in the nasal passage — to find and track potential prey. “It’s kind of like making a raspberry,” says Aaron Thode of the Scripps Institution of Oceanography in San Diego. Thode and colleagues tweaked a human speech modeling technique to reproduce dolphin sounds and discern the intricacies of their unique style of sound production. He presented the results on May 24 in Salt Lake City at the annual meeting of the Acoustical Society of America. Dolphin chirps have two parts: a thump and a ring. Their model worked on the assumption that lumps of tissue bumping together produce the thump, and those tissues pulling apart produce the ring. But to match the high frequencies of live bottlenose dolphins, the researchers had to make the surfaces of those tissues sticky. That suggests that mucus lining the nasal passage tissue is crucial to dolphin sonar. The vocal model also successfully mimicked whistling noises used to communicate with other dolphins and faulty clicks that probably result from inadequate snot. Such techniques could be adapted to study sound production or echolocation in sperm whales and other dolphin relatives. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22244 - Posted: 05.25.2016