Links for Keyword: Hearing



Links 141 - 160 of 735

Ramin Skibba Bats sing just as birds and humans do. But how they learn their melodies is a mystery — one that scientists will try to solve by sequencing the genomes of more than 1,000 bat species. The project, called Bat 1K, was announced on 14 November at the annual meeting of the Society for Neuroscience in San Diego, California. Its organizers also hope to learn more about the flying mammals’ ability to navigate in the dark through echolocation, their strong immune systems that can shrug off Ebola and their relatively long lifespans. “The genomes of all these other species, like birds and mice, are well-understood,” says Sonja Vernes, a neurogeneticist at the Max Planck Institute for Psycholinguistics in Nijmegen, Netherlands, and co-director of the project. “But we don’t know anything about bat genes yet.” Some bats show babbling behaviour, including barks, chatter, screeches, whistles and trills, says Mirjam Knörnschild, a behavioural ecologist at Free University Berlin, Germany. Young bats learn the songs and sounds from older male tutors. They use these sounds during courtship and mating, when they retrieve food and as they defend their territory against rivals. Scientists have studied the songs of only about 50 bat species so far, Knörnschild says, and they know much less about bat communication than about birds’. Four species of bats have so far been found to learn songs from each other, their fathers and other adult males, just as a child gradually learns how to speak from its parents [1]. © 2016 Macmillan Publishers Limited

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 22888 - Posted: 11.19.2016

By ARNAUD COLINART, AMAURY LA BURTHE, PETER MIDDLETON and JAMES SPINNEY “What is the world of sound?” So begins a diary entry from April 1984, recorded on audiocassette, about the nature of acoustic experience. The voice on the tape is that of the writer and theologian John Hull, who at the time of the recording had been totally blind for almost two years. After losing his sight in his mid-40s, Dr. Hull, a newlywed with a young family, had decided that blindness would destroy him if he didn’t learn to understand it. For three years he recorded his experiences of sight loss, documenting “a world beyond sight.” We first met Dr. Hull in 2011, having read his acclaimed 1991 book “Touching The Rock: An Experience of Blindness,” which was transcribed from his audio diaries. We began collaborating with him on a series of films using his original recordings. These included an Emmy-winning Op-Doc in 2014 and culminated in the feature-length documentary “Notes on Blindness.” But we were also interested in how interactive forms of storytelling might further explore Dr. Hull’s vast and detailed account — in particular how new mediums like virtual reality could illuminate his investigations into auditory experience. The diaries describe his evolving appreciation of “the breadth and depth of three-dimensional world that is revealed by sound,” the awakening of an acoustic perception of space. The sound of falling rain, he said, “brings out the contours of what is around you”; wind brings leaves and trees to life; thunder “puts a roof over your head.” This interactive experience is narrated by Dr. Hull, using extracts from his diary recordings to consider the nature of acoustic space. Binaural techniques map the myriad details of everyday life (in this case, the noises that surround Dr. Hull in a park) within a 3-D sound environment, a “panorama of music and information,” rich in color and texture. The real-time animation visualizes this multilayered soundscape in which, Dr. Hull says, “every sound is a point of activity.” © 2016 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 7: Vision: From Eye to Brain
Link ID: 22880 - Posted: 11.17.2016

By Rachel Feltman and Sarah Kaplan Dear Science, I just got a new iPhone and can't decide what kind of headphones I should be using. I read somewhere that ear buds are worse for you than headphones that fit over your ear. Is that true? I don't want to damage my hearing by using the wrong thing. Here's what science has to say: At the end of the day, nothing really matters but volume. No pair of headphones is inherently “good” or “bad” for your hearing. But picking the right headphones can help you listen to your music more responsibly. The louder a sound is, the more quickly it can cause injury to your ears. If you're not careful, a powerful sound wave can actually tear right through your delicate eardrum, but that's unlikely to happen while blasting music. Most hearing loss is the result of nerve damage, and your smartphone is more than capable of wrecking your ears that way. You can be exposed to 85 decibels — the noise of busy city traffic — pretty much all day without causing nerve damage, but things quickly become dangerous once you get louder than that. At 115 decibels, which is about the noise level produced at a rock concert or by a chain saw, nerve damage can happen in less than a minute. You might not immediately notice significant hearing loss as the result of that nerve damage, but it will add up over time. Some smartphones can crank music to 120 decibels. If you listened to an entire album at that volume, you might have noticeable hearing loss by the time you took off your headphones. According to the World Health Organization, 1.1 billion teens and young adults globally are at risk of developing hearing loss because of these “personal audio devices.” You already know the solution, folks: Turn that music down. © 1996-2016 The Washington Post
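The arithmetic behind those figures follows a common rule of thumb from occupational hearing-safety guidelines: roughly 85 decibels is considered tolerable for about eight hours, and every 3-decibel increase halves the allowable listening time. The short sketch below illustrates that relationship; the function name and example levels are illustrative assumptions, not figures taken from the article.

```python
# Rough sketch of the exposure arithmetic, assuming the common occupational
# guideline of 85 dB for 8 hours with a 3 dB exchange rate (each 3 dB increase
# halves the allowable listening time). Figures are approximate rules of thumb.

def safe_exposure_hours(level_db, reference_db=85.0,
                        reference_hours=8.0, exchange_rate_db=3.0):
    """Allowable daily exposure, in hours, at a given sound level in dB."""
    return reference_hours / (2 ** ((level_db - reference_db) / exchange_rate_db))

if __name__ == "__main__":
    for level in (85, 100, 115, 120):  # city traffic ... rock concert ... loud phone
        hours = safe_exposure_hours(level)
        if hours >= 1:
            print(f"{level} dB: about {hours:.1f} hours")
        else:
            print(f"{level} dB: about {hours * 3600:.0f} seconds")
```

Under this rule of thumb, 115 dB allows well under a minute of listening and 120 dB only a few seconds, consistent with the figures quoted above.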

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22867 - Posted: 11.15.2016

by Helen Thompson Narwhals use highly targeted beams of sound to scan their environment for threats and food. In fact, the so-called unicorns of the sea (for their iconic head tusks) may produce the most refined sonar of any living animal. A team of researchers set up 16 underwater microphones to eavesdrop on narwhal click vocalizations at 11 ice pack sites in Greenland’s Baffin Bay in 2013. The recordings show that narwhal clicks are extremely intense and directional — meaning they can widen and narrow the beam of sound to find prey over long and short distances. It’s the most directional sonar signal measured in a living species, the researchers report November 9 in PLOS ONE. The sound beams are also asymmetrically narrow on top. That minimizes clutter from echoes bouncing off the sea surface or ice pack. Finally, narwhals scan vertically as they dive, which could help them find patches of open water where they can surface and breathe amid sea ice cover. All this means that narwhals employ pretty sophisticated sonar. The audio data could help researchers tell the difference between narwhal vocalizations and those of neighboring beluga whales. It also provides a baseline for assessing the potential impact of noise pollution from increases in shipping traffic made possible by sea ice loss. © Society for Science & the Public 2000 - 2016.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22856 - Posted: 11.12.2016

By Jessica Boddy You’d probably never notice a jumping spider across your living room, but it would surely notice you. The arachnids are known for their brilliant eyesight, and a new study shows they have even greater sensory prowess than we thought: Jumping spiders can hear sounds even though they don’t have ears—or even eardrums. To find this out, researchers implanted tiny electrodes in a region of spiders’ brains that would show whether sound was being processed. Then they placed the spiders on a specially designed box to eliminate any vibrations from below—most spiders sense their surroundings through vibrations—and scared the heck out of them with a speaker-produced buzz of one of their predators, the mud dauber wasp. An out-of-earshot, high-frequency buzz and a silent control elicited no response from the spiders. But the 80-hertz wasp buzz made them freeze and look around, startled, just as they would do in the wild. What’s more, data from the electrodes showed a spike in brain activity with each buzz, revealing that spiders actually hear sounds, from a swooping mud dauber wasp to you crunching potato chips on your couch. The researchers, who publish their work today in Current Biology, say further study is needed to see exactly how spiders receive sounds without eardrums, but they believe sensitive hairs on their legs play a part. © 2016 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22755 - Posted: 10.15.2016

Dean Burnett You remember that time a children’s TV presenter, one who has been working in children’s television for decades and is now employed on a channel aimed at under-8-year-olds, decided to risk it all and say one of the worst possible swear words on a show for pre-schoolers that he is famous for co-hosting? Remember how he took a huge risk for no appreciable gain and uttered a context-free profanity to an audience of toddlers? How he must have wanted to swear on children’s TV but paradoxically didn’t want anyone to notice so “snuck it in” as part of a song, where it would be more ambiguous? How all the editors and regulators at the BBC happened to completely miss it and allow it to be aired? Remember this happening? Well you shouldn’t, because it clearly didn’t. No presenter and/or channel would risk their whole livelihood in such a pointless, meaningless way, especially not the ever-pressured BBC. And, yet, an alarming number of people do think it happened. Apparently, there have been some “outraged parents” who are aghast at the whole thing. This seems reasonable in some respects; if your toddler was subjected to extreme cursing then as a parent you probably would object. On the other hand, if your very small child is able to recognise strong expletives, then perhaps misheard lyrics on cheerful TV shows aren’t the most pressing issue in their life. Regardless, a surprising number of people report that they did genuinely “hear” the c-word. This is less likely to be due to a TV presenter having some sort of extremely-fleeting breakdown, and more likely due to the quirks and questionable processing of our senses by our powerful yet imperfect brains. © 2016 Guardian News and Media Limited

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Higher Cognition
Link ID: 22662 - Posted: 09.17.2016

By Marlene Cimons Former president Jimmy Carter, 91, told the New Yorker recently that 90 percent of the arguments he has with Rosalynn, his wife of 70 years, are about hearing. “When I tell her, ‘Please speak more loudly,’ she absolutely refuses to speak more loudly, or to look at me when she talks,” he told the magazine. In response, the former first lady, 88, declared that having to repeat things “drives me up the wall.” Yet after both went to the doctor, much to her surprise, “I found out it was me!” she said. “I was the one who was deaf.” Hearing loss is like that. It comes on gradually, often without an individual’s realizing it, and it prompts a range of social and health consequences. “You don’t just wake up with a sudden hearing loss,” says Barbara Kelley, executive director of the Hearing Loss Association of America. “It can be insidious. It can creep up on you. You start coping, or your spouse starts doing things for you, like making telephone calls.” An estimated 25 percent of Americans between ages 60 and 69 have some degree of hearing loss, according to the President’s Council of Advisors on Science and Technology. That percentage grows to more than 50 percent for those age 70 to 79, and to almost 80 percent of individuals older than 80. That’s about 30 million people, a number likely to increase as our population ages. Behind these statistics are disturbing repercussions such as social isolation and the inability to work, travel or be physically active.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22561 - Posted: 08.16.2016

Cassie Martin Understanding sea anemones’ exceptional healing abilities may help scientists figure out how to restore hearing. Proteins that the marine invertebrates use to repair damaged cells can also repair mice’s sound-sensing cells, a new study shows. The findings provide insights into the mechanics of hearing and could lead to future treatments for traumatic hearing loss, researchers report in the Aug. 1 Journal of Experimental Biology. “This is a preliminary step, but it’s a very useful step in looking at restoring the structure and function of these damaged cells,” says Lavinia Sheets, a hearing researcher at Harvard Medical School who was not involved in the study. Tentacles of starlet sea anemones (Nematostella vectensis) are covered in tiny hairlike cells that sense vibrations in the water from prey swimming nearby. The cells are similar to sound-sensing cells found in the ears of humans and other mammals. When loud noises damage or kill these hair cells, the result can range from temporary to permanent hearing loss. Anemones’ repair proteins restore their damaged hairlike cells, but landlubbing creatures aren’t as lucky. Glen Watson, a biologist at the University of Louisiana at Lafayette, wondered if anemones’ proteins — which have previously been shown to mend similar cells in blind cave fish — might also work in mammals. © Society for Science & the Public 2000 - 2016.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: Development of the Brain
Link ID: 22550 - Posted: 08.12.2016

Helen Thompson A roughly 27-million-year-old fossilized skull echoes growing evidence that ancient whales could navigate using high-frequency sound. Discovered over a decade ago in a drainage ditch by an amateur fossil hunter on the South Carolina coast, the skull belongs to an early toothed whale. The fossil is so well-preserved that it includes rare inner ear bones similar to those found in modern whales and dolphins. Inspired by the Latin for “echo hunter,” scientists have now named the ancient whale Echovenator sandersi. “It suggests that the earliest toothed whales could hear high-frequency sounds,” which is essential for echolocation, says Morgan Churchill, an anatomist at the New York Institute of Technology in Old Westbury. Churchill and his colleagues describe the specimen online August 4 in Current Biology. Modern whales are divided on the sound spectrum. Toothed whales, such as orcas and porpoises, use high-frequency clicking sounds to sense predators and prey. Filter-feeding baleen whales, on the other hand, use low-frequency sound for long-distance communication. Around 35 million years ago, the two groups split, and E. sandersi emerged soon after. CT scans show that E. sandersi had a few features indicative of ultrasonic hearing in modern whales and dolphins. Most importantly, it had a spiraling inner ear bone with wide curves and a long bony support structure, both of which allow a greater sensitivity to higher-frequency sound. A small nerve canal probably transmitted sound signals to the brain. © Society for Science & the Public 2000 - 2016. All rights reserved.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22523 - Posted: 08.06.2016

By Maggie Koerth-Baker Q: I want to hear what the loudest thing in the world is! — Kara Jo, age 5 No. No, you really don’t. See, there’s this thing about sound that even we grown-ups tend to forget — it’s not some glitter rainbow floating around with no connection to the physical world. Sound is mechanical. A sound is a shove — just a little one, a tap on the tightly stretched membrane of your ear drum. The louder the sound, the heavier the knock. If a sound is loud enough, it can rip a hole in your ear drum. If a sound is loud enough, it can plow into you like a linebacker and knock you flat on your butt. When the shock wave from a bomb levels a house, that’s sound tearing apart bricks and splintering glass. Sound can kill you. Consider this piece of history: On the morning of Aug. 27, 1883, ranchers on a sheep camp outside Alice Springs, Australia, heard a sound like two shots from a rifle. At that very moment, the Indonesian volcanic island of Krakatoa was blowing itself to bits 2,233 miles away. Scientists think this is probably the loudest sound humans have ever accurately measured. Not only are there records of people hearing the sound of Krakatoa thousands of miles away, there is also physical evidence that the sound of the volcano’s explosion traveled all the way around the globe multiple times. Now, nobody heard Krakatoa in England or Toronto. There wasn’t a “boom” audible in St. Petersburg. Instead, what those places recorded were spikes in atmospheric pressure — the very air tensing up and then releasing with a sigh, as the waves of sound from Krakatoa passed through. There are two important lessons about sound in there: First, you don’t have to be able to see the loudest thing in the world in order to hear it. Second, just because you can’t hear a sound doesn’t mean it isn’t there. Sound is powerful and pervasive and it surrounds us all the time, whether we’re aware of it or not.
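To put a number on that "shove": a sound pressure level in decibels corresponds to a physical pressure via p = p_ref * 10^(L/20), where p_ref = 20 micropascals is the standard reference for sound in air. The short sketch below is a minimal illustration of that conversion; the helper name and example levels are assumptions for illustration, not measurements of Krakatoa or any other event.

```python
# Minimal illustration of the decibel-to-pressure relationship for sound in
# air: p = p_ref * 10 ** (level_db / 20), with the standard reference pressure
# p_ref = 20 micropascals (roughly the threshold of human hearing). The example
# levels are generic, not measurements of any specific event.

P_REF_PA = 20e-6  # reference pressure for 0 dB SPL, in pascals

def pressure_pa(level_db: float) -> float:
    """Convert a sound pressure level in dB SPL to pressure in pascals."""
    return P_REF_PA * 10 ** (level_db / 20)

if __name__ == "__main__":
    for level in (0, 60, 120, 160, 194):
        print(f"{level:3d} dB SPL ~ {pressure_pa(level):.3g} Pa")
    # Around 194 dB SPL the pressure reaches roughly one atmosphere (~100 kPa),
    # about the point where a pressure wave in air stops behaving like ordinary
    # sound and becomes a shock wave.
```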

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22453 - Posted: 07.19.2016

Paula Span An estimated one zillion older people have a problem like mine. First: We notice age-related hearing loss. A much-anticipated report on hearing health from the National Academies of Sciences, Engineering and Medicine last month put the prevalence at more than 45 percent of those aged 70 to 74, and more than 80 percent among those over 85. Then: We do little or nothing about it. Fewer than 20 percent of those with hearing loss use hearing aids. I’ve written before about the reasons. High prices ($2,500 and up for a decent hearing aid, and most people need two). Lack of Medicare reimbursement, because the original 1965 law creating Medicare prohibits coverage. Time and hassle. Stigma. Both the National Academies and the influential President’s Council of Advisors on Science and Technology have proposed pragmatic steps to make hearing technology more accessible and affordable. But until there’s progress on those, many of us with mild to moderate hearing loss may consider a relatively inexpensive alternative: personal sound amplification products, or P.S.A.P.s. They offer some promise — and some perils, too. Unlike for a hearing aid, you don’t need an audiologist to obtain a P.S.A.P. You see these gizmos advertised on the back pages of magazines or on sale at drugstore chains. You can buy them online. © 2016 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22449 - Posted: 07.16.2016

Ramin Skibba Is Justin Bieber a musical genius or a talentless hack? What you 'belieb' depends on your cultural experiences. Some people like to listen to the Beatles, while others prefer Gregorian chants. When it comes to music, scientists find that nurture can trump nature. Musical preferences seem to be mainly shaped by a person’s cultural upbringing and experiences rather than biological factors, according to a study published on 13 July in Nature [1]. “Our results show that there is a profound cultural difference” in the way people respond to consonant and dissonant sounds, says Josh McDermott, a cognitive scientist at the Massachusetts Institute of Technology in Cambridge and lead author of the paper. This suggests that other cultures hear the world differently, he adds. The study is one of the first to put an age-old argument to the test. Some scientists believe that the way people respond to music has a biological basis, because pitches that people often like have particular interval ratios. They argue that this would trump any cultural shaping of musical preferences, effectively making them a universal phenomenon. Ethnomusicologists and music composers, by contrast, think that such preferences are more a product of one’s culture. If a person’s upbringing shapes their preferences, then they are not a universal phenomenon. © 2016 Macmillan Publishers Limited

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 22439 - Posted: 07.14.2016

By Michael Price The blind comic book star Daredevil has a highly developed sense of hearing that allows him to “see” his environment with his ears. But you don’t need to be a superhero to pull a similar stunt, according to a new study. Researchers have identified the neural architecture used by the brain to turn subtle sounds into a mind’s-eye map of your surroundings. The study appears to be “very solid work,” says Lore Thaler, a psychologist at Durham University in the United Kingdom who studies echolocation, the ability of bats and other animals to use sound to locate objects. Everyone has an instinctive sense of the world around them—even if they can’t always see it, says Santani Teng, a postdoctoral researcher at the Massachusetts Institute of Technology (MIT) in Cambridge who studies auditory perception in both blind and sighted people. “We all kind of have that intuition,” says Teng over the phone. “For instance, you can tell I’m not in a gymnasium right now. I’m in a smaller space, like an office.” That office belongs to Aude Oliva, principal research scientist for MIT’s Computational Perception & Cognition laboratory. She and Teng, along with two other colleagues, wanted to quantify how well people can use sounds to judge the size of the room around them, and whether that ability could be detected in the brain. © 2016 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22427 - Posted: 07.12.2016

By Brian Platzer It started in 2010 when I smoked pot for the first time since college. It was cheap, gristly weed I’d had in my freezer for nearly six years, but four hours after taking one hit I was still so dizzy I couldn’t stand up without holding on to the furniture. The next day I was still dizzy, and the next, and the next, but it tapered off gradually until about a month later I was mostly fine. Over the following year I got married, started teaching seventh and eighth grade, and began work on a novel. Every week or so the disequilibrium sneaked up on me. The feeling was one of disorientation as much as dizziness, with some cloudy vision, light nausea and the sensation of being overwhelmed by my surroundings. During one eighth-grade English class, when I turned around to write on the blackboard, I stumbled and couldn’t stabilize myself. I fell in front of my students and was too disoriented to stand. My students stared at me slumped on the floor until I mustered enough focus to climb up to a chair and did my best to laugh it off. I was only 29, but my father had had a benign brain tumor around the same age, so I had a brain scan. My brain appeared to be fine. A neurologist recommended I see an ear, nose and throat specialist. A technician flooded my ear canal with water to see if my acoustic nerve reacted properly. The doctor suspected either benign positional vertigo (dizziness caused by a small piece of bonelike calcium stuck in the inner ear) or Ménière’s disease (which leads to dizziness from pressure). Unfortunately, the test showed my inner ear was most likely fine. But just as the marijuana had triggered the dizziness the year before, the test itself catalyzed the dizziness now. In spite of the negative results, doctors still believed I had an inner ear problem. They prescribed exercises to unblock crystals, and salt pills and then prednisone to fight Ménière’s disease. All this took months, and I continued to be dizzy, all day, every day. It felt as though I woke up every morning having already drunk a dozen beers — some days, depending on how active and stressful my day was, it felt like much more. Most days ended with me in tears. © 2016 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 22318 - Posted: 06.14.2016

Meghan Rosen SALT LAKE CITY — In the Indian Ocean off the coast of Sri Lanka, pygmy blue whales are changing their tune — and they might be doing it on purpose. From 2002 to 2012, the frequency of one part of the whales’ calls steadily fell, marine bioacoustician Jennifer Miksis-Olds reported May 25 at a meeting of the Acoustical Society of America. But unexpectedly, another part of the whales’ call stayed the same, she found. “I’ve never seen results like this before,” says marine bioacoustician Leanna Matthews of Syracuse University in New York, who was not involved with the work. Miksis-Olds’ findings add a new twist to current theories about blue whale vocalizations and spark all sorts of questions about what the animals are doing, Matthews said. “It’s a huge mystery.” Over the last 40 to 50 years, the calls of blue whales around the world have been getting deeper. Researchers have reported frequency drops in blue whale populations from the Arctic Ocean to the North Pacific. Some researchers think that blue whales are just getting bigger, said Miksis-Olds, of the University of New Hampshire in Durham. Whaling isn’t as common as it used to be, so whales have been able to grow larger — and larger whales have deeper calls. Another theory blames whales’ changing calls on an increasingly noisy ocean. Whales could be automatically adjusting their calls to be heard better, kind of like a person raising their voice to speak at a party, she said. If the whales were just getting bigger, you’d expect all components of the calls to be deeper, said acoustics researcher Pasquale Bottalico at Michigan State University in East Lansing. But the new data don’t support that, he said. © Society for Science & the Public 2000 - 2016.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Language and Lateralization
Link ID: 22280 - Posted: 06.04.2016

Amy McDermott Giant pandas have better ears than people — and polar bears. Pandas can hear surprisingly high frequencies, conservation biologist Megan Owen of the San Diego Zoo and colleagues report in the April Global Ecology and Conservation. The scientists played a range of tones for five zoo pandas trained to nose a target in response to sound. Training, which took three to six months for each animal, demanded serious focus and patience, says Owen, who called the effort “a lot to ask of a bear.” Both males and females heard into the range of a “silent” ultrasonic dog whistle. Polar bears, the only other bears scientists have tested, are less sensitive to sounds at or above 14 kilohertz. Researchers still don’t know why pandas have ultrasonic hearing. The bears are a vocal bunch, but their chirps and other calls have never been recorded at ultrasonic levels, Owen says. Great hearing may be a holdover from the bears’ ancient past. Citation: M.A. Owen et al. Hearing sensitivity in context: Conservation implications for a highly vocal endangered species. Global Ecology and Conservation. Vol. 6, April 2016, p. 121. doi: 10.1016/j.gecco.2016.02.007. © Society for Science & the Public 2000 - 2016.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22269 - Posted: 06.01.2016

by Helen Thompson In hunting down delicious fish, Flipper may have a secret weapon: snot. Dolphins emit a series of quick, high-frequency sounds — probably by forcing air over tissues in the nasal passage — to find and track potential prey. “It’s kind of like making a raspberry,” says Aaron Thode of the Scripps Institution of Oceanography in San Diego. Thode and colleagues tweaked a human speech modeling technique to reproduce dolphin sounds and discern the intricacies of their unique style of sound production. He presented the results on May 24 in Salt Lake City at the annual meeting of the Acoustical Society of America. Dolphin chirps have two parts: a thump and a ring. Their model worked on the assumption that lumps of tissue bumping together produce the thump, and those tissues pulling apart produce the ring. But to match the high frequencies of live bottlenose dolphins, the researchers had to make the surfaces of those tissues sticky. That suggests that mucus lining the nasal passage tissue is crucial to dolphin sonar. The vocal model also successfully mimicked whistling noises used to communicate with other dolphins and faulty clicks that probably result from inadequate snot. Such techniques could be adapted to study sound production or echolocation in sperm whales and other dolphin relatives. © Society for Science & the Public 2000 - 2016.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22244 - Posted: 05.25.2016

By Linda Zajac For nearly 65 million years, bats and tiger moths have been locked in an aerial arms race: Bats echolocate to detect and capture tiger moths, and tiger moths evade them with flight maneuvers and their own ultrasonic sounds. Scientists have long wondered why certain species emit these high-frequency clicks that sound like rapid squeaks from a creaky floorboard. Does the sound jam bat sonar or does it warn bats that the moths are toxic? To find out, scientists collected two types of tiger moths: red-headed moths and Martin’s lichen moths. They then removed the soundmaking organs from some of the insects. In a grassy field in Arizona they set up infrared video cameras, ultrasonic microphones, and ultraviolet lights, the last of which they used to attract bats. In darkness, they released one tiger moth at a time and recorded the moth-bat interactions. They found that the moths rarely produced ultrasonic clicks fast enough to jam bat sonar. They also discovered that without sound organs, 64% of the red-headed moths and 94% of the Martin’s lichen moths were captured and spit out. Together, these findings reported late last month in PLOS ONE suggest that instead of jamming sonar like some tiger moths, these species act tough, flexing their soundmaking organs to warn predators of their toxin. © 2016 American Association for the Advancement of Science

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22185 - Posted: 05.07.2016

By NATALIE ANGIER Whether to enliven a commute, relax in the evening or drown out the buzz of a neighbor’s recreational drone, Americans listen to music nearly four hours a day. In international surveys, people consistently rank music as one of life’s supreme sources of pleasure and emotional power. We marry to music, graduate to music, mourn to music. Every culture ever studied has been found to make music, and among the oldest artistic objects known are slender flutes carved from mammoth bone some 43,000 years ago — 24,000 years before the cave paintings of Lascaux. Given the antiquity, universality and deep popularity of music, many researchers had long assumed that the human brain must be equipped with some sort of music room, a distinctive piece of cortical architecture dedicated to detecting and interpreting the dulcet signals of song. Yet for years, scientists failed to find any clear evidence of a music-specific domain through conventional brain-scanning technology, and the quest to understand the neural basis of a quintessential human passion foundered. Now researchers at the Massachusetts Institute of Technology have devised a radical new approach to brain imaging that reveals what past studies had missed. By mathematically analyzing scans of the auditory cortex and grouping clusters of brain cells with similar activation patterns, the scientists have identified neural pathways that react almost exclusively to the sound of music — any music. It may be Bach, bluegrass, hip-hop, big band, sitar or Julie Andrews. A listener may relish the sampled genre or revile it. No matter. When a musical passage is played, a distinct set of neurons tucked inside a furrow of a listener’s auditory cortex will fire in response. Other sounds, by contrast — a dog barking, a car skidding, a toilet flushing — leave the musical circuits unmoved. Nancy Kanwisher and Josh H. McDermott, professors of neuroscience at M.I.T., and their postdoctoral colleague Sam Norman-Haignere reported their results in the journal Neuron. The findings offer researchers a new tool for exploring the contours of human musicality. © 2016 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 21873 - Posted: 02.09.2016

By Elizabeth Pennisi PACIFIC GROVE, CALIFORNIA—Bats have an uncanny ability to track and eat insects on the fly with incredible accuracy. But some moths make these agile mammals miss their mark. Tiger moths, for example, emit ultrasonic clicks that jam bat sonar. Now, scientists have shown that hawk moths and other species have also evolved this behavior. The nocturnal insects—which are toxic to bats—issue an ultrasonic “warning” whenever a bat is near. After a few nibbles, the bat learns to avoid the noxious species altogether. The researchers shot high-speed videos of bat chases in eight countries over 4 years. Their studies found that moths with an intact sound-producing apparatus—typically located at the tip of the genitals—were spared, whereas those silenced by the researchers were readily caught. As the video shows, when the moths hear the bat’s clicks intensifying as it homes in, they emit their own signal, causing the bat to veer off at the last second. It could be that, like the tiger moths, the hawk moths are jamming the bat’s signal. But, because most moth signals are not the right type to interfere with the bat’s, the researchers say it’s more likely that the bat recognizes the signal and avoids the target on its own. Presenting here last week at a meeting of the American Society of Naturalists, the researchers say this signaling ability has evolved three times in hawk moths and about a dozen more times overall among other moths. © 2016 American Association for the Advancement of Science

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21810 - Posted: 01.23.2016