Chapter 9. Hearing, Vestibular Perception, Taste, and Smell
By GINA KOLATA As concern rises over the effect of continuous use of headphones and earbuds on hearing, a new paper by federal researchers has found something unexpected. The prevalence of hearing loss in Americans of working age has declined. The paper, published on Thursday in the journal JAMA Otolaryngology — Head & Neck Surgery, used data from the National Health and Nutrition Examination Survey, which periodically administers health tests to a representative sample of the population. The investigators, led by Howard J. Hoffman, the director of the epidemiology and statistics program at the National Institute on Deafness and Other Communication Disorders, compared data collected between 1999 and 2004 with data from 2011 and 2012, the most recent available. Hearing loss in this study meant that a person could not hear, in at least one ear, a sound about as loud as rustling leaves. The researchers reported that while 15.9 percent of the population studied in the earlier period had problems hearing, just 14.1 percent of the more recent group had hearing loss. The good news is part of a continuing trend — Americans’ hearing has gotten steadily better since 1959. Most surprising to Mr. Hoffman, a statistician, was that even though the total population of 20- to 69-year-olds grew by 20 million over the time period studied — and the greatest growth was in the oldest people, a group most likely to have hearing problems — the total number of people with hearing loss fell, from 28 million to 27.7 million. Hearing experts who were not associated with the study said they were utterly convinced by the results. “It’s a fantastic paper,” said Brian Fligor, an audiologist with Lantos Technologies of Wakefield, Mass., which develops custom earpieces to protect ears from noise. “I totally believe them.” © 2016 The New York Times Company
Link ID: 22996 - Posted: 12.17.2016
Betsy Mason With virtual reality finally hitting the consumer market this year, VR headsets are bound to make their way onto a lot of holiday shopping lists. But new research suggests these gifts could also give some of their recipients motion sickness — especially if they’re women. In a test of people playing one virtual reality game using an Oculus Rift headset, more than half felt sick within 15 minutes, a team of scientists at the University of Minnesota in Minneapolis reports online December 3 in Experimental Brain Research. Among women, nearly four out of five felt sick. So-called VR sickness, also known as simulator sickness or cybersickness, has been recognized since the 1980s, when the U.S. military noticed that flight simulators were nauseating its pilots. In recent years, anecdotal reports began trickling in about the new generation of head-mounted virtual reality displays making people sick. Now, with VR making its way into people’s homes, there’s a steady stream of claims of VR sickness. “It's a high rate of people that you put in [VR headsets] that are going to experience some level of symptoms,” says Eric Muth, an experimental psychologist at Clemson University in South Carolina with expertise in motion sickness. “It’s going to mute the ‘Wheee!’ factor.” Oculus, which Facebook bought for $2 billion in 2014, released its Rift headset in March. The company declined to comment on the new research but says it has made progress in making the virtual reality experience comfortable for most people, and that developers are getting better at creating VR content. All approved games and apps get a comfort rating based on things like the type of movements involved, and Oculus recommends starting slow and taking breaks. But still some users report getting sick. © Society for Science & the Public 2000 - 2016.
Link ID: 22962 - Posted: 12.07.2016
By CATHERINE SAINT LOUIS These days, even 3-year-olds wear headphones, and as the holidays approach, retailers are well stocked with brands that claim to be “safe for young ears” or to deliver “100 percent safe listening.” The devices limit the volume at which sound can be played; parents rely on them to prevent children from blasting, say, Rihanna at hazardous levels that could lead to hearing loss. But a new analysis by The Wirecutter, a product recommendations website owned by The New York Times, has found that half of 30 sets of children’s headphones tested did not restrict volume to the promised limit. The worst headphones produced sound so loud that it could be hazardous to ears in minutes. “These are terribly important findings,” said Cory Portnuff, a pediatric audiologist at the University of Colorado Hospital, who was not involved in the analysis. “Manufacturers are making claims that aren’t accurate.” The new analysis should be a wake-up call to parents who thought volume-limiting technology offered adequate protection, said Dr. Blake Papsin, the chief otolaryngologist at the Hospital for Sick Children in Toronto. “Headphone manufacturers aren’t interested in the health of your child’s ears,” he said. “They are interested in selling products, and some of them are not good for you.” Half of 8- to 12-year-olds listen to music daily, and nearly two-thirds of teenagers do, according to a 2015 report with more than 2,600 participants. Safe listening is a function of both volume and duration: The louder a sound, the less time you should listen to it. It’s not a linear relationship. Eighty decibels sounds twice as loud as 70 decibels, and 90 decibels four times as loud. Exposure to 100 decibels, about the volume of noise caused by a power lawn mower, is safe for just 15 minutes; noise at 108 decibels, however, is safe for less than three minutes. © 2016 The New York Times Company
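The volume-versus-duration tradeoff described above follows a standard exchange-rate rule; a minimal sketch, assuming the NIOSH-style parameters of an 85 dB baseline safe for 8 hours with the safe time halving for every 3 dB increase (these baseline numbers are assumptions, not figures from the article, though they reproduce its examples):

```python
# Safe listening time under an exchange-rate rule (assumed parameters:
# 85 dB is considered safe for 8 hours, and each 3 dB increase halves
# the safe duration).

def safe_exposure_minutes(level_db, baseline_db=85.0,
                          baseline_minutes=8 * 60, exchange_db=3.0):
    """Recommended maximum daily exposure at `level_db`, in minutes."""
    return baseline_minutes / 2 ** ((level_db - baseline_db) / exchange_db)

print(safe_exposure_minutes(100))            # 15.0 (power lawn mower)
print(round(safe_exposure_minutes(108), 1))  # 2.4, i.e. under three minutes
```

The exponential halving is why the relationship is "not linear": each small step up in decibels cuts the safe listening time by a large factor.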
Link ID: 22953 - Posted: 12.06.2016
Emily Conover A bird in laser goggles has helped scientists discover a new phenomenon in the physics of flight. Swirling vortices appear in the flow of air that follows a bird’s wingbeat. But for slowly flying birds, these vortices were unexpectedly short-lived, researchers from Stanford University report December 6 in Bioinspiration and Biomimetics. The results could help scientists better understand how animals fly, and could be important for designing flying robots (SN: 2/7/15, p. 18). To study the complex air currents produced by birds’ flapping wings, the researchers trained a Pacific parrotlet, a small species of parrot, to fly through laser light — with the appropriate eye protection, of course. Study coauthor Eric Gutierrez, who recently graduated from Stanford, built tiny, 3-D‒printed laser goggles for the bird, named Obi. Gutierrez and colleagues tracked the air currents left in Obi’s wake by spraying a fine liquid mist in the air, and illuminating it with a laser spread out into a two-dimensional sheet. High-speed cameras recorded the action at 1,000 frames per second. The vortex produced by the bird “explosively breaks up,” says mechanical engineer David Lentink, a coauthor of the study. “The flow becomes very complex, much more turbulent.” Comparing three standard methods for calculating the lift produced by flapping wings showed that predictions didn’t match reality, thanks to the unexpected vortex breakup. © Society for Science & the Public 2000 - 2016.
Link ID: 22952 - Posted: 12.06.2016
Barbara J. King Birdsong is music to human ears. It has inspired famous composers. For the rest of us, it may uplift the spirit and improve attention or simply be a source of delight, fun and learning. But have you ever wondered what birds themselves hear when they sing? After all, we know that other animals' perceptions don't always match ours. Anyone who lives with a dog has probably experienced their incredibly acute hearing and smell. Psychologists Robert J. Dooling and Nora H. Prior think they've found an answer to that question — for at least some birds. In an article published online last month in the journal Animal Behaviour, they conclude that "there is an acoustic richness in bird vocalizations that is available to birds but likely out of reach for human listeners." Dooling and Prior explain that most scientific investigations of birdsong focus on things like pitch, tempo, complexity, structural organization and the presence of stereotypy. They instead focused on what's called temporal fine structure and its perception by zebra finches. Temporal fine structure, they write, "is generally defined as rapid variations in amplitude within the more slowly varying envelope of sound." Struggling to fully grasp that definition, I contacted Robert Dooling by email. In his response, he suggested that I think of temporal fine structure as "roughly the difference between voices when they are the same pitch and loudness." Temporal fine structure is akin, then, to timbre, sometimes defined as "tone color" or, in Dooling's words, the feature that's "left between two complex sounds when the pitch and level are equalized." © 2016 npr
By Torah Kachur A dog's nose is an incredible scent detector. This ability has been used to train bomb-sniffing dogs, narcotics and contraband sniffers as well as tracking hounds. But even the best electronic scent-detection devices — which use the dog's nose as their gold standard — have never been able to quite live up to their canine competition. But new research — which took a plastic dog nose and strapped it to a bomb-sniffing device — might change that. Dogs have almost 300 million smell receptors in their noses, compared to the meagre six million we humans have: their sense of smell is more than 40 times better than ours. But those smell receptors are just part of the puzzle. Matthew Staymates, lead author on a new paper published Thursday, figured that the canine sniffing skill also has something to do with the anatomy of a dog's nose. A former roommate of his had done his PhD in dog nose anatomy and actually had a computer model of a dog's nose and entire head. So Staymates used a 3D printer, printed out a dog's nose, and attached it to an electronic detector. "Sure enough, a week or two later, I had a fully functioning, anatomically correct dog's nose that sniffs like a real dog." From that, he worked with something called a schlieren imager to watch air go in and out of a nose when the dog is snuffling around the ground. ©2016 CBC/Radio-Canada.
By Virginia Morell If you’ve ever watched ants, you’ve probably noticed their tendency to “kiss,” quickly pressing their mouths together in face-to-face encounters. That’s how they feed each other and their larvae. Now, scientists report that the insects are sharing much more than food. They are also communicating—talking via chemical cocktails designed to shape each other and the colonies they live in. The finding suggests that saliva exchange could play yet-undiscovered roles in many other animals, from birds to humans, says Adria LeBoeuf, an evolutionary biologist at the University of Lausanne in Switzerland, and the study’s lead author. “We’ve paid little attention to what besides direct nutrition is being transmitted” in ants or other species, adds Diana Wheeler, an evolutionary biologist at the University of Arizona in Tucson, who was not involved with the work. Social insects—like ants, bees, and wasps—have long been known to pass food to one another through mouth-to-mouth exchange, a behavior known as trophallaxis. They store liquid food in “social stomachs,” or crops, from which they can regurgitate it later. It’s how nutrients are passed from foraging ants to nurse ants, and from nurses to the larvae in a colony. Other research has suggested that ants also use trophallaxis to spread the colony’s odor, helping them identify their own nest mates. © 2016 American Association for the Advancement of Science
Keyword: Chemical Senses (Smell & Taste)
Link ID: 22928 - Posted: 11.29.2016
By Ben Andrew Henry As a graduate student in the field of olfactory neuroscience, conducting what his former mentor describes as ambitiously clever research, Jason Castro felt something was missing. “I wanted to use science to make a connection with people,” he says, not just to churn out results. In 2012, the 34-year-old Castro accepted a faculty position at Bates College, a small liberal arts school in Maine, in order to “do the science equivalent of running a mom-and-pop—a small operation, working closely with students, and staying close to the data and the experiments myself,” he says. Students who passed through his lab or his seminars recall Castro as a dedicated mentor. “He spent hours with me just teaching me how to code,” recalled Torben Noto, a former student who went on to earn a PhD in neuroscience. After he arrived at Bates, Castro, along with two computational scientists, enlisted big-data methodologies to search for the olfactory equivalent of primary colors: essential building blocks of the odors we perceive. Their results, based on a classic set of data in which thousands of participants described various odors, identify 10 basic odor categories.1 Castro launched another project a few months later, when a paper published in Science reported that humans could discriminate between at least a trillion different odors. A friend from grad school, Rick Gerkin, smelled something fishy about the findings and gave Castro a call. “We became obsessed with the topic,” says Gerkin, now at Arizona State University. The researchers spent almost two years pulling apart the statistical methods of the study, finding that little tweaks to parameters such as the number of test subjects created large swings in the final estimate—a sign that the results were not robust.2 This August, the original study’s authors published a correction in Science. © 1986-2016 The Scientist
Keyword: Chemical Senses (Smell & Taste)
Link ID: 22922 - Posted: 11.29.2016
By Yasemin Saplakoglu Even if you don’t have rhythm, your pupils do. In a new study, neuroscientists played drumming patterns from Western music, including beats typical in pop and rock, while asking volunteers to focus on computer screens for an unrelated fast-paced task that involved pressing the space bar as quickly as possible in response to a signal on the screen. Unbeknownst to the participants, the music omitted strong and weak beats at random times. Eye scanners tracked the dilations of the subjects’ pupils as the music played. Their pupils enlarged when the rhythms dropped certain beats, even though the participants weren’t paying attention to the music. The biggest dilations matched the omissions of the beats in the most prominent locations in the music, usually the important first beat in a repeated set of notes. The results suggest that we may have an automatic sense of “hierarchical meter”—a pattern of strong and weak beats—that governs our expectations of music, the researchers write in the February 2017 issue of Brain and Cognition. Perhaps, the authors say, our eyes reveal clues into the importance that music and rhythm play in our lives. © 2016 American Association for the Advancement of Science
By Andy Coghlan It may sound like a healthy switch, but sometimes people who drink diet soft drinks put on more weight and develop chronic disorders like diabetes. This has puzzled nutritionists, but experiments in mice now suggest that in some cases, this could partly be down to the artificial sweetener aspartame. Artificial sweeteners that contain no calories are synthetic alternatives to sugar that can taste up to 20,000 times sweeter. They are often used in products like low or zero-calorie drinks and sugar-free desserts, and are sometimes recommended for people who have type 2 diabetes. But mouse experiments now suggest that when aspartame breaks down in the gut, it may disrupt processes that are vital for neutralising harmful toxins from the bacteria that live there. By interfering with a crucial enzyme, these toxins seem to build up, irritating the gut lining and causing the kinds of low-level inflammation that can ultimately cause chronic diseases. “Our results are providing a mechanism for why aspartame may not always work to keep people thin, or even cause problems like obesity, heart disease, diabetes and metabolic syndrome,” says Richard Hodin at Massachusetts General Hospital in Boston. © Copyright Reed Business Information Ltd.
Ramin Skibba Bats sing just as birds and humans do. But how they learn their melodies is a mystery — one that scientists will try to solve by sequencing the genomes of more than 1,000 bat species. The project, called Bat 1K, was announced on 14 November at the annual meeting of the Society for Neuroscience in San Diego, California. Its organizers also hope to learn more about the flying mammals’ ability to navigate in the dark through echolocation, their strong immune systems that can shrug off Ebola and their relatively long lifespans. “The genomes of all these other species, like birds and mice, are well-understood,” says Sonja Vernes, a neurogeneticist at the Max Planck Institute for Psycholinguistics in Nijmegen, Netherlands, and co-director of the project. “But we don’t know anything about bat genes yet.” Some bats show babbling behaviour, including barks, chatter, screeches, whistles and trills, says Mirjam Knörnschild, a behavioural ecologist at Free University Berlin, Germany. Young bats learn the songs and sounds from older male tutors. They use these sounds during courtship and mating, when they retrieve food and as they defend their territory against rivals. Scientists have studied the songs of only about 50 bat species so far, Knörnschild says, and they know much less about bat communication than about birds’. Four species of bats have so far been found to learn songs from each other, their fathers and other adult males, just as a child gradually learns how to speak from its parents. © 2016 Macmillan Publishers Limited
By ARNAUD COLINART, AMAURY LA BURTHE, PETER MIDDLETON and JAMES SPINNEY “What is the world of sound?” So begins a diary entry from April 1984, recorded on audiocassette, about the nature of acoustic experience. The voice on the tape is that of the writer and theologian John Hull, who at the time of the recording had been totally blind for almost two years. After losing his sight in his mid-40s, Dr. Hull, a newlywed with a young family, had decided that blindness would destroy him if he didn’t learn to understand it. For three years he recorded his experiences of sight loss, documenting “a world beyond sight.” We first met Dr. Hull in 2011, having read his acclaimed 1991 book “Touching The Rock: An Experience of Blindness,” which was transcribed from his audio diaries. We began collaborating with him on a series of films using his original recordings. These included an Emmy-winning Op-Doc in 2014 and culminated in the feature-length documentary “Notes on Blindness.” But we were also interested in how interactive forms of storytelling might further explore Dr. Hull’s vast and detailed account — in particular how new mediums like virtual reality could illuminate his investigations into auditory experience. The diaries describe his evolving appreciation of “the breadth and depth of three-dimensional world that is revealed by sound,” the awakening of an acoustic perception of space. The sound of falling rain, he said, “brings out the contours of what is around you”; wind brings leaves and trees to life; thunder “puts a roof over your head.” This interactive experience is narrated by Dr. Hull, using extracts from his diary recordings to consider the nature of acoustic space. Binaural techniques map the myriad details of everyday life (in this case, the noises that surround Dr. Hull in a park) within a 3-D sound environment, a “panorama of music and information,” rich in color and texture. The real-time animation visualizes this multilayered soundscape in which, Dr. Hull says, “every sound is a point of activity.” © 2016 The New York Times Company
Amir Kheradmand When we spin—on an amusement park ride or the dance floor—we often become disoriented, even dizzy. So how do professional athletes, particularly figure skaters who spin at incredible speeds, avoid losing their balance? The short answer is training, but to really grasp why figure skaters can twirl without getting dizzy requires an understanding of the vestibular system, the apparatus in our inner ear that helps to keep us upright. This system contains special sensory nerve cells that can detect the speed and direction at which our head moves. These sensors are tightly coupled with our eye movements and with our perception of our body’s position and motion through space. For instance, if we rotate our head to the right while our eyes remain focused on an object straight ahead, our eyes naturally move to the left at the same speed. This involuntary response allows us to stay focused on a stationary object. Spinning is more complicated. When we move our head during a spin, our eyes start to move in the opposite direction but reach their limit before our head completes a full 360-degree turn. So our eyes flick back to a new starting position midspin, and the motion repeats as we rotate. When our head rotation triggers this automatic, repetitive eye movement, called nystagmus, we get dizzy. © 2016 Scientific American
Link ID: 22878 - Posted: 11.17.2016
Laura Sanders SAN DIEGO — Mice raised in cages bombarded with glowing lights and sounds have profound brain abnormalities and behavioral trouble. Hours of daily stimulation led to behaviors reminiscent of attention-deficit/hyperactivity disorder, scientists reported November 14 at the annual meeting of the Society for Neuroscience. Certain kinds of sensory stimulation, such as sights and sounds, are known to help the brain develop correctly. But scientists from Seattle Children’s Research Institute wondered whether too much stimulation or stimulation of the wrong sort could have negative effects on the growing brain. To mimic extreme screen exposure, mice were blasted with flashing lights and TV audio for six hours a day. The cacophony began when the mice were 10 days old and lasted for six weeks. After the end of the ordeal, scientists examined the mice’s brains. “We found dramatic changes everywhere in the brain,” said study coauthor Jan-Marino Ramirez. Mice that had been stimulated had fewer newborn nerve cells in the hippocampus, a brain structure important for learning and memory, than unstimulated mice, Ramirez said. The stimulation also made certain nerve cells more active in general. Stimulated mice also displayed behaviors similar to some associated with ADHD in children. These mice were noticeably more active and had trouble remembering whether they had encountered an object. The mice also seemed more inclined to take risks, venturing into open areas that mice normally shy away from, for instance. |© Society for Science & the Public 2000 - 2016.
Laura Sanders SAN DIEGO — A nerve-zapping headset caused people to shed fat in a small preliminary study. Six people who had received the stimulation lost on average about 8 percent of the fat on their trunks in four months, scientists reported November 12 at the annual meeting of the Society for Neuroscience. The headset stimulated the vestibular nerve, which runs just behind the ears. That nerve sends signals to the hypothalamus, a brain structure thought to control the body’s fat storage. By stimulating the nerve with an electrical current, the technique shifts the body away from storing fat toward burning it, scientists propose. Six overweight and obese people received the treatment, consisting of up to four one-hour-long sessions of stimulation a week. Because it activates the vestibular system, the stimulation evoked the sensation of gently rocking on a boat or floating in a pool, said study coauthor Jason McKeown of the University of California, San Diego. After four months, body scans measured the trunk body fat for the six people receiving the treatment and three people who received sham stimulation. All six in the treatment group lost some trunk fat, despite not having changed their activity or diet. In contrast, those in the sham group gained some fat. Researchers suspect that metabolic changes are behind the difference. “The results were a lot better than we thought they’d be,” McKeown said. |© Society for Science & the Public 2000 - 2016.
Link ID: 22869 - Posted: 11.15.2016
By Rachel Feltman and Sarah Kaplan Dear Science, I just got a new iPhone and can't decide what kind of headphones I should be using. I read somewhere that ear buds are worse for you than headphones that fit over your ear. Is that true? I don't want to damage my hearing by using the wrong thing. Here's what science has to say: At the end of the day, nothing really matters but volume. No pair of headphones is inherently “good” or “bad” for your hearing. But picking the right headphones can help you listen to your music more responsibly. The louder a sound is, the more quickly it can cause injury to your ears. If you're not careful, a powerful sound wave can actually tear right through your delicate eardrum, but that's unlikely to happen while blasting music. Most hearing loss is the result of nerve damage, and your smartphone is more than capable of wrecking your ears that way. You can be exposed to 85 decibels — the noise of busy city traffic — pretty much all day without causing nerve damage, but things quickly become dangerous once you get louder than that. At 115 decibels, which is about the noise level produced at a rock concert or by a chain saw, nerve damage can happen in less than a minute. You might not immediately notice significant hearing loss as the result of that nerve damage, but it will add up over time. Some smartphones can crank music to 120 decibels. If you listened to an entire album at that volume, you might have noticeable hearing loss by the time you took off your headphones. According to the World Health Organization, 1.1 billion teens and young adults globally are at risk of developing hearing loss because of these “personal audio devices.” You already know the solution, folks: Turn that music down. © 1996-2016 The Washington Post
Link ID: 22867 - Posted: 11.15.2016
by Helen Thompson Narwhals use highly targeted beams of sound to scan their environment for threats and food. In fact, the so-called unicorns of the sea (for their iconic head tusks) may produce the most refined sonar of any living animal. A team of researchers set up 16 underwater microphones to eavesdrop on narwhal click vocalizations at 11 ice pack sites in Greenland’s Baffin Bay in 2013. The recordings show that narwhal clicks are extremely intense and directional — meaning they can widen and narrow the beam of sound to find prey over long and short distances. It’s the most directional sonar signal measured in a living species, the researchers report November 9 in PLOS ONE. The sound beams are also asymmetrically narrow on top. That minimizes clutter from echoes bouncing off the sea surface or ice pack. Finally, narwhals scan vertically as they dive, which could help them find patches of open water where they can surface and breathe amid sea ice cover. All this means that narwhals employ pretty sophisticated sonar. The audio data could help researchers tell the difference between narwhal vocalizations and those of neighboring beluga whales. It also provides a baseline for assessing the potential impact of noise pollution from increases in shipping traffic made possible by sea ice loss. |© Society for Science & the Public 2000 - 2016.
Link ID: 22856 - Posted: 11.12.2016
By Bob Holmes It’s not something to be sniffed at. Computers have cracked a problem that has stumped chemists for centuries: predicting a molecule’s odour from its structure. The feat may allow perfumers and flavour specialists to create new products with much less trial and error. Unlike vision and hearing, the result of which can be predicted by analysing wavelengths of light or sound, our sense of smell has long remained inscrutable. Olfactory chemists have never been able to predict how a given molecule will smell, except in a few special cases, because so many aspects of a molecule’s structure could be important in determining its odour. Andreas Keller and Leslie Vosshall at Rockefeller University in New York City decided to crowdsource the power of machine learning to address the problem. First, they had 49 volunteers rate the odour of 476 chemicals according to how intense and how pleasant the smell was, and how well it matched 19 other descriptors, such as garlic, spice or fruit. Then they released the data for 407 of the chemicals, along with 4884 different variables measuring chemical structure, and invited anyone to develop machine-learning algorithms that would make sense of the patterns. They used the remaining 69 chemicals to evaluate the accuracy of the algorithms of the 22 teams that took up the challenge. © Copyright Reed Business Information Ltd.
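The challenge setup described above (hundreds of structural descriptors per molecule, 407 training molecules, 69 held out for scoring) is a standard supervised-learning problem. A minimal sketch with ridge regression on synthetic placeholder data; the 407/69 split comes from the article, but the descriptor count is cut from 4884 to 200 to keep the toy example fast, the "ratings" are simulated, and the competing teams' actual models were certainly more elaborate:

```python
import numpy as np

# Sketch of the challenge's train/test setup with synthetic data: predict one
# odour rating (say, pleasantness) from molecular-structure descriptors.
rng = np.random.default_rng(42)
n_train, n_test, n_desc = 407, 69, 200

X = rng.normal(size=(n_train + n_test, n_desc))        # descriptor matrix
# Simulate a sparse ground truth: only ~5% of descriptors matter.
w_true = np.where(rng.random(n_desc) < 0.05, rng.normal(size=n_desc), 0.0)
y = X @ w_true + 0.1 * rng.normal(size=n_train + n_test)  # simulated ratings

X_tr, y_tr = X[:n_train], y[:n_train]       # released training molecules
X_te, y_te = X[n_train:], y[n_train:]       # held-out evaluation molecules

# Ridge regression: w = (X'X + lam*I)^(-1) X'y
lam = 10.0
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_desc), X_tr.T @ y_tr)

corr = np.corrcoef(X_te @ w, y_te)[0, 1]
print(f"held-out correlation: {corr:.2f}")
```

With real descriptor data in place of the random matrix, this same split-fit-evaluate loop is how held-out molecules can be used to score competing algorithms, as the 22 teams' entries were.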
Keyword: Chemical Senses (Smell & Taste)
Link ID: 22806 - Posted: 10.29.2016
By Jessica Boddy You’d probably never notice a jumping spider across your living room, but it would surely notice you. The arachnids are known for their brilliant eyesight, and a new study shows they have even greater sensory prowess than we thought: Jumping spiders can hear sounds even though they don’t have ears—or even eardrums. To find this out, researchers implanted tiny electrodes in a region of spiders’ brains that would show whether sound was being processed. Then they placed the spiders on a specially designed box to eliminate any vibrations from below—most spiders sense their surroundings through vibrations—and scared the heck out of them with a speaker-produced buzz of one of their predators, the mud dauber wasp. An out-of-earshot, high-frequency buzz and a silent control elicited no response from the spiders. But the 80-hertz wasp buzz made them freeze and look around, startled, just as they would do in the wild. What’s more, data from the electrodes showed a spike in brain activity with each buzz, revealing that spiders actually hear sounds, from a swooping mud dauber wasp to you crunching potato chips on your couch. The researchers, who publish their work today in Current Biology, say further study is needed to see exactly how spiders receive sounds without eardrums, but they believe sensitive hairs on their legs play a part. © 2016 American Association for the Advancement of Science.
Link ID: 22755 - Posted: 10.15.2016
By Virginia Morell Human-produced noise in the ocean is likely harming marine mammals in numerous unknown ways, according to a comprehensive new report from the National Academies of Sciences, Engineering, and Medicine. That’s because there are insufficient data to determine how the ill effects of noise created by ships, sonar signals, and other activities interact with other threats, including pollution, climate change, and the loss of prey due to fishing. The report, which was sponsored by several government agencies and released on 7 October, provides a new framework for researchers to begin exploring these cumulative impacts. “There’s a growing recognition that interactions between stressors on marine mammals can’t right now be accurately assessed," said Peter Tyack, a marine mammal biologist at the University of St Andrews in the United Kingdom, in a webinar on the report. Tyack also chaired the committee that prepared the study, "Approaches to Understanding the Cumulative Effects of Stressors on Marine Mammals." Killer whales, for instance, are known to swim away from areas where they have encountered sonar signals of about 142 decibels, a sound level lower than currently allowed by the U.S. Navy for its ships, Tyack said, referring to a 2014 study in The Journal of the Acoustical Society of America that determined the mammals’ likely response. But scientists don’t yet know how other marine mammals might respond. They also don’t know whether or how other factors, such as encountering an oil spill or colliding with a ship, would—or would not—compound the cetaceans’ response to these sounds; or how or whether such combined stressors matter to the animals’ long-term health and overall population. © 2016 American Association for the Advancement of Science.