Chapter 9. Hearing, Vestibular Perception, Taste, and Smell
by Frank Swain WHEN it comes to personal electronics, it's difficult to imagine iPhones and hearing aids in the same sentence. I use both and know that hearing aids have a well-deserved reputation as deeply uncool lumps of beige plastic worn mainly by the elderly. Apple, on the other hand, is the epitome of cool consumer electronics. But the two are getting a lot closer. The first "Made for iPhone" hearing aids have arrived, allowing users to stream audio and data between smartphones and the device. It means hearing aids might soon be desirable, even to those who don't need them. A Bluetooth wireless protocol developed by Apple last year lets the prostheses connect directly to Apple devices, streaming audio and data while using a fraction of the power of conventional Bluetooth. LiNX, made by ReSound, and Halo hearing aids, made by Starkey – both international firms – use the iPhone as a platform to offer users new features and added control over their hearing aids. "The main advantage of Bluetooth is that the devices are talking to each other, it's not just one way," says David Nygren, UK general manager of ReSound. This is useful as hearing aids have long suffered from a restricted user interface – there's not much room for buttons on a device the size of a kidney bean. This is a major challenge for hearing-aid users, because different environments require different audio settings. Some devices come with preset programmes, while others adjust automatically to what their programming suggests is the best configuration. This is difficult to get right, and often devices calibrated in the audiologist's clinic fall short in the real world. © Copyright Reed Business Information Ltd.
Link ID: 19757 - Posted: 06.23.2014
By Ian Randall The human tongue may have a sixth sense—and no, it doesn’t have anything to do with seeing ghosts. Researchers have found that in addition to recognizing sweet, sour, salty, savory, and bitter tastes, our tongues can also pick up on carbohydrates, the nutrients that break down into sugar and form our main source of energy. Past studies have shown that some rodents can distinguish between sugars of different energy densities, while others can still tell carbohydrate and protein solutions apart even when their ability to taste sweetness is lost. A similar ability has been proposed in humans, with research showing that merely having carbohydrates in your mouth can improve physical performance. How this works, however, has been unclear. In the new study, to be published in Appetite, the researchers asked participants to squeeze a sensor held between their right index finger and thumb when shown a visual cue. At the same time, the participants’ tongues were rinsed with one of three different fluids. The first two were artificially sweetened—to identical tastes—but with only one containing carbohydrate; the third, a control, was neither sweet nor carb-loaded. When the carbohydrate solution was used, the researchers observed a 30% increase in activity for the brain areas that control movement and vision. This reaction, they propose, is caused by our mouths reporting that additional energy in the form of carbs is coming. The finding may explain both why diet products are often viewed as not being as satisfying as their real counterparts and why carbohydrate-loaded drinks seem to immediately perk up athletes—even before their bodies can convert the carbs to energy. Learning more about how this “carbohydrate sense” works could lead to the development of artificially sweetened foods, the researchers propose, “as hedonistically rewarding as the real thing.” © 2014 American Association for the Advancement of Science
Keyword: Chemical Senses (Smell & Taste)
Link ID: 19700 - Posted: 06.06.2014
by Catherine de Lange Could your ideal diet be written in your genes? That's the promise of nutrigenomics, which looks for genetic differences in the way people's bodies process food so that diets can be tailored accordingly. The field had a rocky start after companies overhyped its potential, but with advances in genetic sequencing, and a slew of new studies, the concept is in for a reboot. Last week, Nicola Pirastu at the University of Trieste, Italy, and his colleagues told the European Society of Human Genetics meeting in Milan that diets tailored to genes that are related to metabolism can help people lose weight. The team used the results of a genetic test to design specific diets for 100 obese people that also provided them with 600 fewer calories than usual. A control group was placed on a 600-calorie deficit, untailored diet. After two years, both groups had lost weight, but those in the nutrigenetic group lost 33 per cent more. They also took only a year to lose as much weight as the group on the untailored diet lost in two years. If this is shown to work in bigger, randomised trials, it would be fantastic, says Ana Valdes, a genetic epidemiologist at the University of Nottingham, UK. Some preliminary information will soon be available from Europe's Food4Me project. It is a study of 1200 people across several countries who were given either standard nutrition advice, or a similarly genetically tailored diet. "It's testing whether we can get bigger changes in diet using a personalised approach, and part of that is using genetic information," says team member John Mathers, director of the Human Nutrition Research Centre at Newcastle University, UK. © Copyright Reed Business Information Ltd.
By Christie Nicholson Conventional wisdom once had it that each brain region is responsible for a specific task. And so we have the motor cortex for handling movements, and the visual cortex, for processing sight. And scientists thought that such regions remained fixed for those tasks beyond the age of three. But within the past decade researchers have realized that some brain regions can pinch hit for other regions, for example, after a damaging stroke. And now new research finds that the visual cortex is constantly doing double duty—it has a role in processing not just sight, but sound. When we hear [siren sound], we see a siren. In the study, scientists scanned the brains of blindfolded participants as the subjects listened to three sounds: [audio of birds, audio of traffic, audio of a talking crowd.] And the scientists could tell what specific sounds the subjects were hearing just by analyzing the brain activity in the visual cortex. [Petra Vetter, Fraser W. Smith and Lars Muckli, Decoding Sound and Imagery Content in Early Visual Cortex, in Current Biology] The next step is to determine why the visual cortex is horning in on the audio action. The researchers think the additional role conferred an evolutionary advantage: having a visual system primed by sound to see the source of that sound could have given humans an extra step in the race for survival. © 2014 Scientific American
Tastes are a privilege. The oral sensations not only satisfy foodies but also, on a primal level, protect animals from toxic substances. Yet cetaceans—whales and dolphins—may lack this crucial ability, according to a new study. Mutations in a cetacean ancestor obliterated their basic machinery for four of the five primary tastes, making them the first group of mammals to have lost the majority of this sensory system. The five primary tastes are sweet, bitter, umami (savory), sour, and salty. These flavors are recognized by taste receptors—proteins that coat neurons embedded in the tongue. For the most part, taste receptor genes are present across all vertebrates. Except, it seems, in cetaceans. Researchers uncovered a massive loss of taste receptors in these animals by screening the genomes of 15 species. The investigation spanned the two major lineages of cetaceans: Krill-loving baleen whales—such as bowheads and minkes—were surveyed along with those with teeth, like bottlenose dolphins and sperm whales. The taste genes weren’t gone per se, but were irreparably damaged by mutations, the team reports online this month in Genome Biology and Evolution. Genes encode proteins, which in turn execute certain functions in cells. Certain errors in the code can derail protein production—at which point the gene becomes a “pseudogene,” a lingering shell of a forgotten trait. Identical pseudogene corpses were discovered across the different cetacean species for the sweet, bitter, umami, and sour taste receptors. Salty tastes were the only exception. © 2014 American Association for the Advancement of Science.
Jessica Morrison Interference from electronics and AM radio signals can disrupt the internal magnetic compasses of migratory birds, researchers report today in Nature. The work raises the possibility that cities have significant effects on bird migration patterns. Decades of experiments have shown that migratory birds can orient themselves on migration paths using internal compasses guided by Earth's magnetic field. But until now, there has been little evidence that electromagnetic radiation created by humans affects the process. Like most biologists studying magnetoreception, report co-author Henrik Mouritsen used to work at rural field sites far from cities teeming with electromagnetic noise. But in 2002, he moved to the University of Oldenburg, in a German city of around 160,000 people. As part of work to identify the part of the brain in which compass information is processed, he kept migratory European robins (Erithacus rubecula) inside wooden huts — a standard procedure that allows researchers to investigate magnetic navigation while being sure that the birds are not getting cues from the Sun or stars. But he found that on the city campus, the birds could not orient themselves in their proper migratory direction. “I tried all kinds of stuff to make it work, and I couldn’t make it work,” Mouritsen says, “until one day we screened the wooden hut with aluminium.” Mouritsen and his colleagues covered the huts with aluminium plates and electrically grounded them to cut out electromagnetic noise in frequencies ranging from 50 kilohertz to 5 megahertz — which includes the range used for AM radio transmissions. The shielding reduced the intensity of the noise by about two orders of magnitude. Under those conditions, the birds were able to orient themselves. © 2014 Nature Publishing Group,
Keyword: Animal Migration
Link ID: 19590 - Posted: 05.08.2014
Humans stink, and it’s wonderful. A few whiffs of a pillow in the morning can revive memories of a lover. The sweaty stench of a gym puts us in the mood to exercise. Odors define us, yet the scientific zeitgeist is that we don’t communicate through pheromones—scents that influence behavior. A new study challenges that thinking, finding that scent can change whether we think someone is masculine or feminine. Humans carry more secretion and sweat glands in their skin than any other primate. Yet 70% of people lack a vomeronasal organ, a crescent-shaped bundle of neurons at the base of each nostril that allows a variety of species—from reptiles to nonprimate mammals—to pick up on pheromones. (If you’ve ever seen your cat huff something, he’s using this organ.) Still, scientists have continued to hunt for examples of pheromones that humans might sense. Two strong candidates are androstadienone (andro) and estratetraenol (estra). Men secrete andro in their sweat and semen, while estra is primarily found in female urine. Researchers have found hints that both trigger arousal—by improving moods and switching on the brain’s “urge” center, the hypothalamus—in the opposite sex. Yet to be true pheromones, these chemicals must shape how people view different genders. That’s exactly what they do, researchers from the Chinese Academy of Sciences in Beijing report online today in Current Biology. The team split men and women into groups of 24 and then had them watch virtual simulations of a human figure walking (see video). The head, pelvis, and major joints in each figure were replaced with moving dots. Subjects in prior studies had ranked the videos as being feminine or masculine. For instance, watch the figure on the far left, which was gauged as having a quintessential female strut. Notice a distinctive swagger in the “hip” dots and how they contrast with the flat gait of the “male” prototype all the way to the right. © 2014 American Association for the Advancement of Science
by Bethany Brookshire When you are waiting with a friend to cross a busy intersection, car engines running, horns honking and the city humming all around you, your brain is busy processing all those sounds. Somehow, though, the human auditory system can filter out the extraneous noise and allow you to hear what your friend is telling you. But if you tried to ask your iPhone a question, Siri might have a tougher time. A new study shows how the mammalian brain can distinguish the signal from the noise. Brain cells in the primary auditory cortex can both turn down the noise and increase the gain on the signal. The results show how the brain processes sound in noisy environments, and might eventually help in the development of better voice recognition devices, including improvements to cochlear implants for those with hearing loss. Not to mention getting Siri to understand you on a chaotic street corner. Nima Mesgarani and colleagues at the University of Maryland in College Park were interested in how mammalian brains separate speech from background noise. Ferrets have an auditory system that is extremely similar to that of humans. So the researchers looked at the A1 area of the ferret cortex, which corresponds to our auditory A1 region. Equipped with carefully implanted electrodes, the alert ferrets listened to both ferret sounds and parts of human speech. The ferret sounds and speech were presented alone, against a background of white noise, against pink noise (noise with equal energy at all octaves that sounds lower in pitch than white noise) and against reverberation. Then they took the neural signals recorded from the electrodes and used a computer simulation to reconstruct the sounds the animal was hearing. In results published April 21 in Proceedings of the National Academy of Sciences, the researchers show the ferret brain is quite good at detecting both ferret sounds and speech in all three noisy conditions.
“We found that the noise is drastically decreased, as if the brain of the ferret filtered it out and recovered the cleaned speech,” Mesgarani says. © Society for Science & the Public 2000 - 2013.
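The reconstruction step Mesgarani describes, turning recorded neural activity back into an estimate of the sound the animal heard, is commonly done with a linear "stimulus reconstruction" decoder. The sketch below is a generic illustration on simulated data, not the study's actual pipeline; the array sizes, noise level and ridge penalty are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "stimulus" (e.g. a 16-channel spectrogram over time) and
# neural responses that are a noisy linear mixture of it.
n_time, n_freq, n_neurons = 500, 16, 40
stimulus = rng.standard_normal((n_time, n_freq))
mixing = rng.standard_normal((n_freq, n_neurons))
responses = stimulus @ mixing + 0.5 * rng.standard_normal((n_time, n_neurons))

# Fit a ridge-regularised linear decoder mapping responses back to the stimulus.
lam = 1.0
W = np.linalg.solve(responses.T @ responses + lam * np.eye(n_neurons),
                    responses.T @ stimulus)

# Reconstruct the stimulus and score it with a correlation per channel.
reconstruction = responses @ W
r = [np.corrcoef(stimulus[:, f], reconstruction[:, f])[0, 1]
     for f in range(n_freq)]
print(f"mean reconstruction correlation: {np.mean(r):.2f}")
```

With real recordings the pattern is the same: fit the decoder on one set of trials, then reconstruct held-out trials to measure how much of the cleaned signal the cortex preserves.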
Regina Nuzzo Gene therapy delivered to the inner ear can help shrivelled auditory nerves to regrow — and in turn, improve bionic ear technology, researchers report today in Science Translational Medicine. The work, conducted in guinea pigs, suggests a possible avenue for developing a new generation of hearing prosthetics that more closely mimics the richness and acuity of natural hearing. Sound travels from its source to the ears, and eventually to the brain, through a chain of biological translations that convert air vibrations to nerve impulses. When hearing loss occurs, it’s usually because crucial links near the end of this chain — between the ear’s cochlear cells and the auditory nerve — are destroyed. Cochlear implants are designed to bridge this missing link in people with profound deafness by implanting an array of tiny electrodes that stimulate the auditory nerve. Although cochlear implants often work well in quiet situations, people who have them still struggle to understand music or follow conversations amid background noise. After long-term hearing loss, the ends of the auditory nerve bundles are often frayed and withered, so the electrode array implanted in the cochlea must blast a broad, strong signal to try to make a connection, instead of stimulating a more precise array of neurons corresponding to particular frequencies. The result is an ‘aural smearing’ that obliterates fine resolution of sound, akin to forcing a piano player to wear snow mittens or a portrait artist to use finger paints. To try to repair auditory nerve endings and help cochlear implants to send a sharper signal to the brain, researchers turned to gene therapy. Their method took advantage of the electrical impulses delivered by the cochlear-implant hardware, rather than the viruses often used to carry genetic material, to temporarily turn inner-ear cells porous.
This allowed DNA to slip in, says lead author Jeremy Pinyon, an auditory scientist at the University of New South Wales in Sydney, Australia. © 2014 Nature Publishing Group
By Floyd Skloot, March 27, 2009. I was fine the night before. The little cold I’d had was gone, and I’d had the first good night’s sleep all week. But when I woke up Friday morning at 6:15 and got out of bed, the world was whirling counterclockwise. I knocked against the bookcase, stumbled through the bathroom doorway and landed on my knees in front of the sink. It was as though I’d been tripped by a ghost lurking beside the bed. Even when I was on all fours, the spinning didn’t stop. Lightheaded, reaching for solid support, I made it back to bed and, showing keen analytical insight, told my wife, Beverly, “Something’s wrong.” The only way I could put on my shirt was to kneel on the floor first. I teetered when I rose. Trying to keep my head still, moving only my eyes, I could feel my back and shoulders tightening, forming a shell. Everything was in motion, out of proportion, unstable. I barely made it downstairs for breakfast, clutching the banister, concentrating on each step and, when I finally made it to the kitchen, feeling too aswirl to eat anyway. I didn’t realize it at the time, but those stairs would become my greatest risk during this attack of relentless, intractable vertigo. Vertigo — the feeling that you or your surroundings are spinning — is a symptom, not a disease. You don’t get a diagnosis of vertigo; instead, you present with vertigo, a hallmark of balance dysfunction. Or with dizziness, a more generalized term referring to a range of off-kilter sensations including wooziness, faintness, unsteadiness, spatial disorientation, a feeling akin to swooning. It happens to almost everyone: too much to drink or standing too close to the edge of a roof or working out too hard or getting up too fast. © 1996-2014 The Washington Post
Link ID: 19516 - Posted: 04.22.2014
By Daisy Yuhas Strange as it may sound, some scientists suspect that the humble armpit could be sending all kinds of signals, from casual flirtation to sounding the alarm. That’s because the body’s secretions, some stinky and others below the threshold your nose can detect, may be rife with chemical messages called pheromones. Yet despite half a century of research into these subtle cues, we have yet to find direct evidence of their existence in humans. Humans and other animals have an olfactory system designed to detect and discriminate between thousands of chemical compounds. For more than 50 years, scientists have been aware that certain insects and animals can release chemical compounds—often as oils or sweat—and that other creatures can detect and respond to these compounds, which allows for a form of silent, purely chemical communication. Although the exact definition has been debated and redefined several times, pheromones are generally recognized as single or small sets of compounds that transmit signals between organisms of the same species. They are typically just one part of the larger potpourri of odorants emitted from an insect or animal, and some pheromones do not have a discernible scent. Since pheromones were first defined in 1959, scientists have found many examples of pheromonal communication. The most striking of these signals elicits an immediate behavioral response. For example, the female silk moth releases a trail of the molecule bombykol, which unerringly draws males from the moment they encounter it. Slower-acting pheromones can affect the recipient’s reproductive physiology, as when the alpha-farnesene molecule in male mouse urine accelerates puberty in young female mice. © 2014 Scientific American
By KATHERINE BOUTON Like almost all newborns in this country, Alex Justh was given a hearing test at birth. He failed, but his parents were told not to worry: He was a month premature and there was mucus in his ears. A month later, an otoacoustic emission test, which measures the response of hair cells in the inner ear, came back normal. Alex was the third son of Lydia Denworth and Mark Justh (pronounced Just), and at first they “reveled at what a sweet and peaceful baby he was,” Ms. Denworth writes in her new book, “I Can Hear You Whisper: An Intimate Journey Through the Science of Sound and Language,” being published this week by Dutton. But Alex began missing developmental milestones. He was slow to sit up, slow to stand, slow to walk. His mother felt a “vague uneasiness” at every delay. He seemed not to respond to questions, the kind one asks a baby: “Can you show me the cow?” she’d ask, reading “Goodnight, Moon.” Nothing. No response. At 18 months Alex unequivocally failed a hearing test, but there was still fluid in his ears, so the doctor recommended a second test. It wasn’t until 2005, when Alex was 2 ½, that they finally realized he had moderate to profound hearing loss in both ears. This is very late to detect deafness in a child; the ideal time is before the first birthday. Alex’s parents took him to Dr. Simon Parisier, an otolaryngologist at New York Eye and Ear Infirmary, who recommended a cochlear implant as soon as possible. “Age 3 marked a critical juncture in the development of language,” Ms. Denworth writes. “I began to truly understand that we were not just talking about Alex’s ears. We were talking about his brain.” © 2014 The New York Times Company
By Simon Makin Scientists have observed that reading ability scales with socioeconomic status. Yet music might help close the gap, according to Nina Kraus and her colleagues at Northwestern University. Kraus's team tested the auditory abilities of teenagers aged 14 or 15, grouped by socioeconomic status (as indexed by their mother's level of education, a commonly used surrogate measure). The researchers recorded the kids' brain waves with EEG as they listened to a repeated syllable against soft background sound and when they heard nothing. They found that children of mothers with a lower education had noisier, weaker and more variable neural activity in response to sound and greater activity in the absence of sound. The children also scored lower on tests of reading and working memory. Kraus thinks music training is worth investigating as a possible intervention for such auditory deficits. The brains of trained musicians differ from nonmusicians, and they also enjoy a range of auditory advantages, including better speech perception in noise, according to research from Kraus's laboratory. The researchers admit that this finding could be the result of preexisting differences that predispose some people to choose music as a career or hobby, but they point out that some experimental studies show that musical training, whether via one-on-one lessons or in group sessions, enhances people's response to speech. Most recently Kraus's group has shown that these effects may last. Kraus surveyed 44 adults aged 55 to 76 and found that four or more years of musical training in childhood was linked to faster neural responses to speech, even for the older adults who had not picked up an instrument for more than 40 years. © 2014 Scientific American
If you know only one thing about violins, it is probably this: A 300-year-old Stradivarius supposedly possesses mysterious tonal qualities unmatched by modern instruments. However, even elite violinists cannot tell a Stradivarius from a top-quality modern violin, a new double-blind study suggests. Like the sound of coughing during the delicate second movement of Beethoven's violin concerto, the finding seems sure to annoy some people, especially dealers who broker the million-dollar sales of rare old Italian fiddles. But it may come as a relief to the many violinists who cannot afford such prices. "There is nothing magical [about old Italian violins], there is nothing that is impossible to reproduce," says Olivier Charlier, a soloist who participated in the study and who plays a fiddle made by Carlo Bergonzi (1683 to 1747). However, Yi-Jia Susanne Hou, a soloist who participated in the study and who until recently played a violin by Bartolomeo Giuseppe Antonio Guarneri "del Gesù" (1698 to 1744), questions whether the test was fair. "Whereas I believe that [the researchers] assembled some of the finest contemporary instruments, I am quite certain that they didn't have some of the finest old instruments that exist," she says. The study marks the latest round in debate over the "secret of Stradivarius." Some violinists, violinmakers, and scientists have thought that Antonio Stradivari (1644 to 1737) and his contemporaries in Cremona, Italy, possessed some secret—perhaps in the varnish or the wood they used—that enabled them to make instruments of unparalleled quality. Yet, for decades researchers have failed to identify a single physical characteristic that distinguishes the old Italians from other top-notch violins. The varnish is varnish; the wood (spruce and maple) isn't unusual. Moreover, for decades tests have shown that listeners cannot tell an old Italian from a modern violin. © 2014 American Association for the Advancement of Science
Dr Nicola Davis The electronic nose is an instrument that attempts to mimic the human olfactory system. Humans and animals don't identify specific chemicals within odours; what they do is recognise a smell based on a response pattern. You, as a human, will smell a strawberry and say "that's a strawberry". If you gave this to a traditional analytical piece of equipment, it might tell you what the 60-odd chemicals in the odour were - but that wouldn't tell you that it was a strawberry. How does it work? A traditional electronic nose has an array of chemical sensors designed to detect either gases or vapours. These sensors are not tuned to a single chemical, but detect families of chemicals - [for example] alcohols. Each one of these sensors is different, so when they are presented with a complex odour formed of many chemicals, each sensor responds differently to that odour. This creates a pattern of sensor responses, which the machine can be taught [to recognise]. Can't we just use dogs? A dog is very, very sensitive. Special research teams work on training dogs to detect cancers as you would explosives. What we are trying to do with the electronic nose is create an artificial means of replicating what the dog does. Such machines have the advantage that they don't get tired, will work all day and you only need to feed them electricity. © 2014 Guardian News and Media Limited
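The pattern-matching idea described here, recognising an odour from the joint response of many non-specific sensors rather than from any single chemical, can be sketched as a nearest-centroid classifier. Every value below (the six-sensor array, the odour names, the response numbers) is invented for illustration:

```python
import numpy as np

# Hypothetical responses of a six-sensor array to known odours.
# Each odour excites the non-specific sensors in a characteristic pattern.
training = {
    "strawberry": [0.9, 0.1, 0.4, 0.7, 0.2, 0.3],
    "coffee":     [0.2, 0.8, 0.6, 0.1, 0.9, 0.4],
    "banana":     [0.5, 0.3, 0.9, 0.6, 0.1, 0.2],
}
centroids = {name: np.asarray(v) for name, v in training.items()}

def classify(sensor_pattern):
    """Return the trained odour whose stored response pattern is closest."""
    x = np.asarray(sensor_pattern)
    return min(centroids, key=lambda name: np.linalg.norm(centroids[name] - x))

# A noisy re-measurement of strawberry still matches the strawberry pattern.
print(classify([0.85, 0.15, 0.45, 0.65, 0.25, 0.25]))  # -> strawberry
```

A production electronic nose would train on many repeated sniffs per odour and typically use a more robust classifier, but the core idea is the same: match the whole response pattern, not the individual chemicals.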
Nicola Davis The moment when 40-year-old Joanne Milne, who has been deaf since birth, first hears sound is a heart-wrenching scene. Amateur footage showing her emotional reaction has taken social media by storm and touched viewers across the world, reinforcing the technological triumph of cochlear implants. It’s a story I have touched on before. Earlier this month I wrote about how cochlear implants changed the lives of the Campbells, whose children Alice and Oliver were born with the condition auditory neuropathy spectrum disorder (ANSD). Implants, together with auditory verbal therapy, have allowed them to embrace the hearing world. It was incredibly moving to glimpse the long and difficult journey this family had experienced, and the joy that hearing - a sense so many of us take for granted - can bring. Cochlear implants are not a ‘cure’ for deafness. They use electrodes to directly stimulate auditory nerve fibres in the cochlea of the inner ear, creating a sense of sound that is not the same as that which hearing people experience, but nevertheless allows users to perceive speech, develop language and often enjoy music. As an adult, Milne, who was born with the rare condition Usher syndrome, is unusual in receiving cochlear implants on both sides. Such bilateral implantation enables users to work out where sounds are coming from, enhances speech perception in bustling environments and means that should something go wrong with one device, the user isn’t cut off from the hearing world. © 2014 Guardian News
Dragonflies are full of surprises. They have six legs, but most can’t walk. Their giant, 30,000-lens eyes can detect ultraviolet light. And though they lack the brain architecture normally required for a sense of smell, a new study finds that dragonflies may use odors to hunt prey. Smelling, as we humans understand it, requires certain hardware. Our noses are packed with olfactory receptors, each of which is tuned to a precise scent molecule. (Indeed, a recent study suggests we can detect a trillion smells.) When one wafts into our nostrils, these receptors send nerve signals to sensory way stations called glomeruli, which pass them along to the brain for interpretation—“Oh, a rose!” Glomeruli are shared by most terrestrial mammals and insects, and until now, scientists believed they represented the only possible route to a sense of smell. Because dragonflies and their close cousins, damselflies, don’t possess glomeruli or any higher order smell centers in their brains, most scientists believed these insects were unable to smell anything at all. Invertebrate biologist Manuela Rebora at the University of Perugia in Italy was not one of them. When her team took a closer look at dragonfly and damselfly antennae with an electron microscope, they spotted tiny bulbs in pits that resembled olfactory sensilla. Like the insect equivalent of a nose, these sensilla house olfactory neurons. When Rebora’s team exposed the suspected sensilla to scents, they emitted nerve pulses, supporting the idea that damselflies and dragonflies perceive odors. © 2014 American Association for the Advancement of Science.
Keyword: Chemical Senses (Smell & Taste)
Link ID: 19406 - Posted: 03.25.2014
By ANNE EISENBERG People who strain to hear conversations in noisy places sometimes shun hearing appliances as telltale signs of aging. But what if those devices looked like wireless phone receivers? Some companies are betting that the high-tech look of a new generation of sound amplifiers will tempt people to try them. The new in-ear amps come with wireless technology and typically cost $300 to $500. The devices include directional microphones and can be fine-tuned by smartphone apps. Whatever you do, don’t call these amplifiers hearing aids. They are not considered medical devices like the ones overseen by the Food and Drug Administration and dispensed by professionals to aid those with impaired hearing. Rather, they are over-the-counter systems cleared by the F.D.A. for occasional use in situations when speech and other sounds are hard to discern — say, in a noisy restaurant or while bird-watching. “The market is proliferating with lots of devices not necessarily made for impaired hearing, but for someone who wants a boost in certain challenging conditions like lectures,” said Neil J. DiSarno, chief staff officer for audiology at the American Speech-Language-Hearing Association. Dr. DiSarno is among the many audiologists who strongly urge people to see a physician first, in order to rule out medical causes of hearing loss, which could vary from earwax to a tumor, rather than self-diagnosing and self-treating a condition. Carole Rogin, president of the Hearing Industries Association, a trade group, said the biggest problem with personal amplification products was that people might use them instead of seeking appropriate medical oversight. “Untreated hearing loss is not a benign condition,” she said. “We want people to do something about it as soon as they notice a problem,” rather than using these devices to mask a potentially dangerous condition. © 2014 The New York Times Company
Link ID: 19401 - Posted: 03.24.2014
Jessica Morrison The human nose, with its roughly 400 types of scent receptors, can distinguish at least 1 trillion different odours, a resolution orders of magnitude beyond the previous estimate of just 10,000 scents, researchers report today in Science. Scientists who study smell have suspected a higher number for some time, but few studies have attempted to explore the limits of the human nose’s sensory capacity. “It has just been sitting there for somebody to do,” says study co-author Andreas Keller, an olfactory researcher at the Rockefeller University in New York. To investigate the limits of humans' sense of smell, Keller and his colleagues prepared scent mixtures with 10, 20 or 30 components selected from a collection of 128 odorous molecules. Then they asked 26 study participants to identify the mixture that smelled different in a sample set where two of three scents were the same. When the two scents contained components that overlapped by more than about 51%, most participants struggled to discriminate between them. The authors then calculated the number of possible mixtures that overlap by less than 51% to arrive at their estimate of how many smells a human nose can detect: at least 1 trillion. Donald Wilson, an olfactory researcher at the New York University School of Medicine, says the findings are “thrilling.” He hopes that the new estimate will help researchers begin to unravel an enduring mystery: how the nose and brain work together to process smells. © 2014 Nature Publishing Group,
Keyword: Chemical Senses (Smell & Taste)
Link ID: 19394 - Posted: 03.21.2014
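The scale of Keller's trillion-odour estimate is easier to appreciate with a little combinatorics: the raw count of 30-component mixtures drawn from 128 odorants is astronomically larger than a trillion, and almost any two randomly chosen mixtures share few enough components to fall under the roughly 51% overlap threshold that participants could discriminate. A rough sketch of both numbers (the Monte Carlo check is illustrative, not the paper's actual calculation):

```python
import math
import random

# Number of distinct mixtures of k components drawn from 128 odorants.
for k in (10, 20, 30):
    print(f"{k}-component mixtures: {math.comb(128, k):.3e}")
# Even the 30-component count alone dwarfs the final 1-trillion estimate,
# which keeps only mixtures that human subjects could tell apart.

# Fraction of random 30-component mixture pairs overlapping by < 51%,
# i.e. sharing fewer than about 15.3 of their 30 components.
random.seed(1)
odorants = range(128)
trials = 10_000
discriminable = sum(
    len(set(random.sample(odorants, 30)) &
        set(random.sample(odorants, 30))) / 30 < 0.51
    for _ in range(trials)
)
print(f"discriminable pairs: {discriminable / trials:.1%}")
```

Two random 30-component mixtures overlap by only about 7 components on average, far below the 51% threshold, which is why the pruned count still comes out so enormous.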
By Lenny Bernstein When your name is Leonard Bernstein, and you can’t play or sing a note, people are, understandably, a bit prone to noting this little irony. But now I have an explanation: My lack of musical aptitude is mostly genetic. Finnish researchers say they have found genes responsible for auditory response and neuro-cognitive processing that partially explain musical aptitude. They note “several genes mostly related to the auditory pathway, not only specifically to inner ear function, but also to neurocognitive processes.” The study was published in the March 11 issue of the journal “Molecular Psychiatry.” In an e-mail, one of the researchers, Irma Jarvela, of the University of Helsinki’s department of medical genetics, said heredity explains 60 percent of the musical ability passed down through families like Bach’s. The rest can be attributed to environment and training. Genes most likely are responsible for “better perception skills of different sounds,” Jarvela said. Feel free to cite this research at your next karaoke night. © 1996-2014 The Washington Post