Chapter 6. Hearing, Balance, Taste, and Smell
By Michelle Roberts Health editor, BBC News online The thousands of aromas humans can smell can be sorted into 10 basic categories, US scientists say. Prof Jason Castro, of Bates College, and Prof Chakra Chennubhotla, of the University of Pittsburgh, used a computerised technique to whittle down smells to their most basic essence. They told the PLoS One journal they had then tested 144 of these and found they could be grouped into 10 categories. The findings are contentious - some say there are thousands of permutations. Prof Castro said: "You have these 10 basic categories because they reflect important attributes about the world - danger, food and so on. "If you know these basic categories, then you can start to think about building smells. "We have not solved the problem of predicting a smell based on its chemical structure, but that's something we hope to do." He said it would be important to start testing the theory on more complex aromas, such as perfumes and everyday smells. In reality, any natural scent was likely to be a complex mix - a blend of the 10 different categories, he said. Prof Tim Jacob, a UK expert in smell science at Cardiff University, said: "In the 1950s a scientist called John Amoore proposed a theory which involved seven smell categories based upon molecular shape and size." BBC © 2013
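The "computerised technique" behind the study was non-negative matrix factorization (NMF): a table of odors rated against many descriptors is decomposed into a small number of basic category profiles. A minimal sketch of the idea, using a made-up ratings matrix (the data, the descriptor groupings, and the choice of three categories are all illustrative, not the paper's):

```python
import numpy as np

def nmf(V, k, iters=300, seed=0):
    """Lee-Seung multiplicative updates: factor non-negative V (odors x
    descriptors) into W (odors x k category weights) and H (k x descriptors
    category profiles), reducing the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 1e-3
    H = rng.random((k, V.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy ratings: 6 odors scored 0-9 on 4 descriptors (purely illustrative).
V = np.array([[9, 1, 0, 1],
              [8, 2, 1, 0],
              [0, 1, 9, 2],
              [1, 0, 8, 1],
              [1, 9, 1, 0],
              [0, 8, 2, 1]], dtype=float)

W, H = nmf(V, k=3)
# An odor's "basic category" is the factor with the largest weight in W.
print(W.argmax(axis=1))
```

With clean block-structured data like this, the three factors recover the three groups of odors; the real study did the same with 144 odor profiles and found that 10 factors captured the structure well.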
Amanda Fiegl What's the difference between a spicy meal and being tickled? Not much, from your lips' perspective. A new study reports that Szechuan pepper activates the same nerves that respond to a light physical touch. Researchers at the University College London Institute of Cognitive Neuroscience found that people experienced the same sensation when either Szechuan pepper—a spice used in many types of Asian cuisine—or a machine vibrating at a particular frequency was placed on their lips. "The pepper is sending the same information to the brain as having a buzzer on your lips," the study's lead author, Nobuhiro Hagura, said in an email. The study, published today in Proceedings of the Royal Society B with the wry headline "Food Vibrations," delves into the little-known field of psychophysics, which "describes the relation between physical reality and what we actually perceive," Hagura said. "Our research shows just one interesting example of a case where we perceive something quite different than what is actually there," he said. "In many cases, the difference between perception and reality can be explained by understanding how the nervous system transmits information about the outside world to the brain." Previous studies have shown that other spicy ingredients, such as chili peppers and mustard oils, activate the nerve fibers associated with pain and physical heat. And studies in animals indicated that the spicy chemical in Szechuan pepper—sanshool—acts on the nervous system's "light touch" fibers. So Hagura and his colleagues wanted to find out whether sanshool produces a conscious sensation of touch in humans. © 1996-2013 National Geographic Society.
By Bruce Bower Babies have an ear for primeval dangers, a new study suggests. By age 9 months, infants pay special attention to sounds that have signaled threats to children’s safety and survival throughout human evolution, say psychologist Nicole Erlich of the University of Queensland, Australia, and her colleagues. Those sounds include a snake hissing, adults’ angry voices, a crackling fire, thunder claps and — as a possible indicator of a nearby but unseen danger — another infant’s cries. Noises denoting modern dangers, as well as pleasant sounds, failed to attract the same level of interest from 9-month-olds, Erlich and her colleagues report Aug. 27 in Developmental Science. People can learn to fear just about anything. But tens of thousands of years of evolution have primed infants’ brains to home in on longstanding perils, the scientists propose. “There is something special about evolutionarily threatening sounds that infants respond to,” Erlich says. Another study that supported that idea, by psychologist David Rakison of Carnegie Mellon University in Pittsburgh, found that 11-month-olds rapidly learn to associate fearful faces with images of snakes and spiders (SN: 9/26/09, p. 11). “There is now a coherent argument that infants are biologically prepared in at least two sensory systems to learn quickly which evolutionarily relevant objects to fear,” Rakison says. © Society for Science & the Public 2000 - 2013
Two pioneers in the study of neural signaling and three researchers responsible for modern cochlear implants are winners of The Albert and Mary Lasker Foundation’s annual prize, announced today. The prestigious award honoring contributions in the medical sciences is often seen as a hint at future Nobel contenders. The prizes for basic and clinical research each carry a $250,000 honorarium. Richard Scheller of the biotech company Genentech and Thomas Südhof of Stanford University in Palo Alto, California, got their basic research Laskers for discovering the mechanisms behind the rapid release of neurotransmitters—the brain’s chemical messengers—into the space between neurons. This process underlies all communication among brain cells, and yet it was “a black box” before Scheller and Südhof’s work, says their colleague Robert Malenka, a synaptic physiologist at Stanford. The two worked independently in the late 1980s to identify individual proteins that mediate the process, and their development of genetically altered mice lacking these proteins was “an ambitious and high-risk approach,” Malenka says. Although “they weren’t setting out to understand any sort of disease,” their discoveries have helped unravel the genetic basis for neurological disorders such as Parkinson’s disease. This year’s clinical research prizes went to Graeme Clark, Ingeborg Hochmair, and Blake Wilson for their work to restore hearing to the deaf. In the 1970s, Hochmair and Clark of the cochlear implant company MED-EL in Innsbruck, Austria, and the University of Melbourne, respectively, were the first to insert multiple electrodes into the human cochlea to stimulate nerves that respond to different frequencies of sound. © 2012 American Association for the Advancement of Science
by Jon White Ever tried beetroot custard? Probably not, but your brain can imagine how it might taste by reactivating old memories in a new pattern. Helen Barron and her colleagues at University College London and Oxford University wondered if our brains combine existing memories to help us decide whether to try something new. So the team used an fMRI scanner to look at the brains of 19 volunteers who were asked to remember specific foods they had tried. Each volunteer was then given a menu of 13 unusual food combinations – including beetroot custard, tea jelly, and coffee yoghurt – and asked to imagine how good or bad they would taste, and whether or not they would eat them. "Tea jelly was popular," says Barron. "Beetroot custard not so much." When each volunteer imagined a new combination, they showed brain activity associated with each of the known ingredients at the same time. It is the first evidence to suggest that we use memory combination to make decisions, says Barron. Journal reference: Nature Neuroscience, doi: 10.1038/nn.3515 © Copyright Reed Business Information Ltd.
By Josh Shaffer DURHAM – It’s not often that the high-minded world of neuroscience collides with the corny, old-fashioned art of ventriloquism. One depends on dummies; the other excludes them. But a Duke University study uses puppet-based comedy to demonstrate the complicated inner workings of the brain and shows what every ventriloquist knows: The eye is more convincing than the ear. The study, which appears in the journal PLOS ONE, seeks to explain how the brain combines information coming from two different senses. How, asks Duke psychology and neuroscience professor Jennifer Groh, does the brain determine where a sound is coming from? In your eyes, the retina takes a snapshot, she said. It makes a topographic image of what’s in front of you. But the ears have nothing concrete to go on. They have to rely on how loud the sound is, how far away and from what direction. That’s where a ventriloquist comes in, providing a model for this problem. With a puppet, the noise and the movement are coming from different places. So how does the brain fix this and choose where to look? Duke researchers tested their hypotheses on 11 people and two monkeys, placing them in a soundproof booth.
By Tamar Haspel American eaters love a good villain. Diets that focus on one clear bad guy have gotten traction even as the bad guy has changed: fat, carbohydrates, animal products, cooked food, gluten. And now Robert Lustig, a pediatric endocrinologist at the University of California at San Francisco, is adding sugar to the list. His book “Fat Chance: Beating the Odds Against Sugar, Processed Food, Obesity, and Disease” makes the case that sugar is almost single-handedly responsible for Americans’ excess weight and the illnesses that go with it. “Sugar is the biggest perpetrator of our current health crisis,” says Lustig, blaming it not just for obesity and diabetes but also for insulin resistance, cardiovascular disease, stroke, even cancer. “Sugar is a toxin,” he says. “Pure and simple.” His target is one particular sugar: fructose, familiar for its role in making fruit sweet. Fruit, though, is not the problem; the natural sugar in whole foods, which generally comes in small quantities, is blameless. The fructose in question is in sweeteners — table sugar, high-fructose corn syrup, maple syrup, honey and others — which are all composed of the simple sugars fructose and glucose, in about equal proportions. Although glucose can be metabolized by every cell in the body, fructose is metabolized almost entirely by the liver. There it can result in the generation of free radicals (reactive molecules that can damage cells) and uric acid (which can lead to kidney disease or gout), and it can kick off a process called de novo lipogenesis, which generates fats that can find their way into the bloodstream or be deposited on the liver itself. These byproducts are linked to obesity, insulin resistance and the group of risk factors linked to diabetes, heart disease and stroke. (Lustig gives a detailed explanation of fructose metabolism in a well-viewed YouTube video called “Sugar: The Bitter Truth.”) © 1996-2013 The Washington Post
By Caitlin Kirkwood Do NOT EAT the chemicals. It is the #1 laboratory safety rule young scientists learn to never break and for good reason; it keeps lab citizens alive and unscathed. However, if it hadn’t been for the careless, rule-breaking habits of a few rowdy scientists ingesting their experiments, many artificial sweeteners may never have been discovered. Perhaps the strangest anecdote for artificial sweetener discovery, among tales of inadvertent finger-licking and smoking, is that of graduate student Shashikant Phadnis who misheard instructions from his advisor to ‘test’ a compound and instead tasted it. Rather than keeling over, he identified the sweet taste of sucralose, the artificial sweetener commonly known today as Splenda. Artificial sweeteners like Splenda, Sweet’N Low, and Equal provide a sweet taste without the calories. Around World War II, in response to a sugar shortage and evolving cultural views of beauty, the target consumer group for noncaloric sweetener manufacturers shifted from primarily diabetics to anyone in the general public wishing to reduce sugar intake and lose weight. Foods containing artificial sweeteners changed their labels. Instead of cautioning ‘only for consumption by those who must restrict sugar intake’, they read for those who ‘desire to restrict’ sugar. Today, the country is in the middle of a massive debate about the health implications of artificial sweeteners and whether they could be linked to obesity, cancer, and Alzheimer disease. It’s a good conversation to have because noncaloric sweeteners are consumed regularly in chewing gums, frozen dinners, yogurts, vitamins, baby food, and particularly in diet sodas. © 2013 Scientific American
Inner-ear problems could be a cause of hyperactive behaviour, research suggests. A study on mice, published in Science, said such problems caused changes in the brain that led to hyperactivity. It could lead to the development of new targets for behaviour disorder treatments, the US team says. A UK expert said the study's findings were "intriguing" and should be investigated further. Behavioural problems such as ADHD are usually thought to originate in the brain. But scientists have observed that children and teenagers with inner-ear disorders - especially those that affect hearing and balance - often have behavioural problems. However, no causal link has been found. The researchers in this study suggest inner-ear disorders lead to problems in the brain which then also affect behaviour. The team from the Albert Einstein College of Medicine of Yeshiva University in New York noticed some mice in the lab were particularly active - constantly chasing their tails. They were found to be profoundly deaf and have disorders of the inner ear - of both the cochlea, which is responsible for hearing, and the vestibular system, which is responsible for balance. The researchers found a mutation in the Slc12a2 gene, also found in humans. Blocking the gene's activity in the inner ears of healthy mice caused them to become increasingly active. BBC © 2013
By Bruce Bower Strange things happen when bad singers perform in public. Comedienne Roseanne Barr was widely vilified in 1990 after she screeched the national anthem at a major league baseball game. College student William Hung earned worldwide fame and a recording contract in 2004 with a tuneless version of Ricky Martin’s hit song “She Bangs” on American Idol. Several singers at karaoke bars in the Philippines have been shot to death by offended spectators for mangling the melody of Frank Sinatra’s “My Way.” For all the passion evoked by pitch-impaired vocalists, surprisingly little is known about why some people are cringe-worthy crooners. But now a rapidly growing field of research is beginning to untangle the mechanics of off-key singing. The new results may improve scientists’ understanding of how musical abilities develop and help create a toolbox of teaching strategies for aspiring vocalists. Glimpses are also emerging into what counts as “in tune” to the mind’s ear. It seems that listeners are more likely to label stray notes as in tune when those notes are sung as opposed to played on a violin. Running through this new wave of investigations is a basic theme: There is one way to carry a tune and many ways to fumble it. “It’s kind of amazing that any of us can vocally control pitch enough to sing well,” says psychologist Peter Pfordresher of the University at Buffalo, New York. Still, only about 10 percent of adults sing poorly, several reports suggest (although some researchers regard that figure as an underestimate). Some of those tune-challenged crooners have tone deafness, a condition called amusia, which afflicts about 4 percent of the population. Genetic and brain traits render these folks unable to tell different musical notes apart or to recognize a tune as common as “Happy Birthday.” Amusia often — but curiously, not always — results in inept singing. 
Preliminary evidence suggests that tone-deaf individuals register pitch changes unconsciously, although they can’t consciously decide whether one pitch differs from another. © Society for Science & the Public 2000 - 2013
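Research on off-key singing needs a way to quantify "how far off" a sung note is. The standard unit is the cent, a logarithmic measure of the ratio between two frequencies, with 100 cents to an equal-tempered semitone; a deviation of more than about 50 cents (a quarter tone) is a commonly used criterion for scoring a note as out of tune. The formula, as a quick sketch:

```python
import math

def cents(f_target, f_sung):
    """Pitch error in cents: 1200 * log2(f_sung / f_target).
    Positive means sharp, negative means flat; 100 cents = one semitone."""
    return 1200 * math.log2(f_sung / f_target)

# A singer aiming at A4 (440 Hz) but landing near A#4 (466.16 Hz)
# is about a full semitone sharp.
print(round(cents(440.0, 466.16), 1))
```

The logarithmic scale matches perception: the same cent value sounds like the same-sized pitch error whether the note is low or high.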
Elizabeth Pennisi Dolphins and bats don't have much in common, but they share a superpower: Both hunt their prey by emitting high-pitched sounds and listening for the echoes. Now, a study shows that this ability arose independently in each group of mammals from the same genetic mutations. The work suggests that evolution sometimes arrives at new traits through the same sequence of steps, even in very different animals. The research also implies that this convergent evolution is common—and hidden—within genomes, potentially complicating the task of deciphering some evolutionary relationships between organisms. Nature is full of examples of convergent evolution, wherein very distantly related organisms wind up looking alike or having similar skills and traits: Birds, bats, and insects all have wings, for example. Biologists have assumed that these novelties were devised, on a genetic level, in fundamentally different ways. That was also the case for two kinds of bats and toothed whales, a group that includes dolphins and certain whales, that have converged on a specialized hunting strategy called echolocation. Until recently, biologists had thought that different genes drove each instance of echolocation and that the relevant proteins could change in innumerable ways to take on new functions. But in 2010, Stephen Rossiter, an evolutionary biologist at Queen Mary, University of London, and his colleagues determined that both types of echolocating bats, as well as dolphins, had the same mutations in a particular protein called prestin, which affects the sensitivity of hearing. Looking at other genes known to be involved in hearing, they and other researchers found several others whose proteins were similarly changed in these mammals. © 2012 American Association for the Advancement of Science
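Rossiter's comparison comes down to finding alignment positions where echolocating bats and dolphins carry the identical amino-acid change relative to non-echolocating relatives. A toy version of that test (the sequences below are invented for illustration, not real prestin):

```python
# Toy aligned protein fragments (illustrative, not actual prestin data).
outgroup = "MKTAYNLQRS"   # stand-in ancestral/non-echolocating sequence
bat      = "MKSAYNLQTS"
dolphin  = "MKSAYNLQTS"
cow      = "MKTAYNLQRS"   # non-echolocating relative of dolphins

def convergent_sites(seq_a, seq_b, ref):
    """Positions where two lineages share the same amino-acid change
    relative to a reference sequence -- the signature of convergent
    molecular evolution."""
    return [i for i, (a, b, r) in enumerate(zip(seq_a, seq_b, ref))
            if a == b != r]

print(convergent_sites(bat, dolphin, outgroup))  # -> [2, 8]
print(convergent_sites(cow, dolphin, outgroup))  # -> []
```

In the toy data, bat and dolphin share two changes that the cow lacks; the real analyses apply this logic across whole hearing-gene alignments, with statistical tests to rule out chance.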
by Jennifer Viegas Goldfish not only listen to music, but they also can distinguish one composer from another, a new study finds. The paper adds to the growing body of evidence that many different animals understand music. For the study, published in the journal Behavioural Processes, Kazutaka Shinozuka and colleagues Haruka Ono and Shigeru Watanabe played two pieces of classical music near goldfish in a tank. The pieces were Toccata and Fugue in D minor by Johann Sebastian Bach and The Rite of Spring by Igor Stravinsky. The scientists trained the fish to gnaw on a little bead hanging on a filament in the water. Half of the fish were trained with food to gnaw whenever Bach played and the other half were taught to gnaw whenever Stravinsky music was on. The goldfish aced the test, easily distinguishing the two composers and getting a belly full of food in the process. The fish were more interested in the vittles than the music, but earlier studies on pigeons and songbirds suggest that Bach is the preferred choice, at least for birds. “These pieces can be classified as classical (Bach) and modern (Stravinsky) music,” Shinozuka explained. “Previously we demonstrated that Java sparrows preferred classical over modern music. Also, we demonstrated Java sparrows could discriminate between consonance and dissonance.” © 2013 Discovery Communications, LLC.
by Michael Marshall Life is tough when you're small. It's not just about getting trodden on by bigger animals. Some of the tiniest creatures struggle to make their bodies work properly. This leads to problems that we great galumphing humans will never experience. For instance, the smallest frogs are prone to drying out because water evaporates so quickly from their skin. Miniature animals can't have many offspring, because there is no room in their bodies to grow them. One tiny spider has even had to let its brain spill into its legs, because its head is too small to accommodate it. Gardiner's Seychelles frog is one of the smallest vertebrates known to exist, at just 11 millimetres long. Its tiny head is missing parts of its ears, which means it shouldn't be able to hear anything. It can, though, and that is thanks to its big mouth. One of only four species in the genus Sechellophryne, Gardiner's Seychelles frog is a true rarity. It is confined to a few square kilometres of two islands in the Seychelles, and even if you visit its habitat you're unlikely to see it. That's because the frog spends most of its time in moist leaf litter, so that it doesn't dry out. It eats tiny insects and other invertebrates. When it comes to hearing, it is sadly under-equipped. Unlike most frogs, it doesn't have an external eardrum. Inside its head, it does have the amphibian equivalent of a cochlea, which is the bit that actually detects sounds. But it doesn't have a middle ear to transmit the sound to the cochlea, and is also missing a bone called the columella that would normally help carry the sound. © Copyright Reed Business Information Ltd.
by Colin Barras Familiarity may breed contempt, and it also makes it easier to ignore our nearest and dearest. The human brain has an uncanny ability to focus on one voice in a sea of chatter – at a party, for example – but exactly how it does so is still up for debate. "In the past, people have looked at the acoustic characteristics that enable the brain to do this," says Ingrid Johnsrude at Queen's University in Kingston, Ontario, Canada. "Things like differences in voice pitch or its timbre." Johnsrude and her colleagues wondered if the familiarity of the voice also plays a role. Can people focus on one voice in a crowd more effectively if it belongs to a close relation? And is a familiar voice more easily ignored if we want to listen to someone else? To find out, the team recruited 23 married couples. Each had been married and living together for at least 18 years. Individuals were played two sentences simultaneously and asked to report back details about one of them, such as the colour and number mentioned. They did this correctly 80 per cent of the time when their spouse spoke the target sentence and a stranger spoke the decoy sentence. If strangers spoke both, the success rate dropped to 65 per cent. © Copyright Reed Business Information Ltd
By Dina Fine Maron Using sensors tucked inside the ears of live gerbils, researchers from Columbia University are providing critical insights into how the ear processes sound. In particular, the researchers have uncovered new evidence on how the cochlea, a coiled portion of the inner ear, processes and amplifies sound. The findings could lay the initial building blocks for better hearing aids and implants. The research could also help settle a long-simmering debate: Do the inner workings of the ear function somewhat passively, with sound waves traveling into the cochlea, bouncing along sensory tissue, and slowing as they encounter resistance until they are boosted and processed into sound? Or does the cochlea actively amplify sound waves? The study, published in Biophysical Journal, suggests the latter is the case. The team, led by Elizabeth Olson, a biomedical engineer at Columbia University, used sensors that simultaneously measured small pressure fluctuations and cell-generated voltages within the ear. The sensors allowed the researchers to pick up phase shifts—a change in the alignment of the vibrations of the sound waves within the ear—suggesting that some part of the ear was amplifying sound. What causes that phase shift is still unclear, although the researchers think the power behind the phase shift comes from the outer hair cells. Apparently the hair cells’ movement serves to localize and sharpen the frequency region of amplification. The researchers wrote that the mechanism appears to be akin to a child swinging on the playground. If somebody pushes a swing just once, the oscillations will eventually die out. If a child pumps her legs at certain times, however, it will put energy into the oscillations—that is power amplification at work. © 2013 Scientific American
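The swing analogy is the physics of a driven damped oscillator: pushes timed with the motion inject energy faster than damping removes it, so the oscillation is sustained instead of dying out. A small simulation of that idea (the parameters are arbitrary illustration values, not measured cochlear quantities):

```python
import math

def late_peak(drive, omega=2 * math.pi, zeta=0.1, dt=1e-3, t_end=10.0):
    """Peak |x| over the final 2 s of x'' + 2*zeta*omega*x' + omega^2*x = f(t).
    With drive=True, f pushes in the direction of motion -- like a child
    pumping a swing, or outer hair cells feeding energy into the wave.
    With drive=False the oscillation simply rings down."""
    x, v, t, peak = 1.0, 0.0, 0.0, 0.0
    while t < t_end:
        f = math.copysign(1.0, v) if drive else 0.0
        a = f - 2 * zeta * omega * v - omega**2 * x
        v += a * dt          # semi-implicit Euler keeps the oscillator stable
        x += v * dt
        if t > t_end - 2.0:
            peak = max(peak, abs(x))
        t += dt
    return peak

print(late_peak(True), late_peak(False))
```

After ten seconds the undriven oscillation has decayed to almost nothing, while the phase-locked drive holds the amplitude at a steady level: power amplification at work.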
By C. CLAIBORNE RAY A widely discussed 2006 study of transit noise in New York City, measuring noise on buses as well as in subway cars and on platforms, was described at the time as the first such formal study published since the 1930s. Done by scientists at the Mailman School of Public Health at Columbia University and published in The Journal of Urban Health, the study concluded that noise levels at subway and bus stops could easily exceed recognized public health recommendations and had the potential to damage hearing, given sufficient exposure. For example, guidelines from the Environmental Protection Agency and the World Health Organization set a limit of 45 minutes’ exposure to 85 decibels, the mean noise level measured on subway platforms. And nearly 60 percent of the platform measurements exceeded that level. The maximum noise levels inside subway cars were even higher than those on the platforms, with one-fifth exceeding 100 decibels and more than two-thirds exceeding 90 decibels. The study recommended properly fitted earplugs and earmuff-type protectors in loud transit environments, saying they could cut noise levels significantly at the eardrum. And it warned that personal listening devices only increased the total noise and risk. © 2013 The New York Times Company
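Guidelines like these follow an exchange-rate rule: the permissible exposure time halves for each fixed increment in sound level. A sketch using the article's 45-minutes-at-85-decibels figure and a 3 dB exchange rate (the 3 dB halving is an assumption here, used by NIOSH and the WHO; OSHA, for instance, uses 5 dB):

```python
def allowed_minutes(level_db, ref_db=85.0, ref_minutes=45.0, exchange_db=3.0):
    """Permissible exposure time at a given sound level: allowed time
    halves for every `exchange_db` decibels above the reference level."""
    return ref_minutes / 2 ** ((level_db - ref_db) / exchange_db)

for db in (85, 90, 100):
    print(f"{db} dB: {allowed_minutes(db):.1f} min")
# 85 dB -> 45.0 min; 100 dB -> about 1.4 min
```

The exponential falloff is why the in-car measurements above 100 decibels are so concerning: at that level, under this rule, the 45-minute allowance shrinks to well under two minutes.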
Josh Howgego When it comes to our sense of smell, we are all experiencing the world in very different ways. Scientists already know that humans' sensitivity to smelly molecules varies considerably from person to person (see: 'Soapy taste of coriander linked to genetic variants'). But evidence that genetic variations — as opposed to habit, culture or other factors — underlie these differences has been hard to come by. Geneticist Richard Newcomb of the New Zealand Institute for Plant and Food Research in Auckland and his colleagues searched for olfactory genes by testing 187 people’s sensitivity to ten chemicals found in everyday food, including the molecules that give distinctive smells to blue cheese, apples and violets. They found that, as expected, the smelling abilities of their subjects varied. The team then sequenced the subjects’ genomes and looked for differences that could predict people’s ability to detect each chemical through smell. For four of the ten chemicals, the researchers identified clusters of genes that convincingly predicted smelling ability, as they report today in Current Biology. The study could not conclude whether similar genetic associations exist for the other six compounds, or whether factors other than genes play a role in those cases. Previously, only five regions of the genome had been shown to affect olfactory ability when they undergo mutations, so Newcomb’s study has nearly doubled the number of genetic associations known to influence smell. And because there is nothing special about the chemicals they studied, Newcomb says that it is logical to think the findings would extend to lots of scents, meaning that people experience the plethora of chemicals surrounding them in endlessly different ways. © 2013 Nature Publishing Group
by Tanya Lewis, LiveScience In waters from Florida to the Caribbean, dolphins are showing up stranded or entangled in fishing gear with an unusual problem: They can't hear. More than half of stranded bottlenose dolphins are deaf, one study suggests. The causes of hearing loss in dolphins aren't always clear, but aging, shipping noise and side effects from antibiotics could play roles. "We're at a stage right now where we're determining the extent of hearing loss [in dolphins], and figuring out all the potential causes," said Judy St. Leger, director of pathology and research at SeaWorld in San Diego. "The better we understand that, the better we have a sense of what we should be doing [about it]." Whether the hearing loss is causing the dolphin strandings -- for instance, by steering the marine mammals in the wrong direction or preventing them from finding food -- is also still an open question. Dolphins are a highly social species. They use echolocation to orient themselves by bouncing high-pitched sound waves off of objects in their environment. They also "speak" to one another in a language of clicks and buzzing sounds. Because hearing is so fundamental to dolphins' survival, losing it can be detrimental. A 2010 study found that more than half of stranded bottlenose dolphins and more than a third of stranded rough-toothed dolphins had severe hearing loss. The animals' hearing impairment may have been a critical factor in their strandings, and all rescued cetaceans should be tested, the researchers said in the study, detailed in the journal PLOS ONE. © 2013 Discovery Communications, LLC.
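The ranging side of echolocation is simple arithmetic: the click travels out to the target and back, so the distance is half the round-trip delay multiplied by the speed of sound in seawater (roughly 1,500 m/s, a typical textbook value). A minimal sketch:

```python
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, a typical value; varies with
                                  # temperature, salinity and depth

def echo_range(delay_s, speed=SPEED_OF_SOUND_SEAWATER):
    """Distance to a target from the round-trip delay of an echolocation
    click: the sound travels out and back, so divide by two."""
    return speed * delay_s / 2

print(echo_range(0.02))  # a 20 ms round trip -> 15.0 m
```

The precision this demands is one reason hearing loss is so devastating for a dolphin: a few milliseconds of missed echo is the difference between locating prey and losing it.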
Researchers have found in mice that supporting cells in the inner ear, once thought to serve only a structural role, can actively help repair damaged sensory hair cells, the functional cells that turn vibrations into the electrical signals that the brain recognizes as sound. The study in the July 25, 2013 online edition of the Journal of Clinical Investigation reveals the rescuing act that supporting cells and a chemical they produce called heat shock protein 70 (HSP70) appear to play in protecting damaged hair cells from death. Finding a way to jumpstart this process in supporting cells offers a potential pathway to prevent hearing loss caused by certain drugs, and possibly by exposure to excess noise. The study was led by scientists at the National Institutes of Health. Over half a million Americans experience hearing loss every year from ototoxic drugs — drugs that can damage hair cells in the inner ear. These include some antibiotics and the chemotherapy drug cisplatin. In addition, about 15 percent of Americans between the ages of 20 and 69 have noise-induced hearing loss, which also results from damage to the sensory hair cells. Once destroyed or damaged by noise or drugs, sensory hair cells in the inner ears of humans don’t grow back or self-repair, unlike the sensory hair cells of other animals such as birds and amphibians. This has made exploring potential pathways to protect or regrow hair cells in humans a major focus of hearing research.
By Emma Tracey BBC News, Ouch An online magazine for the deaf community, Limping Chicken, recently ran an item on how deaf and hearing people sneeze differently. The article by partially deaf journalist Charlie Swinbourne got readers talking - and the cogs started turning at Ouch too. Swinbourne observes that deaf people don't make the "achoo!" sound when they sneeze, while hearing people seem to do it all the time - in fact, he put it in his humorous list, The Top 10 Annoying Habits of Hearing People. Nor is "achoo" universal - it's what English-speaking sneezers say. The French sneeze "atchoum". In Japan, it's "hakashun" and in the Philippines, they say "ha-ching". Inserting words into sneezes - and our responses such as "bless you" - are cultural habits we pick up along the way. So it's not surprising that British deaf people, particularly users of sign language, don't think to add the English word "achoo" to this most natural of actions. For deaf people, "a sneeze is what it should be... something that just happens", says Swinbourne in his article. He even attempts to describe what an achoo-free deaf sneeze sounds like: "[There is] a heavy breath as the deep pre-sneeze breath is taken, then a sharper, faster sound of air being released." Very little deaf-sneeze research exists, but a study has been done on deaf people and their laughter. BBC © 2013