Chapter 9. Hearing, Vestibular Perception, Taste, and Smell
By PETER ANDREY SMITH

Sweet, salty, sour and bitter — every schoolchild knows these are the building blocks of taste. Our delight in every scrumptious bonbon, every sizzling hot dog, derives in part from the tongue’s ability to recognize and signal just four types of taste. But are there really just four?

Over the last decade, research challenging the notion has been piling up. Today, savory, also called umami, is widely recognized as a basic taste, the fifth. And now other candidates, perhaps as many as 10 or 20, are jockeying for entry into this exclusive club. “What started off as a challenge to the pantheon of basic tastes has now opened up, so that the whole question is whether taste is even limited to a very small number of primaries,” said Richard D. Mattes, a professor of nutrition science at Purdue University.

Taste plays an intrinsic role as a chemical-sensing system for helping us find what is nutritious (stimulatory) and as a defense against what is poison (aversive). When we put food in our mouths, chemicals slip over taste buds planted in the tongue and palate. As they respond, we are thrilled or repulsed by what we’re eating. But the body’s reaction may not always be a conscious one.

In the late 1980s, in a windowless laboratory at Brooklyn College, the psychologist Anthony Sclafani was investigating the attractive power of sweets. His lab rats loved Polycose, a maltodextrin powder, even preferring it to sugar. That was puzzling for two reasons: Maltodextrin is rarely found in plants that rats might feed on naturally, and when human subjects tried it, the stuff had no obvious taste. More than a decade later, a team of exercise scientists discovered that maltodextrin improved athletic performance — even when the tasteless additive was swished around in the mouth and spit back out. Our tongues report nothing; our brains, it seems, sense the incoming energy.

© 2014 The New York Times Company
Keyword: Chemical Senses (Smell & Taste)
Link ID: 19867 - Posted: 07.22.2014
By Roni Jacobson

Last week, nine-year-old Hally Yust died after contracting a rare brain-eating amoeba infection while swimming near her family’s home in Kansas. The organism responsible, Naegleria fowleri, dwells in warm freshwater lakes and rivers and usually targets children and young adults. Once in the brain it causes a swelling called primary amebic meningoencephalitis. The infection is almost universally fatal: it kills more than 97 percent of its victims within days.

Although deadly, infections are exceedingly uncommon—there were only 34 reported in the U.S. during the past 10 years—but evidence suggests they may be increasing. Prior to 2010 more than half of cases came from Florida, Texas and other southern states. Since then, however, infections have popped up as far north as Minnesota. “We’re seeing it in states where we hadn’t seen cases before,” says Jennifer Cope, an epidemiologist and expert in amoeba infections at the U.S. Centers for Disease Control and Prevention. The expanding range of Naegleria infections could potentially be related to climate change, she adds, as the organism thrives in warmer temperatures. “It’s something we’re definitely keeping an eye on.”

Still, “when it comes to Naegleria there’s a lot we don’t know,” Cope says—including why it chooses its victims. The amoeba has strategies to evade the immune system, and treatment options are meager partly because of how fast the infection progresses. But research suggests that the infection can be stopped if it is caught soon enough. So what happens during an N. fowleri infection?

© 2014 Scientific American
Keyword: Chemical Senses (Smell & Taste)
Link ID: 19855 - Posted: 07.21.2014
By Virginia Morell

Many moth species sing courtship songs, and until now, scientists knew of only two types of such melodies. Some species imitate attacking bats, causing a female to freeze in place, whereas others croon tunes that directly woo the ladies. But the male yellow peach moth (Conogethes punctiferalis) belts out a combination song, scientists report online today in the Proceedings of the Royal Society B.

These tiny troubadours, which are found throughout Asia, emit ultrasonic refrains composed of short and long pulses by contracting their abdominal tymbals, sound-producing membranes. The short pulses, the scientists say, are similar to the hunting calls of insectivorous horseshoe bats. However, unlike other moth species, these males aren’t directing the batlike tunes at females, but rather at rival males.

Using playback experiments, the scientists showed that a male drives away competitors with the short pulses of his ditty, while inducing a female to mate with the long note. Indeed, a receptive virgin female moth (1 to 3 days old) typically raises her wings after hearing this part of the male’s song — a sign that she accepts the male, the scientists say. It is thus the first moth species known to have a dual-purpose melody.

© 2014 American Association for the Advancement of Science
Philip Ball

Lead guitarists usually get to play the flashy solos while the bass player gets only to plod to the beat. But this seeming injustice could have been determined by the physiology of hearing. Research published today in the Proceedings of the National Academy of Sciences suggests that people’s perception of timing in music is more acute for lower-pitched notes.

Psychologist Laurel Trainor of McMaster University in Hamilton, Canada, and her colleagues say that their findings explain why in the music of many cultures the rhythm is carried by low-pitched instruments while the melody tends to be taken by the highest pitched. This is as true for the low-pitched percussive rhythms of Indian classical music and Indonesian gamelan as it is for the walking double bass of a jazz ensemble or the left-hand part of a Mozart piano sonata.

Earlier studies have shown that people have better pitch discrimination for higher notes — a reason, perhaps, that saxophonists and lead guitarists often have solos at a squealing register. It now seems that rhythm works best at the other end of the scale.

Trainor and colleagues used the technique of electroencephalography (EEG) — electrical sensors placed on the scalp — to monitor the brain signals of people listening to streams of two simultaneous piano notes, one high-pitched and the other low-pitched, at equally spaced time intervals. Occasionally, one of the two notes was played slightly earlier, by just 50 milliseconds. The researchers studied the EEG recordings for signs that the listeners had noticed.

© 2014 Nature Publishing Group
Link ID: 19776 - Posted: 07.01.2014
Nicola Davis

The old adage that we eat with our eyes appears to be correct, according to research that suggests diners rate an artistically arranged meal as more tasty – and are prepared to pay more for it. The team at Oxford University tested the idea by gauging the reactions of diners to food presented in different ways. Inspired by Wassily Kandinsky's "Painting Number 201", Franco-Colombian chef Charles Michel, one of the authors of the study, designed a salad resembling the abstract artwork to explore how the presentation of food affects the dining experience.

"A number of chefs now are realising that they are being judged by how their foods photograph – be it in the fancy cookbooks [or], more often than not, when diners instagram their friends," explains Professor Charles Spence, experimental psychologist at the University of Oxford and a co-author of the study.

Thirty men and 30 women were each presented with one of three salads containing identical ingredients, arranged either to resemble the Kandinsky painting, a regular tossed salad, or a "neat" formation where each component was spaced away from the others. Seated alone at a table mimicking a restaurant setting, and unaware that other versions of the salad were on offer, each participant was given two questionnaires asking them to rate various aspects of the dish on a 10-point scale, before and after tucking into the salad.

Before participants sampled their plateful, the Kandinsky-inspired dish was rated higher for complexity, artistic presentation and general liking. Participants were prepared to pay twice as much for the meal as for either the regular or "neat" arrangements.

© 2014 Guardian News and Media Limited
by Frank Swain

WHEN it comes to personal electronics, it's difficult to imagine iPhones and hearing aids in the same sentence. I use both and know that hearing aids have a well-deserved reputation as deeply uncool lumps of beige plastic worn mainly by the elderly. Apple, on the other hand, is the epitome of cool consumer electronics. But the two are getting a lot closer. The first "Made for iPhone" hearing aids have arrived, allowing users to stream audio and data between smartphones and the device. It means hearing aids might soon be desirable, even to those who don't need them.

A Bluetooth wireless protocol developed by Apple last year lets the prostheses connect directly to Apple devices, streaming audio and data while using a fraction of the power of conventional Bluetooth. LiNX, made by ReSound, and Halo hearing aids made by Starkey – both international firms – use the iPhone as a platform to offer users new features and added control over their hearing aids. "The main advantage of Bluetooth is that the devices are talking to each other, it's not just one way," says David Nygren, UK general manager of ReSound.

This is useful as hearing aids have long suffered from a restricted user interface – there's not much room for buttons on a device the size of a kidney bean. This is a major challenge for hearing-aid users, because different environments require different audio settings. Some devices come with preset programmes, while others adjust automatically to what their programming suggests is the best configuration. This is difficult to get right, and often devices calibrated in the audiologist's clinic fall short in the real world.

© Copyright Reed Business Information Ltd.
Link ID: 19757 - Posted: 06.23.2014
By Ian Randall

The human tongue may have a sixth sense—and no, it doesn’t have anything to do with seeing ghosts. Researchers have found that in addition to recognizing sweet, sour, salty, savory, and bitter tastes, our tongues can also pick up on carbohydrates, the nutrients that break down into sugar and form our main source of energy.

Past studies have shown that some rodents can distinguish between sugars of different energy densities, while others can still tell carbohydrate and protein solutions apart even when their ability to taste sweetness is lost. A similar ability has been proposed in humans, with research showing that merely having carbohydrates in your mouth can improve physical performance. How this works, however, has been unclear.

In the new study, to be published in Appetite, the researchers asked participants to squeeze a sensor held between their right index finger and thumb when shown a visual cue. At the same time, the participants’ tongues were rinsed with one of three different fluids. The first two were artificially sweetened—to identical tastes—but with only one containing carbohydrate; the third, a control, was neither sweet nor carb-loaded. When the carbohydrate solution was used, the researchers observed a 30% increase in activity for the brain areas that control movement and vision. This reaction, they propose, is caused by our mouths reporting that additional energy in the form of carbs is coming.

The finding may explain both why diet products are often viewed as not being as satisfying as their real counterparts and why carbohydrate-loaded drinks seem to immediately perk up athletes—even before their bodies can convert the carbs to energy. Learning more about how this “carbohydrate sense” works could lead to the development of artificially sweetened foods, the researchers propose, “as hedonistically rewarding as the real thing.”

© 2014 American Association for the Advancement of Science
Keyword: Chemical Senses (Smell & Taste)
Link ID: 19700 - Posted: 06.06.2014
by Catherine de Lange

Could your ideal diet be written in your genes? That's the promise of nutrigenomics, which looks for genetic differences in the way people's bodies process food so that diets can be tailored accordingly. The field had a rocky start after companies overhyped its potential, but with advances in genetic sequencing, and a slew of new studies, the concept is in for a reboot.

Last week, Nicola Pirastu at the University of Trieste, Italy, and his colleagues told the European Society of Human Genetics meeting in Milan that diets tailored to genes that are related to metabolism can help people lose weight. The team used the results of a genetic test to design specific diets for 100 obese people that also provided them with 600 fewer calories than usual. A control group was placed on a 600-calorie deficit, untailored diet. After two years, both groups had lost weight, but those in the nutrigenetic group lost 33 per cent more. They also took only a year to lose as much weight as the group on the untailored diet lost in two years.

If this is shown to work in bigger, randomised trials, it would be fantastic, says Ana Valdes, a genetic epidemiologist at the University of Nottingham, UK. Some preliminary information will soon be available from Europe's Food4Me project. It is a study of 1200 people across several countries who were given either standard nutrition advice, or a similarly genetically tailored diet. "It's testing whether we can get bigger changes in diet using a personalised approach, and part of that is using genetic information," says team member John Mathers, director of the Human Nutrition Research Centre at Newcastle University, UK.

© Copyright Reed Business Information Ltd.
By Christie Nicholson

Conventional wisdom once had it that each brain region is responsible for a specific task. And so we have the motor cortex for handling movements, and the visual cortex, for processing sight. And scientists thought that such regions remained fixed for those tasks beyond the age of three. But within the past decade researchers have realized that some brain regions can pinch hit for other regions, for example, after a damaging stroke.

And now new research finds that the visual cortex is constantly doing double duty—it has a role in processing not just sight, but sound. When we hear [siren sound], we see a siren.

In the study, scientists scanned the brains of blindfolded participants as the subjects listened to three sounds: [audio of birds, audio of traffic, audio of a talking crowd]. And the scientists could tell what specific sounds the subjects were hearing just by analyzing the brain activity in the visual cortex. [Petra Vetter, Fraser W. Smith and Lars Muckli, Decoding Sound and Imagery Content in Early Visual Cortex, in Current Biology]

The next step is to determine why the visual cortex is horning in on the audio action. The researchers think the additional role conferred an evolutionary advantage: having a visual system primed by sound to see the source of that sound could have given humans an extra step in the race for survival.

© 2014 Scientific American
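The decoding step described in that study — inferring which sound a subject heard from the pattern of activity in the visual cortex — is, at its core, multivariate pattern classification. The sketch below illustrates the general idea with a simple nearest-centroid classifier; the synthetic "activity" vectors, noise level, and all numbers are invented stand-ins, not the study's data or its actual analysis method.

```python
import math
import random

random.seed(0)

SOUNDS = ["birds", "traffic", "crowd"]
PROTOTYPES = {                       # hypothetical mean activity patterns
    "birds":   [0.9, 0.1, 0.2, 0.7],
    "traffic": [0.2, 0.8, 0.6, 0.1],
    "crowd":   [0.4, 0.3, 0.9, 0.5],
}

def noisy_trial(sound, noise=0.05):
    """Simulate one trial: the prototype pattern plus measurement noise."""
    return [x + random.gauss(0, noise) for x in PROTOTYPES[sound]]

def train(trials):
    """Compute a mean activity pattern (centroid) for each sound label."""
    centroids = {}
    for label in SOUNDS:
        vecs = [v for (lab, v) in trials if lab == label]
        centroids[label] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return centroids

def decode(centroids, pattern):
    """Classify a new pattern as the sound with the nearest centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(centroids[label], pattern))

# Train on a few noisy trials per sound, then decode held-out trials.
training = [(s, noisy_trial(s)) for s in SOUNDS for _ in range(10)]
centroids = train(training)
correct = sum(decode(centroids, noisy_trial(s)) == s
              for s in SOUNDS for _ in range(20))
print(f"decoded {correct}/60 held-out trials correctly")
```

Real analyses of imaging data use far higher-dimensional patterns and cross-validated classifiers, but the principle — that each stimulus leaves a reproducible, distinguishable activity pattern — is the same.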
Tastes are a privilege. The oral sensations not only satisfy foodies, but also, on a primal level, protect animals from toxic substances. Yet cetaceans—whales and dolphins—may lack this crucial ability, according to a new study. Mutations in a cetacean ancestor obliterated their basic machinery for four of the five primary tastes, making them the first group of mammals to have lost the majority of this sensory system.

The five primary tastes are sweet, bitter, umami (savory), sour, and salty. These flavors are recognized by taste receptors—proteins that coat neurons embedded in the tongue. For the most part, taste receptor genes are present across all vertebrates. Except, it seems, in cetaceans.

Researchers uncovered a massive loss of taste receptors in these animals by screening the genomes of 15 species. The investigation spanned the two major lineages of cetaceans: Krill-loving baleen whales—such as bowheads and minkes—were surveyed along with those with teeth, like bottlenose dolphins and sperm whales. The taste genes weren’t gone per se, but were irreparably damaged by mutations, the team reports online this month in Genome Biology and Evolution.

Genes encode proteins, which in turn execute certain functions in cells. Certain errors in the code can derail protein production—at which point the gene becomes a “pseudogene,” a lingering shell of a forgotten trait. Identical pseudogene corpses were discovered across the different cetacean species for sweet, bitter, umami, and sour taste receptors. Salty tastes were the only exception.

© 2014 American Association for the Advancement of Science.
Jessica Morrison

Interference from electronics and AM radio signals can disrupt the internal magnetic compasses of migratory birds, researchers report today in Nature. The work raises the possibility that cities have significant effects on bird migration patterns.

Decades of experiments have shown that migratory birds can orient themselves on migration paths using internal compasses guided by Earth's magnetic field. But until now, there has been little evidence that electromagnetic radiation created by humans affects the process.

Like most biologists studying magnetoreception, report co-author Henrik Mouritsen used to work at rural field sites far from cities teeming with electromagnetic noise. But in 2002, he moved to the University of Oldenburg, in a German city of around 160,000 people. As part of work to identify the part of the brain in which compass information is processed, he kept migratory European robins (Erithacus rubecula) inside wooden huts — a standard procedure that allows researchers to investigate magnetic navigation while being sure that the birds are not getting cues from the Sun or stars. But he found that on the city campus, the birds could not orient themselves in their proper migratory direction. “I tried all kinds of stuff to make it work, and I couldn’t make it work,” Mouritsen says, “until one day we screened the wooden hut with aluminium.”

Mouritsen and his colleagues covered the huts with aluminium plates and electrically grounded them to cut out electromagnetic noise in frequencies ranging from 50 kilohertz to 5 megahertz — which includes the range used for AM radio transmissions. The shielding reduced the intensity of the noise by about two orders of magnitude. Under those conditions, the birds were able to orient themselves.

© 2014 Nature Publishing Group
Keyword: Animal Migration
Link ID: 19590 - Posted: 05.08.2014
Humans stink, and it’s wonderful. A few whiffs of a pillow in the morning can revive memories of a lover. The sweaty stench of a gym puts us in the mood to exercise. Odors define us, yet the scientific zeitgeist is that we don’t communicate through pheromones—scents that influence behavior. A new study challenges that thinking, finding that scent can change whether we think someone is masculine or feminine.

Humans carry more secretion and sweat glands in their skin than any other primate. Yet 70% of people lack a vomeronasal organ, a crescent-shaped bundle of neurons at the base of each nostril that allows a variety of species—from reptiles to nonprimate mammals—to pick up on pheromones. (If you’ve ever seen your cat huff something, he’s using this organ.) Still, scientists have continued to hunt for examples of pheromones that humans might sense.

Two strong candidates are androstadienone (andro) and estratetraenol (estra). Men secrete andro in their sweat and semen, while estra is primarily found in female urine. Researchers have found hints that both trigger arousal—by improving moods and switching on the brain’s “urge” center, the hypothalamus—in the opposite sex. Yet to be true pheromones, these chemicals must shape how people view different genders.

That’s exactly what they do, researchers from the Chinese Academy of Sciences in Beijing report online today in Current Biology. The team split men and women into groups of 24 and then had them watch virtual simulations of a human figure walking. The head, pelvis, and major joints in each figure were replaced with moving dots. Subjects in prior studies had ranked the videos as being feminine or masculine: one figure, for instance, was gauged as having a quintessential female strut, with a distinctive swagger in the “hip” dots that contrasts with the flat gait of the “male” prototype.

© 2014 American Association for the Advancement of Science
by Bethany Brookshire

When you are waiting with a friend to cross a busy intersection, car engines running, horns honking and the city humming all around you, your brain is busy processing all those sounds. Somehow, though, the human auditory system can filter out the extraneous noise and allow you to hear what your friend is telling you. But if you tried to ask your iPhone a question, Siri might have a tougher time.

A new study shows how the mammalian brain can distinguish the signal from the noise. Brain cells in the primary auditory cortex can both turn down the noise and increase the gain on the signal. The results show how the brain processes sound in noisy environments, and might eventually help in the development of better voice recognition devices, including improvements to cochlear implants for those with hearing loss. Not to mention getting Siri to understand you on a chaotic street corner.

Nima Mesgarani and colleagues at the University of Maryland in College Park were interested in how mammalian brains separate speech from background noise. Ferrets have an auditory system that is extremely similar to that of humans. So the researchers looked at the A1 area of the ferret cortex, which corresponds to our auditory A1 region. Equipped with carefully implanted electrodes, the alert ferrets listened to both ferret sounds and parts of human speech. The ferret sounds and speech were presented alone, against a background of white noise, against pink noise (noise with equal energy at all octaves that sounds lower in pitch than white noise) and against reverberation. Then they took the neural signals recorded from the electrodes and used a computer simulation to reconstruct the sounds the animal was hearing.

In results published April 21 in Proceedings of the National Academy of Sciences, the researchers show the ferret brain is quite good at detecting both ferret sounds and speech in all three noisy conditions. “We found that the noise is drastically decreased, as if the brain of the ferret filtered it out and recovered the cleaned speech,” Mesgarani says.

© Society for Science & the Public 2000 - 2013.
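The noise removal the ferret cortex appears to perform has a rough engineering analogue in spectral subtraction: estimate the noise floor from noise-only stretches, then subtract that estimate from every frame of the noisy signal. The toy below works on made-up "spectrogram" frames (lists of per-band magnitudes); it is only an illustration of the subtract-the-noise-floor idea, not the stimulus-reconstruction modeling the researchers actually used.

```python
def estimate_noise(noise_frames):
    """Average magnitude per frequency band over noise-only frames."""
    return [sum(band) / len(noise_frames) for band in zip(*noise_frames)]

def spectral_subtract(frames, noise_profile, floor=0.0):
    """Subtract the noise estimate from every frame, clipping at `floor`."""
    return [[max(m - n, floor) for m, n in zip(frame, noise_profile)]
            for frame in frames]

# Invented example: "speech" energy sits in band 0, steady "noise" in band 1.
noise_only = [[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]]
noisy_speech = [[0.9, 0.9, 0.1], [0.1, 0.9, 0.1], [0.8, 0.8, 0.2]]

profile = estimate_noise(noise_only)
cleaned = spectral_subtract(noisy_speech, profile)
print(cleaned)  # the steady noise band is suppressed; the speech band survives
```

Real systems do this on short-time Fourier transform magnitudes and must cope with non-stationary noise, which is exactly where the adaptive gain control described in the article outperforms a fixed subtraction.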
Regina Nuzzo

Gene therapy delivered to the inner ear can help shrivelled auditory nerves to regrow — and in turn, improve bionic ear technology, researchers report today in Science Translational Medicine. The work, conducted in guinea pigs, suggests a possible avenue for developing a new generation of hearing prosthetics that more closely mimics the richness and acuity of natural hearing.

Sound travels from its source to ears, and eventually to the brain, through a chain of biological translations that convert air vibrations to nerve impulses. When hearing loss occurs, it’s usually because crucial links near the end of this chain — between the ear’s cochlear cells and the auditory nerve — are destroyed. Cochlear implants are designed to bridge this missing link in people with profound deafness by implanting an array of tiny electrodes that stimulate the auditory nerve. Although cochlear implants often work well in quiet situations, people who have them still struggle to understand music or follow conversations amid background noise.

After long-term hearing loss, the ends of the auditory nerve bundles are often frayed and withered, so the electrode array implanted in the cochlea must blast a broad, strong signal to try to make a connection, instead of stimulating a more precise array of neurons corresponding to particular frequencies. The result is an ‘aural smearing’ that obliterates fine resolution of sound, akin to forcing a piano player to wear snow mittens or a portrait artist to use finger paints.

To try to repair auditory nerve endings and help cochlear implants to send a sharper signal to the brain, researchers turned to gene therapy. Their method took advantage of the electrical impulses delivered by the cochlear-implant hardware, rather than viruses often used to carry genetic material, to temporarily turn inner-ear cells porous. This allowed DNA to slip in, says lead author Jeremy Pinyon, an auditory scientist at the University of New South Wales in Sydney, Australia.

© 2014 Nature Publishing Group
By Floyd Skloot, March 27, 2009

I was fine the night before. The little cold I’d had was gone, and I’d had the first good night’s sleep all week. But when I woke up Friday morning at 6:15 and got out of bed, the world was whirling counterclockwise. I knocked against the bookcase, stumbled through the bathroom doorway and landed on my knees in front of the sink. It was as though I’d been tripped by a ghost lurking beside the bed. Even when I was on all fours, the spinning didn’t stop. Lightheaded, reaching for solid support, I made it back to bed and, showing keen analytical insight, told my wife, Beverly, “Something’s wrong.”

The only way I could put on my shirt was to kneel on the floor first. I teetered when I rose. Trying to keep my head still, moving only my eyes, I could feel my back and shoulders tightening, forming a shell. Everything was in motion, out of proportion, unstable. I barely made it downstairs for breakfast, clutching the banister, concentrating on each step and, when I finally made it to the kitchen, feeling too aswirl to eat anyway. I didn’t realize it at the time, but those stairs would become my greatest risk during this attack of relentless, intractable vertigo.

Vertigo — the feeling that you or your surroundings are spinning — is a symptom, not a disease. You don’t get a diagnosis of vertigo; instead, you present with vertigo, a hallmark of balance dysfunction. Or with dizziness, a more generalized term referring to a range of off-kilter sensations including wooziness, faintness, unsteadiness, spatial disorientation, a feeling akin to swooning. It happens to almost everyone: too much to drink or standing too close to the edge of a roof or working out too hard or getting up too fast.

© 1996-2014 The Washington Post
Link ID: 19516 - Posted: 04.22.2014
By Daisy Yuhas

Strange as it may sound, some scientists suspect that the humble armpit could be sending all kinds of signals from casual flirtation to sounding the alarm. That’s because the body’s secretions, some stinky and others below the threshold your nose can detect, may be rife with chemical messages called pheromones. Yet despite half a century of research into these subtle cues, we have yet to find direct evidence of their existence in humans.

Humans and other animals have an olfactory system designed to detect and discriminate between thousands of chemical compounds. For more than 50 years, scientists have been aware of the fact that certain insects and animals can release chemical compounds—often as oils or sweat—and that other creatures can detect and respond to these compounds, which allows for a form of silent, purely chemical communication.

Although the exact definition has been debated and redefined several times, pheromones are generally recognized as single or small sets of compounds that transmit signals between organisms of the same species. They are typically just one part of the larger potpourri of odorants emitted from an insect or animal, and some pheromones do not have a discernable scent.

Since pheromones were first defined in 1959, scientists have found many examples of pheromonal communication. The most striking of these signals elicits an immediate behavioral response. For example, the female silk moth releases a trail of the molecule bombykol, which unerringly draws males from the moment they encounter it. Slower-acting pheromones can affect the recipient’s reproductive physiology, as when the alpha-farnesene molecule in male mouse urine accelerates puberty in young female mice.

© 2014 Scientific American
By KATHERINE BOUTON

Like almost all newborns in this country, Alex Justh was given a hearing test at birth. He failed, but his parents were told not to worry: He was a month premature and there was mucus in his ears. A month later, an otoacoustic emission test, which measures the response of hair cells in the inner ear, came back normal.

Alex was the third son of Lydia Denworth and Mark Justh (pronounced Just), and at first they “reveled at what a sweet and peaceful baby he was,” Ms. Denworth writes in her new book, “I Can Hear You Whisper: An Intimate Journey Through the Science of Sound and Language,” being published this week by Dutton.

But Alex began missing developmental milestones. He was slow to sit up, slow to stand, slow to walk. His mother felt a “vague uneasiness” at every delay. He seemed not to respond to questions, the kind one asks a baby: “Can you show me the cow?” she’d ask, reading “Goodnight Moon.” Nothing. No response.

At 18 months Alex unequivocally failed a hearing test, but there was still fluid in his ears, so the doctor recommended a second test. It wasn’t until 2005, when Alex was 2½, that they finally realized he had moderate to profound hearing loss in both ears. This is very late to detect deafness in a child; the ideal time is before the first birthday.

Alex’s parents took him to Dr. Simon Parisier, an otolaryngologist at New York Eye and Ear Infirmary, who recommended a cochlear implant as soon as possible. “Age 3 marked a critical juncture in the development of language,” Ms. Denworth writes. “I began to truly understand that we were not just talking about Alex’s ears. We were talking about his brain.”

© 2014 The New York Times Company
By Simon Makin

Scientists have observed that reading ability scales with socioeconomic status. Yet music might help close the gap, according to Nina Kraus and her colleagues at Northwestern University.

Kraus's team tested the auditory abilities of teenagers aged 14 or 15, grouped by socioeconomic status (as indexed by their mother's level of education, a commonly used surrogate measure). The researchers recorded the kids' brain waves with EEG as they listened to a repeated syllable against soft background sound and when they heard nothing. They found that children of mothers with a lower education had noisier, weaker and more variable neural activity in response to sound and greater activity in the absence of sound. The children also scored lower on tests of reading and working memory.

Kraus thinks music training is worth investigating as a possible intervention for such auditory deficits. The brains of trained musicians differ from those of nonmusicians, and musicians also enjoy a range of auditory advantages, including better speech perception in noise, according to research from Kraus's laboratory. The researchers admit that this finding could be the result of preexisting differences that predispose some people to choose music as a career or hobby, but they point out that some experimental studies show that musical training, whether via one-on-one lessons or in group sessions, enhances people's response to speech.

Most recently Kraus's group has shown that these effects may last. Kraus surveyed 44 adults aged 55 to 76 and found that four or more years of musical training in childhood was linked to faster neural responses to speech, even for the older adults who had not picked up an instrument for more than 40 years.

© 2014 Scientific American
If you know only one thing about violins, it is probably this: A 300-year-old Stradivarius supposedly possesses mysterious tonal qualities unmatched by modern instruments. However, even elite violinists cannot tell a Stradivarius from a top-quality modern violin, a new double-blind study suggests.

Like the sound of coughing during the delicate second movement of Beethoven's violin concerto, the finding seems sure to annoy some people, especially dealers who broker the million-dollar sales of rare old Italian fiddles. But it may come as a relief to the many violinists who cannot afford such prices. "There is nothing magical [about old Italian violins], there is nothing that is impossible to reproduce," says Olivier Charlier, a soloist who participated in the study and who plays a fiddle made by Carlo Bergonzi (1683 to 1747). However, Yi-Jia Susanne Hou, a soloist who participated in the study and who until recently played a violin by Bartolomeo Giuseppe Antonio Guarneri "del Gesù" (1698 to 1744), questions whether the test was fair. "Whereas I believe that [the researchers] assembled some of the finest contemporary instruments, I am quite certain that they didn't have some of the finest old instruments that exist," she says.

The study marks the latest round in debate over the "secret of Stradivarius." Some violinists, violinmakers, and scientists have thought that Antonio Stradivari (1644 to 1737) and his contemporaries in Cremona, Italy, possessed some secret—perhaps in the varnish or the wood they used—that enabled them to make instruments of unparalleled quality. Yet, for decades researchers have failed to identify a single physical characteristic that distinguishes the old Italians from other top-notch violins. The varnish is varnish; the wood (spruce and maple) isn't unusual. Moreover, for decades tests have shown that listeners cannot tell an old Italian from a modern violin.

© 2014 American Association for the Advancement of Science
Dr Nicola Davis

The electronic nose is an instrument that attempts to mimic the human olfactory system. Humans and animals don't identify specific chemicals within odours; what they do is recognise a smell based on a response pattern. You, as a human, will smell a strawberry and say "that's a strawberry". If you gave this to a traditional analytical piece of equipment, it might tell you what the 60-odd chemicals in the odour were - but that wouldn't tell you that it was a strawberry.

How does it work? A traditional electronic nose has an array of chemical sensors, designed either to detect gases or vapours. These sensors are not tuned to a single chemical, but detect families of chemicals - [for example] alcohols. Each one of these sensors is different, so when they are presented with a complex odour formed of many chemicals, each sensor responds differently to that odour. This creates a pattern of sensor responses, which the machine can be taught [to recognise].

Can't we just use dogs? A dog is very, very sensitive. Special research teams work on training dogs to detect cancers as you would do explosives. What we are trying to do with the electronic nose is create an artificial means of replicating what the dog does. Such machines have the advantage that they don't get tired, will work all day and you only need to feed them electricity.

© 2014 Guardian News and Media Limited
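The pattern-matching step described above — recognising an odour from the joint response of many cross-sensitive sensors rather than from any single chemical — can be sketched in a few lines. Here an unknown sample is matched against a stored library of odour "fingerprints" by cosine similarity; the sensor count, odour names, and all response values are invented for illustration, not taken from any real e-nose.

```python
import math

ODOUR_LIBRARY = {                      # hypothetical trained fingerprints,
    "strawberry": [0.7, 0.9, 0.2, 0.4, 0.1, 0.6],   # one value per sensor
    "coffee":     [0.3, 0.2, 0.8, 0.9, 0.5, 0.1],
    "ethanol":    [0.9, 0.4, 0.1, 0.2, 0.8, 0.3],
}

def cosine_similarity(a, b):
    """Similarity of two response patterns, ignoring overall intensity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def identify(sensor_response):
    """Return the library odour whose fingerprint best matches the response."""
    return max(ODOUR_LIBRARY,
               key=lambda name: cosine_similarity(ODOUR_LIBRARY[name],
                                                  sensor_response))

# A slightly drifted strawberry-like response still matches "strawberry",
# because it's the overall pattern, not exact values, that identifies it.
sample = [0.65, 0.85, 0.25, 0.35, 0.15, 0.55]
print(identify(sample))
```

Using cosine similarity rather than raw distance mirrors the point made in the interview: it is the shape of the response pattern across the sensor array, not the absolute level of any one sensor, that identifies the smell.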