Chapter 9. Hearing, Vestibular Perception, Taste, and Smell
By KATHERINE BOUTON Like almost all newborns in this country, Alex Justh was given a hearing test at birth. He failed, but his parents were told not to worry: He was a month premature and there was mucus in his ears. A month later, an otoacoustic emission test, which measures the response of hair cells in the inner ear, came back normal. Alex was the third son of Lydia Denworth and Mark Justh (pronounced Just), and at first they “reveled at what a sweet and peaceful baby he was,” Ms. Denworth writes in her new book, “I Can Hear You Whisper: An Intimate Journey Through the Science of Sound and Language,” being published this week by Dutton. But Alex began missing developmental milestones. He was slow to sit up, slow to stand, slow to walk. His mother felt a “vague uneasiness” at every delay. He seemed not to respond to questions, the kind one asks a baby: “Can you show me the cow?” she’d ask, reading “Goodnight, Moon.” Nothing. No response. At 18 months Alex unequivocally failed a hearing test, but there was still fluid in his ears, so the doctor recommended a second test. It wasn’t until 2005, when Alex was 2 ½, that they finally realized he had moderate to profound hearing loss in both ears. This is very late to detect deafness in a child; the ideal time is before the first birthday. Alex’s parents took him to Dr. Simon Parisier, an otolaryngologist at New York Eye and Ear Infirmary, who recommended a cochlear implant as soon as possible. “Age 3 marked a critical juncture in the development of language,” Ms. Denworth writes. “I began to truly understand that we were not just talking about Alex’s ears. We were talking about his brain.” © 2014 The New York Times Company
By Simon Makin Scientists have observed that reading ability scales with socioeconomic status. Yet music might help close the gap, according to Nina Kraus and her colleagues at Northwestern University. Kraus's team tested the auditory abilities of teenagers aged 14 or 15, grouped by socioeconomic status (as indexed by their mother's level of education, a commonly used surrogate measure). The researchers recorded the kids' brain waves with EEG as they listened to a repeated syllable against soft background sound and when they heard nothing. They found that children of mothers with less education had noisier, weaker and more variable neural activity in response to sound and greater activity in the absence of sound. The children also scored lower on tests of reading and working memory. Kraus thinks music training is worth investigating as a possible intervention for such auditory deficits. The brains of trained musicians differ from those of nonmusicians, and musicians also enjoy a range of auditory advantages, including better speech perception in noise, according to research from Kraus's laboratory. The researchers admit that this finding could be the result of preexisting differences that predispose some people to choose music as a career or hobby, but they point out that some experimental studies show that musical training, whether via one-on-one lessons or in group sessions, enhances people's response to speech. Most recently Kraus's group has shown that these effects may last. Kraus surveyed 44 adults aged 55 to 76 and found that four or more years of musical training in childhood was linked to faster neural responses to speech, even for the older adults who had not picked up an instrument for more than 40 years. © 2014 Scientific American
If you know only one thing about violins, it is probably this: A 300-year-old Stradivarius supposedly possesses mysterious tonal qualities unmatched by modern instruments. However, even elite violinists cannot tell a Stradivarius from a top-quality modern violin, a new double-blind study suggests. Like the sound of coughing during the delicate second movement of Beethoven's violin concerto, the finding seems sure to annoy some people, especially dealers who broker the million-dollar sales of rare old Italian fiddles. But it may come as a relief to the many violinists who cannot afford such prices. "There is nothing magical [about old Italian violins], there is nothing that is impossible to reproduce," says Olivier Charlier, a soloist who participated in the study and who plays a fiddle made by Carlo Bergonzi (1683 to 1747). However, Yi-Jia Susanne Hou, a soloist who participated in the study and who until recently played a violin by Bartolomeo Giuseppe Antonio Guarneri "del Gesù" (1698 to 1744), questions whether the test was fair. "Whereas I believe that [the researchers] assembled some of the finest contemporary instruments, I am quite certain that they didn't have some of the finest old instruments that exist," she says. The study marks the latest round in debate over the "secret of Stradivarius." Some violinists, violinmakers, and scientists have thought that Antonio Stradivari (1644 to 1737) and his contemporaries in Cremona, Italy, possessed some secret—perhaps in the varnish or the wood they used—that enabled them to make instruments of unparalleled quality. Yet, for decades researchers have failed to identify a single physical characteristic that distinguishes the old Italians from other top-notch violins. The varnish is varnish; the wood (spruce and maple) isn't unusual. Moreover, for decades tests have shown that listeners cannot tell an old Italian from a modern violin. © 2014 American Association for the Advancement of Science
Dr Nicola Davis The electronic nose is an instrument that attempts to mimic the human olfactory system. Humans and animals don't identify specific chemicals within odours; what they do is recognise a smell based on a response pattern. You, as a human, will smell a strawberry and say "that's a strawberry". If you gave this to a traditional analytical piece of equipment, it might tell you what the 60-odd chemicals in the odour were - but that wouldn't tell you that it was a strawberry. How does it work? A traditional electronic nose has an array of chemical sensors designed to detect gases or vapours. These sensors are not tuned to a single chemical, but detect families of chemicals - [for example] alcohols. Each one of these sensors is different, so when they are presented with a complex odour formed of many chemicals, each sensor responds differently to that odour. This creates a pattern of sensor responses, which the machine can be taught [to recognise]. Can't we just use dogs? A dog is very, very sensitive. Special research teams work on training dogs to detect cancers as you would do explosives. What we are trying to do with the electronic nose is create an artificial means of replicating what the dog does. Such machines have the advantage that they don't get tired, will work all day and you only need to feed them electricity. © 2014 Guardian News and Media Limited
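The pattern-matching idea Davis describes can be sketched in a few lines of Python. Everything here is hypothetical: the four sensor "families," the response values and the odour labels are invented for illustration, and a real electronic nose would use many more sensors plus a trained statistical classifier rather than a simple nearest-pattern lookup.

```python
import math

# Hypothetical response patterns for a 4-sensor array, one broadly tuned
# sensor per chemical family (values invented for illustration).
TRAINED_ODOURS = {
    "strawberry": [0.2, 0.9, 0.6, 0.1],
    "onion":      [0.1, 0.2, 0.3, 0.9],
    "red wine":   [0.9, 0.5, 0.2, 0.2],
}

def classify(reading):
    """Return the trained odour whose sensor pattern lies nearest
    (Euclidean distance) to a new reading."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAINED_ODOURS,
               key=lambda name: distance(TRAINED_ODOURS[name], reading))

# A reading close to the strawberry pattern is labelled "strawberry",
# even though no single sensor identifies any one chemical.
print(classify([0.25, 0.85, 0.55, 0.15]))
```

The point of the sketch is the one Davis makes: no individual sensor names a chemical; identification emerges from the whole pattern of responses across the array.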
Nicola Davis The moment when 40-year-old Joanne Milne, who has been deaf since birth, first hears sound is a heart-wrenching scene. Amateur footage showing her emotional reaction has taken social media by storm and touched viewers across the world, reinforcing the technological triumph of cochlear implants. It’s a story I have touched on before. Earlier this month I wrote about how cochlear implants changed the lives of the Campbells, whose children Alice and Oliver were born with the condition auditory neuropathy spectrum disorder (ANSD). Implants, together with auditory verbal therapy, have allowed them to embrace the hearing world. It was incredibly moving to glimpse the long and difficult journey this family had experienced, and the joy that hearing - a sense so many of us take for granted - can bring. Cochlear implants are not a ‘cure’ for deafness. They use electrodes to directly stimulate auditory nerve fibres in the cochlea of the inner ear, creating a sense of sound that is not the same as that which hearing people experience, but that nevertheless allows users to perceive speech, develop language and often enjoy music. As an adult, Milne, who was born with the rare condition Usher syndrome, is unusual in receiving cochlear implants on both sides. Such bilateral implantation enables users to work out where sounds are coming from, enhances speech perception in bustling environments and means that should something go wrong with one device, the user isn’t cut off from the hearing world. © 2014 Guardian News
Dragonflies are full of surprises. They have six legs, but most can’t walk. Their giant, 30,000-lens eyes can detect ultraviolet light. And though they lack the brain architecture normally required for a sense of smell, a new study finds that dragonflies may use odors to hunt prey. Smelling, as we humans understand it, requires certain hardware. Our noses are packed with olfactory receptors, each of which is tuned to a precise scent molecule. (Indeed, a recent study suggests we can detect a trillion smells.) When one wafts into our nostrils, these receptors send nerve signals to sensory way stations called glomeruli, which pass them along to the brain for interpretation—“Oh, a rose!” Glomeruli are shared by most terrestrial mammals and insects, and until now, scientists believed they represented the only possible route to a sense of smell. Because dragonflies and their close cousins, damselflies, don’t possess glomeruli or any higher order smell centers in their brains, most scientists believed these insects were unable to smell anything at all. Invertebrate biologist Manuela Rebora at the University of Perugia in Italy was not one of them. When her team took a closer look at dragonfly and damselfly antennae with an electron microscope, they spotted tiny bulbs in pits that resembled olfactory sensilla. Like the insect equivalent of a nose, these sensilla house olfactory neurons. When Rebora’s team exposed the suspected sensilla to scents, they emitted nerve pulses, supporting the idea that damselflies and dragonflies perceive odors. © 2014 American Association for the Advancement of Science.
Keyword: Chemical Senses (Smell & Taste)
Link ID: 19406 - Posted: 03.25.2014
By ANNE EISENBERG People who strain to hear conversations in noisy places sometimes shun hearing appliances as telltale signs of aging. But what if those devices looked like wireless phone receivers? Some companies are betting that the high-tech look of a new generation of sound amplifiers will tempt people to try them. The new in-ear amps come with wireless technology and typically cost $300 to $500. The devices include directional microphones and can be fine-tuned by smartphone apps. Whatever you do, don’t call these amplifiers hearing aids. They are not considered medical devices like the ones overseen by the Food and Drug Administration and dispensed by professionals to aid those with impaired hearing. Rather, they are over-the-counter systems cleared by the F.D.A. for occasional use in situations when speech and other sounds are hard to discern — say, in a noisy restaurant or while bird-watching. “The market is proliferating with lots of devices not necessarily made for impaired hearing, but for someone who wants a boost in certain challenging conditions like lectures,” said Neil J. DiSarno, chief staff officer for audiology at the American Speech-Language-Hearing Association. Dr. DiSarno is among the many audiologists who strongly urge people to see a physician first, in order to rule out medical causes of hearing loss, which could vary from earwax to a tumor, rather than self-diagnosing and self-treating a condition. Carole Rogin, president of the Hearing Industries Association, a trade group, said the biggest problem with personal amplification products was that people might use them instead of seeking appropriate medical oversight. “Untreated hearing loss is not a benign condition,” she said. “We want people to do something about it as soon as they notice a problem,” rather than using these devices to mask a potentially dangerous condition. © 2014 The New York Times Company
Jessica Morrison The human nose has roughly 400 types of scent receptors that can detect at least 1 trillion different odours. The human nose can distinguish at least 1 trillion different odours, a resolution orders of magnitude beyond the previous estimate of just 10,000 scents, researchers report today in Science. Scientists who study smell have suspected a higher number for some time, but few studies have attempted to explore the limits of the human nose’s sensory capacity. “It has just been sitting there for somebody to do,” says study co-author Andreas Keller, an olfactory researcher at the Rockefeller University in New York. To investigate the limits of humans' sense of smell, Keller and his colleagues prepared scent mixtures with 10, 20 or 30 components selected from a collection of 128 odorous molecules. Then they asked 26 study participants to pick out, from sets of three samples in which two were identical, the mixture that smelled different. When two mixtures shared more than about 51% of their components, most participants struggled to discriminate between them. The authors then calculated the number of possible mixtures that overlap by less than 51% to arrive at their estimate of how many smells a human nose can detect: at least 1 trillion. Donald Wilson, an olfactory researcher at the New York University School of Medicine, says the findings are “thrilling.” He hopes that the new estimate will help researchers begin to unravel an enduring mystery: how the nose and brain work together to process smells. © 2014 Nature Publishing Group
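The scale of the headline number is easy to check with a back-of-envelope calculation. This is a sketch only: the raw counts below ignore the study's roughly-51%-overlap criterion, which is the step that reduces the full combinatorial space to the reported floor of "at least 1 trillion."

```python
from math import comb

# Raw number of distinct mixtures of k components drawn from the study's
# palette of 128 odorous molecules (order doesn't matter, no repeats).
for k in (10, 20, 30):
    print(f"128 choose {k} = {comb(128, k):.3e}")

# Even the smallest of these raw counts (k = 10, about 2.3e14) already
# exceeds 10**12, so restricting to mixtures that overlap by less than
# ~51% can still leave at least a trillion discriminable stimuli.
```

The study's actual estimate counts only mixture pairs below the discrimination threshold, but the binomial coefficients show why the space of possible stimuli is so vast to begin with.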
By Lenny Bernstein When your name is Leonard Bernstein, and you can’t play or sing a note, people are, understandably, a bit prone to noting this little irony. But now I have an explanation: My lack of musical aptitude is mostly genetic. Finnish researchers say they have found genes responsible for auditory response and neuro-cognitive processing that partially explain musical aptitude. They note “several genes mostly related to the auditory pathway, not only specifically to inner ear function, but also to neurocognitive processes.” The study was published in the March 11 issue of the journal “Molecular Psychiatry.” In an e-mail, one of the researchers, Irma Jarvela, of the University of Helsinki’s department of medical genetics, said heredity explains 60 percent of the musical ability passed down through families like Bach’s. The rest can be attributed to environment and training. Genes most likely are responsible for “better perception skills of different sounds,” Jarvela said. Feel free to cite this research at your next karaoke night. © 1996-2014 The Washington Post
by Simon Makin It brings new meaning to having an ear for music. Musical aptitude may be partly down to genes that determine the architecture of the inner ear. We perceive sound after vibrations in the inner ear are detected by "hair cells" and transmitted to the brain as electrical signals. There, the inferior colliculus integrates the signals with other sensory information before passing them on to other parts of the brain for processing. To identify gene variants associated with musical aptitude, Irma Järvelä at the University of Helsinki, Finland, and her colleagues analysed the genomes of 767 people assessed for their ability to detect small differences in the pitch and duration of sounds, and in musical patterns. The team compared the combined test scores with the prevalence of common variations in the participants' DNA. Genetic variations most strongly associated with high scores were found near the GATA2 gene – involved in the development of the inner ear and the inferior colliculus. Another gene, PCDH15, plays a role in the hair cells' ability to convert sound into brain signals. Jan Schnupp, an auditory neuroscientist at the University of Oxford, cautions that these findings should not be taken as evidence that genes determine musical ability. He points to the case of the profoundly deaf girl featured in the film "Lost and Sound". After meningitis damaged her inner ear, she became a superb pianist despite hearing the world only through cochlear implants. "Her case clearly demonstrates that even severe biological disadvantages can often be overcome," he says. "She would do extremely poorly at the pitch discrimination task used in this study." © Copyright Reed Business Information Ltd.
By Allie Wilkinson Vivaldi versus the Beatles. Both great. But your brain may be processing the musical information differently for each. That’s according to research in the journal NeuroImage. [Vinoo Alluri et al, From Vivaldi to Beatles and back: Predicting lateralized brain responses to music] For the study, volunteers had their brains scanned by functional MRI as they listened to two musical medleys containing songs from different genres. The scans identified brain regions that became active during listening. One medley included four instrumental pieces and the other consisted of songs from the B side of Abbey Road. Computer algorithms were used to identify specific aspects of the music, which the researchers were able to match with specific, activated brain areas. The researchers found that vocal and instrumental music get treated differently. While both hemispheres of the brain deal with musical features, the presence of lyrics shifts the processing of musical features to the left auditory cortex. These results suggest that the brain’s hemispheres are specialized for different kinds of sound processing. A finding revealed by what you might call instrumental analysis. © 2014 Scientific American
by Laura Sanders It truly pains me to bring you tired parents another round of “Is this bad for my baby?” But this week, a new study suggests that some white noise machines designed for babies can produce harmful amounts of sound. Before you despair about trashing your baby’s hearing, please keep in mind that like any study, the results are limited in what they can actually claim. And this one is no exception. I learned the power of white noise when Baby V and I ventured out to meet some new mamas for lunch. As I frantically tried to reverse the ensuing meltdown, another mom came over with her phone. “Try this,” she said as she held up her phone and blasted white noise. Lo and behold, her black magic worked. Instantly, Baby V snapped to attention, stopped screaming and stared wide-eyed at the dark wizardry that is the White Noise Lite app. Since then, I learned that when all else failed, the oscillating fan setting could occasionally jolt Baby V out of a screamfest. In general, I didn’t leave the noise on for long. It was annoying, and more importantly, it stopped working after the novelty wore off. But lots of parents do rely on white noise to soothe their babies and help them sleep through the night. These machines are recommended on top parenting websites by top pediatricians, parenting bloggers and, most convincingly, all of the other parents you know. Use liberally, the Internet experts recommend. To reap the benefits, white noise machines should be played all night long for at least the entire first year, many people think. And don’t be shy: The noise should be louder than you think. © Society for Science & the Public 2000 - 2013
Brian Owens The distinctive aroma of goats does more than just make barnyards extra fragrant. Male goats can use their heady scent to make female goats ovulate simply by being near them. Researchers had ascribed this 'male effect' to chemicals known as primer pheromones — a chemical signal that can cause long-lasting physiological responses in the recipient. Examples of primer pheromones are rare in mammals; the male effect in goats and sheep, and a similar effect in mice and rats, where the presence of males can speed up puberty in females, are the only known cases. But exactly what substances are at work, and how, has remained a mystery. Now, reproductive biologist Yukari Takeuchi from the University of Tokyo and her colleagues have identified a single molecule, known as 4-ethyloctanal, in the cocktail of male goat pheromones that activates the neural pathway that regulates reproduction in females. ”It has long been thought that pheromones have pivotal roles in reproductive success in mammals, but the mechanisms are scarcely known,” says Takeuchi. The researchers found that male goat pheromones are generally synthesized in the skin of the animal's head, so they designed a hat containing a material that captured the odorous molecules and placed it on the goats for a week to collect the scent. Analysis of the gases collected identified a range of compounds, many of which were unknown and were not present in castrated males. When exposed to a cocktail of 18 of these chemicals, the brains of female goats showed a sudden increase in the activity of the gonadotropin-releasing hormone (GnRH) pulse generator — the neural regulator of reproduction. © 2014 Nature Publishing Group
There is no biological cure for deafness—yet. We detect sound using sensory cells sporting microscopic hairlike projections, and when these so-called hair cells deep inside the inner ear are destroyed by illness or loud noise, they are gone forever. Or so scientists thought. A new study finds specific cells in the inner ear of newborn mice that regenerate these sensory cells—even after damage—potentially opening up a way to treat deafness in humans. Researchers knew that cells in the inner ear below hair cells—known as supporting cells—can become the sensory cells themselves when stimulated by a protein that blocks Notch signaling, which is an important mechanism for cell communication. Albert Edge, a stem cell biologist at Harvard Medical School in Boston, and his colleagues attempted to identify the exact type of supporting cells that transform into sensory ones and fill in the gaps left by the damaged cells. The researchers removed the organ of Corti—which is housed within a seashell-shaped cavity called the cochlea and contains sensory hair cells—from newborn mice and kept the cells alive in culture plates. They damaged the hair cells using the antibiotic gentamicin, which destroys their sound-sensing projections. When they examined the organ of Corti under the microscope, they saw that small numbers of hair cells had regenerated on their own. But if they blocked Notch signaling, they saw even more regenerated hair cells, the team reports today in Stem Cell Reports. The number that developed varied, but at the base of the cochlea, where the tissue received the most damage, hair cell numbers returned to about 40% of the original. “It’s interesting and encouraging that they are capable of regenerating,” Edge says. © 2014 American Association for the Advancement of Science.
By DEBORAH BLUM Toxicologists have long considered ethylene glycol, the active ingredient in many antifreeze and engine coolant formulas, to be a seductive and uniquely dangerous poison. For one thing, it’s sweet. “We actually had a mechanic who developed a taste for it,” recalled Dr. Marsha Ford, director of the Carolinas Poison Center in Charlotte, N.C. “He’d pour himself a little and sip it. And he kept doing that until he got sick.” And that’s the other danger: Ethylene glycol is a slow-acting poison. Even following a high dose, symptoms can take up to 48 hours to appear. The country’s poison control centers record more than 5,000 ethylene glycol ingestions annually; some 2,000 cases require medical treatment. Most are accidental, but ethylene glycol also figures in hundreds of suicide attempts every year — not to mention the occasional murder. Recently an Ohio woman was convicted of killing her fiancé by spiking raspberry iced tea with antifreeze. The situation for animals has been even more dangerous than for despised spouses. According to the Humane Society of the United States, as many as 90,000 pets and wild animals are poisoned annually by drinking spilled or carelessly stored products containing ethylene glycol. Now the manufacturers of those products have determined to do something about all the carnage. They are making antifreeze taste awful — so very bitter that it will be nigh impossible to drink by accident. © 2014 The New York Times Company
When you hear a friend’s voice, you immediately picture her, even if you can’t see her. And from the tone of her speech, you quickly gauge if she’s happy or sad. You can do all of this because your human brain has a “voice area.” Now, scientists using brain scanners and a crew of eager dogs have discovered that dog brains, too, have dedicated voice areas. The finding helps explain how canines can be so attuned to their owners’ feelings. “It’s absolutely brilliant, groundbreaking research,” says Pascal Belin, a neuroscientist at the University of Glasgow in the United Kingdom, who was part of the team that identified the voice areas in the human brain in 2000. “They’ve made the first comparative study of the cerebral processing of voices using nonprimates, and they’ve done it with a noninvasive technique by training dogs to lie in a scanner.” The scientists behind the discovery had previously shown that humans can readily distinguish between dogs’ happy and sad barks. “Dogs and humans share a similar social environment,” says Attila Andics, a neuroscientist in a research group at the Hungarian Academy of Sciences at Eötvös Loránd University in Budapest and the lead author of the new study. “So we wondered if dogs also get some social information from human voices.” To find out, Andics and his colleagues decided to scan the canine brain to see how it processes different types of sounds, including voices, barks, and natural noises. In humans, the voice area is activated when we hear others speak, helping us recognize a speaker’s identity and pick up on the emotional content in her voice. If dogs had voice areas, it could mean that these abilities aren’t limited to humans and other primates. © 2014 American Association for the Advancement of Science
Adrienne LaFrance For the better part of the past decade, Mark Kirby has been pouring drinks and booking gigs at the 55 Bar in New York City's Greenwich Village. The cozy dive bar is a neighborhood staple for live jazz that opened on the eve of Prohibition in 1919. It was the year Congress agreed to give American women the right to vote, and jazz was still in its infancy. Nearly a century later, the den-like bar is an anchor to the past in a city that's always changing. For Kirby, every night of work offers the chance to hear some of the liveliest jazz improvisation in Manhattan, an experience that's a bit like overhearing a great conversation. "There is overlapping, letting the other person say their piece, then you respond," Kirby told me. "Threads are picked up then dropped. There can be an overall mood and going off on tangents." Brain areas linked to meaning shut down during improvisational jazz interactions. In other words, this music is syntactic, not semantic. The idea that jazz can be a kind of conversation has long been an area of interest for Charles Limb, an otolaryngological surgeon at Johns Hopkins. So Limb, a musician himself, decided to map what was happening in the brains of musicians as they played. He and a team of researchers conducted a study that involved putting a musician in a functional MRI machine with a keyboard, and having him play a memorized piece of music and then a made-up piece of music as part of an improvisation with another musician in a control room. What researchers found: The brains of jazz musicians who are engaged with other musicians in spontaneous improvisation show robust activation in the same brain areas traditionally associated with spoken language and syntax. In other words, improvisational jazz conversations "take root in the brain as a language," Limb said. © 2014 by The Atlantic Monthly Group
By DOUGLAS QUENQUA The smell of a person’s earwax depends partly on his ethnic origin, a new study reports, suggesting that the substance could be an overlooked source of personal information. The earwax of Caucasian men contains more volatile organic compounds than that of East Asian men, researchers at the Monell Chemical Senses Center in Philadelphia found. Twelve such compounds are common to both groups, they said, but 11 of those are more plentiful in Caucasians. Monell researchers have previously found that underarm odor contains clues to a person’s age, health and sex. They suspected that earwax might contain similar markers, since a 2006 study found that a gene related to underarm odor, which also varies by ethnicity, helps determine a person’s type of earwax. (East Asians are more likely to have dry earwax, for example.) "We’re at the beginning of exploring a new and interesting biofluid secretion that has not been looked at in this manner before," said George Preti, an organic chemist at Monell and the senior author of the new study, which was published in The Journal of Chromatography B. Because of the fatty nature of earwax, or cerumen, Dr. Preti says it is a probable repository for odorants produced by diseases and the environment, and hence a potentially valuable diagnostic tool. A 2013 study showed that a whale’s earwax contains evidence of the animal’s exposure to pollutants and stress hormones, and earwax odor in humans is a known indicator of branched-chain ketoaciduria, also known as maple syrup urine disease. © 2014 The New York Times Company
by Bethany Brookshire CHICAGO – From a cockatoo bopping to the Backstreet Boys to a sea lion doing the boogie, nothing goes viral like an animal swaying to the music. Now, research shows that not only can bonobos feel the beat, they can play along. Music “engages the brain in a way that no other stimulus can,” says cognitive psychologist Edward Large of the University of Connecticut in Storrs. He and Patricia Gray, a biomusic researcher at the University of North Carolina at Greensboro, wanted to see if bonobos, which share 98.7 percent of their DNA with humans, might respond similarly to musical rhythms. The researchers gave a group of bonobos access to a specially tailored drum, then showed them people drumming rhythmically. Eventually three animals picked up the beat and were able to match tempos with the scientists. Bonobos were also found to prefer a faster pace than most people. Large and Gray presented their findings February 15 at the American Association for the Advancement of Science annual meeting. Rhythm involves the coordination of many brain areas, such as auditory and motor regions. Further research could help scientists understand whether only a few species can keep the beat, or if moving to the groove is widespread in the animal kingdom. © Society for Science & the Public 2000 - 2013.
Carl Zimmer In 2011, a 66-year-old retired math teacher walked into a London neurological clinic hoping to get some answers. A few years earlier, she explained to the doctors, she had heard someone playing a piano outside her house. But then she realized there was no piano. The phantom piano played longer and longer melodies, like passages from Rachmaninov’s Piano Concerto number 2 in C minor, her doctors recount in a recent study in the journal Cortex. By the time the woman — to whom the doctors refer only by her first name, Sylvia — came to the clinic, the music had become her nearly constant companion. Sylvia hoped the doctors could explain to her what was going on. Sylvia was experiencing a mysterious condition known as musical hallucinations. These are not pop songs that get stuck in your head. A musical hallucination can convince people there is a marching band in the next room, or a full church choir. Nor are musical hallucinations the symptoms of psychosis. People with musical hallucinations usually are psychologically normal — except for the songs they are sure someone is playing. The doctors invited Sylvia to volunteer for a study to better understand the condition. She agreed, and the research turned out to be an important step forward in understanding musical hallucinations. The scientists were able to compare her brain activity when she was experiencing hallucinations that were both quiet and loud — something that had never been done before. By comparing the two states, they found important clues to how the brain generates these illusions. If a broader study supports the initial findings, it could do more than help scientists understand how the brain falls prey to these phantom tunes. It may also shed light on how our minds make sense of the world. © 2014 The New York Times Company