Links for Keyword: Hearing



Links 1 - 20 of 535

By Meredith Levine. Word went round Janice Mackay's quiet neighbourhood that she was hitting the bottle hard. She'd been seen more than once weaving along the sidewalk in front of her suburban home in Pickering, just outside Toronto, in a sad, drunken stagger. But Mackay wasn't drunk. As it turned out, her inner ear, the body's balance centre, had been destroyed by medication when she was hospitalized for over a month back in May 2005. At the time, Mackay was diagnosed with a life-threatening infection in one of her ovaries, and so was put on a cocktail of medication, including an IV drip of gentamicin, a well-known, inexpensive antibiotic that is one of the few that hasn't fallen prey to antibiotic-resistant bacteria. A few weeks later, the infection was almost gone when Mackay, still hospitalized, suddenly developed the bed spins and vomiting. Her medical team told her she'd been lying down too long and gave her Gravol, but the symptoms didn't go away. In a follow-up appointment after her discharge, Mackay was told that the dizziness was a side effect of the gentamicin, and that she would probably have to get used to it. But she didn't discover the extent of the damage until later, when neurotologist Dr. John Rutka assessed her condition and concluded that the gentamicin had essentially destroyed her vestibular system, the body's motion detector, located deep within the inner ear. © CBC 2014

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 20198 - Posted: 10.13.2014

By Sarah C. P. Williams. A wind turbine, a roaring crowd at a football game, a jet engine running full throttle: Each of these things produces sound waves with components well below the frequencies humans can hear. But just because you can’t hear the low-frequency components of these sounds doesn’t mean they have no effect on your ears. Listening to just 90 seconds of low-frequency sound can change the way your inner ear works for minutes after the noise ends, a new study shows. “Low-frequency sound exposure has long been thought to be innocuous, and this study suggests that it’s not,” says audiology researcher Jeffery Lichtenhan of the Washington University School of Medicine in St. Louis, who was not involved in the new work. Humans can generally sense sounds at frequencies between 20 and 20,000 cycles per second, or hertz (Hz)—although this range shrinks as a person ages. Prolonged exposure to loud noises within the audible range has long been known to cause hearing loss over time. But establishing the effect of sounds with frequencies under about 250 Hz has been harder. Even though they’re above the lower limit of 20 Hz, these low-frequency sounds tend to be either inaudible or barely audible, and people don’t always know when they’re exposed to them. For the new study, neurobiologist Markus Drexl and colleagues at the Ludwig Maximilian University in Munich, Germany, asked 21 volunteers with normal hearing to sit inside soundproof booths and then played a 30-Hz sound for 90 seconds. The deep, vibrating noise, Drexl says, is about what you might hear “if you open your car windows while you’re driving fast down a highway.” Then, they used probes to record the natural activity of the ear after the noise ended, taking advantage of a phenomenon dubbed spontaneous otoacoustic emissions (SOAEs), in which the healthy human ear itself emits faint whistling sounds. © 2014 American Association for the Advancement of Science
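The exposure itself is simple to reproduce in software. The Python snippet below is a rough sketch only — not the researchers' actual stimulus code, and with an assumed sample rate and amplitude rather than the calibrated dB SPL level used in the study — that writes a 90-second, 30-Hz sine tone to a WAV file:

    # Minimal sketch of the study's stimulus: a 90-second, 30 Hz sine tone.
    # Sample rate and amplitude are illustrative assumptions, not study values.
    import wave
    import numpy as np

    SAMPLE_RATE = 44100   # samples per second (assumed; CD quality)
    FREQ_HZ = 30.0        # low-frequency tone described in the article
    DURATION_S = 90.0     # 90-second exposure, as described in the article
    AMPLITUDE = 0.5       # fraction of full scale (the study used calibrated dB SPL)

    t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
    signal = AMPLITUDE * np.sin(2 * np.pi * FREQ_HZ * t)

    # Convert to 16-bit PCM and write a mono WAV file.
    pcm = (signal * 32767).astype(np.int16)
    with wave.open("tone_30hz.wav", "wb") as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(pcm.tobytes())

Note that most consumer speakers and headphones cannot reproduce 30 Hz at a meaningful level — one reason such exposures tend to go unnoticed in daily life.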

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 20144 - Posted: 10.01.2014

By JOHN ROGERS. LOS ANGELES (AP) — The founder of a Los Angeles-based nonprofit that provides free music lessons to low-income students from gang-ridden neighborhoods began to notice several years ago a hopeful sign: Kids were graduating from high school and heading off to UCLA, Tulane and other big universities. That’s when Margaret Martin asked how the children in the Harmony Project were beating the odds. Researchers at Northwestern University in Illinois believe that the students’ music training played a role in their educational achievement, helping, as Martin noticed, 90 percent of them graduate from high school in neighborhoods where 50 percent or more do not. A two-year study of 44 children in the program shows that the training changes the brain in ways that make it easier for youngsters to process sounds, according to results reported in Tuesday’s edition of The Journal of Neuroscience. That increased ability, the researchers say, is linked directly to improved skills in such subjects as reading and speech. But there is one catch: People have to actually play an instrument to get smarter. They can’t just crank up the tunes on their iPod. Nina Kraus, the study’s lead researcher and director of Northwestern’s auditory neuroscience laboratory, compared the difference to that of building up one’s body through exercise. “I like to say to people: You’re not going to get physically fit just watching sports,” she said.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 20025 - Posted: 09.03.2014

Hearing voices is an experience that is very distressing for many people. Voices – or “auditory verbal hallucinations” – are one of the most common features of schizophrenia and other psychiatric disorders. But for a small minority of people, voice-hearing is a regular part of their lives, an everyday experience that isn’t associated with being unwell. It is only in the past 10 years that we have begun to understand what might be going on in “non-clinical” voice-hearing. Most of what we know comes from a large study conducted by Iris Sommer and colleagues at UMC Utrecht in the Netherlands. In 2006 they launched a nationwide attempt to find people who had heard voices before but didn’t have any sort of psychiatric diagnosis. From an initial response of over 4,000 people, they eventually identified a sample of 103 who heard voices at least once a month, but didn’t have psychosis. Their voice-hearing was also not caused by misuse of drugs or alcohol. Twenty-one of the participants were also given an MRI scan. When this group was compared with voice-hearers who did have psychosis, many of the same brain regions were active for both groups while they were experiencing auditory hallucinations, including the inferior frontal gyrus (involved in speech production) and the superior temporal gyrus (linked to speech perception). Subsequent studies with the same non-clinical voice-hearers have also highlighted differences in brain structure and functional connectivity (the synchronisation between different brain areas) compared with people who don’t hear voices. These results suggest that, on a neural level, the same sort of thing is going on in clinical and non-clinical voice-hearing. We know from first-person reports that the voices themselves can be quite similar, in terms of how loud they are, where they are coming from, and whether they speak in words or sentences. © 2014 Guardian News and Media Limited

Related chapters from BP7e: Chapter 16: Psychopathology: Biological Basis of Behavior Disorders; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 12: Psychopathology: Biological Basis of Behavioral Disorders; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19958 - Posted: 08.14.2014

By NICHOLAS BAKALAR. A new study reports that caffeine intake is associated with a reduced risk for tinnitus — ringing or buzzing in the ears. Researchers tracked caffeine use and incidents of tinnitus in 65,085 women in the Nurses’ Health Study II. They were 30 to 44 and without tinnitus at the start of the study. Over the next 18 years, 5,289 developed the disorder. The women recorded their use of soda, coffee and tea (caffeinated and not), as well as intake of candy and chocolate, which can contain caffeine. The results appear in the August issue of The American Journal of Medicine. Compared with women who consumed less than 150 milligrams of caffeine a day (roughly the amount in an eight-ounce cup of coffee), those who had 450 to 599 milligrams a day were 15 percent less likely to have tinnitus, and those who consumed 600 milligrams or more were 21 percent less likely. The association persisted after controlling for other hearing problems, hypertension, diabetes, use of nonsteroidal anti-inflammatory drugs (NSAIDs), a history of depression and other factors. Decaffeinated coffee consumption had no effect on tinnitus risk. “We can’t conclude that caffeine is a cure for tinnitus,” said the lead author, Dr. Jordan T. Glicksman, a resident physician at the University of Western Ontario. “But our results should provide some assurance to people who do drink caffeine that it’s reasonable to continue doing so.” © 2014 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 19955 - Posted: 08.14.2014

By Ingrid Wickelgren. One important function of your inner ear is stabilizing your vision when your head is turning. When your head turns one way, your vestibular system moves your eyes in the opposite direction so that what you are looking at remains stable. To see for yourself how your inner ears make this adjustment, called the vestibulo-ocular reflex, hold your thumb upright at arm’s length. Shake your head back and forth about twice per second while looking at your thumb. See that your thumb remains in focus. Now create the same relative motion by swinging your arm back and forth about five inches at the same speed. Notice that your thumb is blurry. To see an object clearly, the image must remain stationary on your retina. When your head turns, your vestibular system very rapidly moves your eyes in the opposite direction to create this stability. When the thumb moves, your visual system similarly directs the eyes to follow, but the movement is too slow to track a fast-moving object, causing blur. © 2014 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 7: Vision: From Eye to Brain
Link ID: 19895 - Posted: 07.30.2014

By James Phillips. Our inner ear is a marvel. The labyrinthine vestibular system within it is a delicate, byzantine structure made up of tiny canals, crystals and pouches. When healthy, this system enables us to keep our balance and orient ourselves. Unfortunately, a study in the Archives of Internal Medicine found that 35 percent of adults over age 40 suffer from vestibular dysfunction. A number of treatments are available for vestibular problems. During an acute attack of vertigo, vestibular suppressants and antinausea medications can reduce the sensation of motion as well as nausea and vomiting. Sedatives can help patients sleep and rest. Anti-inflammatory drugs can reduce any damage from acute inflammation and antibiotics can treat an infection. If a structural change in the inner ear has loosened some of its particulate matter—for instance, if otolith (calcareous) crystals, which are normally in tilt-sensitive sacs, end up in the semicircular canals, making the canals tilt-sensitive—simple repositioning exercises in the clinic can shake the loose material, returning it where it belongs. After a successful round of therapy, patients no longer sense that they are tilting whenever they turn their heads. If vertigo is a recurrent problem, injecting certain medications can reduce or eliminate the fluctuating function in the affected ear. As a last resort, a surgeon can effectively destroy the inner ear—either by directly damaging the end organs or by cutting the eighth cranial nerve fibers, which carry vestibular information to the brain. The latter surgery involves removing a portion of the skull and shifting the brain sideways, so it is not for the faint of heart. © 2014 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19886 - Posted: 07.28.2014

by Claudia Caruana. Got that ringing in your ears? Tinnitus, the debilitating condition that plagued Beethoven and Darwin, affects roughly 10 per cent of the world's population, including 30 million people in the US alone. Now, a device based on vagus nerve stimulation promises to eliminate the sounds for good by retraining the brain. At the moment, many chronic sufferers turn to state-of-the-art hearing aids configured to play specific tones meant to cancel out the tinnitus. But these do not always work because they just mask the noise. The new device, developed by MicroTransponder in Dallas, Texas, works in an entirely different way. The Serenity System uses a transmitter connected to the vagus nerve in the neck – the vagus nerve connects the brain to many of the body's organs. The thinking goes that most cases of chronic tinnitus result from changes in the signals sent from the ear to neurons in the brain's auditory cortex. This device is meant to retrain those neurons to forget the annoying noise. To use the system, a person wears headphones and listens to computer-generated sounds. First, they listen to tones that trigger the tinnitus, then to tones at frequencies close to the problematic one. Meanwhile, the implant stimulates the vagus nerve with small pulses. The pulses trigger the release of chemicals that increase the brain's ability to reconfigure itself. The process has already worked in rats (Nature, doi.org/b63kt9) and in a small human trial this year, where it helped around half of the participants. "Vagus nerve stimulation takes advantage of the brain's neuroplasticity – the ability to reconfigure itself," says Michael Kilgard of the University of Texas at Dallas, a consultant to MicroTransponder. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19880 - Posted: 07.26.2014

Philip Ball. Lead guitarists usually get to play the flashy solos while the bass player gets only to plod to the beat. But this seeming injustice could have been determined by the physiology of hearing. Research published today in the Proceedings of the National Academy of Sciences suggests that people’s perception of timing in music is more acute for lower-pitched notes. Psychologist Laurel Trainor of McMaster University in Hamilton, Canada, and her colleagues say that their findings explain why in the music of many cultures the rhythm is carried by low-pitched instruments while the melody tends to be taken by the highest pitched. This is as true for the low-pitched percussive rhythms of Indian classical music and Indonesian gamelan as it is for the walking double bass of a jazz ensemble or the left-hand part of a Mozart piano sonata. Earlier studies have shown that people have better pitch discrimination for higher notes — a reason, perhaps, that saxophonists and lead guitarists often have solos at a squealing register. It now seems that rhythm works best at the other end of the scale. Trainor and colleagues used the technique of electroencephalography (EEG) — electrical sensors placed on the scalp — to monitor the brain signals of people listening to streams of two simultaneous piano notes, one high-pitched and the other low-pitched, at equally spaced time intervals. Occasionally, one of the two notes was played slightly earlier, by just 50 milliseconds. The researchers studied the EEG recordings for signs that the listeners had noticed. © 2014 Nature Publishing Group
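The timing-deviant paradigm described above is easy to sketch in code. The Python snippet below is illustrative only: the 50-millisecond shift comes from the article, while the inter-onset interval and deviant probability are assumed values, not parameters from the paper.

    # Sketch of a two-note timing-deviant schedule (illustrative, not the study's code).
    import random

    ISI_MS = 500           # assumed inter-onset interval between note pairs
    DEVIANT_SHIFT_MS = 50  # deviant notes arrive 50 ms early, per the article
    DEVIANT_PROB = 0.1     # assumed probability of a deviant on any given pair
    N_PAIRS = 20

    random.seed(1)  # reproducible example
    for i in range(N_PAIRS):
        base = i * ISI_MS
        high, low = base, base
        if random.random() < DEVIANT_PROB:
            # Advance either the high- or the low-pitched note by 50 ms.
            if random.random() < 0.5:
                high -= DEVIANT_SHIFT_MS
            else:
                low -= DEVIANT_SHIFT_MS
        tag = "" if high == low else "  <- deviant"
        print(f"pair {i:2d}: high onset {high:5d} ms, low onset {low:5d} ms{tag}")

Each printed "deviant" line marks a pair in which one note arrives 50 ms early — the events whose EEG mismatch responses the researchers compared between the high and low voices.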

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19776 - Posted: 07.01.2014

by Frank Swain. When it comes to personal electronics, it's difficult to imagine iPhones and hearing aids in the same sentence. I use both and know that hearing aids have a well-deserved reputation as deeply uncool lumps of beige plastic worn mainly by the elderly. Apple, on the other hand, is the epitome of cool consumer electronics. But the two are getting a lot closer. The first "Made for iPhone" hearing aids have arrived, allowing users to stream audio and data between smartphones and the device. It means hearing aids might soon be desirable, even to those who don't need them. A Bluetooth wireless protocol developed by Apple last year lets the prostheses connect directly to Apple devices, streaming audio and data while using a fraction of the power consumption of conventional Bluetooth. LiNX, made by ReSound, and Halo hearing aids, made by Starkey – both international firms – use the iPhone as a platform to offer users new features and added control over their hearing aids. "The main advantage of Bluetooth is that the devices are talking to each other, it's not just one way," says David Nygren, UK general manager of ReSound. This is useful as hearing aids have long suffered from a restricted user interface – there's not much room for buttons on a device the size of a kidney bean. This is a major challenge for hearing-aid users, because different environments require different audio settings. Some devices come with preset programmes, while others adjust automatically to what their programming suggests is the best configuration. This is difficult to get right, and often devices calibrated in the audiologist's clinic fall short in the real world. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19757 - Posted: 06.23.2014

Regina Nuzzo. Gene therapy delivered to the inner ear can help shrivelled auditory nerves to regrow — and in turn, improve bionic ear technology, researchers report today in Science Translational Medicine. The work, conducted in guinea pigs, suggests a possible avenue for developing a new generation of hearing prosthetics that more closely mimics the richness and acuity of natural hearing. Sound travels from its source to the ears, and eventually to the brain, through a chain of biological translations that convert air vibrations to nerve impulses. When hearing loss occurs, it’s usually because crucial links near the end of this chain — between the ear’s cochlear cells and the auditory nerve — are destroyed. Cochlear implants are designed to bridge this missing link in people with profound deafness by implanting an array of tiny electrodes that stimulate the auditory nerve. Although cochlear implants often work well in quiet situations, people who have them still struggle to understand music or follow conversations amid background noise. After long-term hearing loss, the ends of the auditory nerve bundles are often frayed and withered, so the electrode array implanted in the cochlea must blast a broad, strong signal to try to make a connection, instead of stimulating a more precise array of neurons corresponding to particular frequencies. The result is an ‘aural smearing’ that obliterates fine resolution of sound, akin to forcing a piano player to wear snow mittens or a portrait artist to use finger paints. To try to repair auditory nerve endings and help cochlear implants to send a sharper signal to the brain, researchers turned to gene therapy. Their method took advantage of the electrical impulses delivered by the cochlear-implant hardware, rather than viruses often used to carry genetic material, to temporarily turn inner-ear cells porous. This allowed DNA to slip in, says lead author Jeremy Pinyon, an auditory scientist at the University of New South Wales in Sydney, Australia. © 2014 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19533 - Posted: 04.24.2014

By KATHERINE BOUTON. Like almost all newborns in this country, Alex Justh was given a hearing test at birth. He failed, but his parents were told not to worry: He was a month premature and there was mucus in his ears. A month later, an otoacoustic emission test, which measures the response of hair cells in the inner ear, came back normal. Alex was the third son of Lydia Denworth and Mark Justh (pronounced Just), and at first they “reveled at what a sweet and peaceful baby he was,” Ms. Denworth writes in her new book, “I Can Hear You Whisper: An Intimate Journey Through the Science of Sound and Language,” being published this week by Dutton. But Alex began missing developmental milestones. He was slow to sit up, slow to stand, slow to walk. His mother felt a “vague uneasiness” at every delay. He seemed not to respond to questions, the kind one asks a baby: “Can you show me the cow?” she’d ask, reading “Goodnight Moon.” Nothing. No response. At 18 months Alex unequivocally failed a hearing test, but there was still fluid in his ears, so the doctor recommended a second test. It wasn’t until 2005, when Alex was 2 ½, that they finally realized he had moderate to profound hearing loss in both ears. This is very late to detect deafness in a child; the ideal time is before the first birthday. Alex’s parents took him to Dr. Simon Parisier, an otolaryngologist at New York Eye and Ear Infirmary, who recommended a cochlear implant as soon as possible. “Age 3 marked a critical juncture in the development of language,” Ms. Denworth writes. “I began to truly understand that we were not just talking about Alex’s ears. We were talking about his brain.” © 2014 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Language and Our Divided Brain
Link ID: 19485 - Posted: 04.15.2014

Nicola Davis. The moment when 40-year-old Joanne Milne, who has been deaf since birth, first hears sound is a heart-wrenching scene. Amateur footage showing her emotional reaction has taken social media by storm and touched viewers across the world, reinforcing the technological triumph of cochlear implants. It’s a story I have touched on before. Earlier this month I wrote about how cochlear implants changed the lives of the Campbells, whose children Alice and Oliver were born with the condition auditory neuropathy spectrum disorder (ANSD). Implants, together with auditory verbal therapy, have allowed them to embrace the hearing world. It was incredibly moving to glimpse the long and difficult journey this family had experienced, and the joy that hearing - a sense so many of us take for granted - can bring. Cochlear implants are not a ‘cure’ for deafness. They make use of electrodes to directly stimulate auditory nerve fibres in the cochlea of the inner ear, creating a sense of sound that is not the same as that which hearing people experience, but nevertheless allows users to perceive speech, develop language and often enjoy music. As an adult, Milne, who was born with the rare condition Usher syndrome, is unusual in receiving cochlear implants on both sides. Such bilateral implantation enables users to work out where sounds are coming from, enhances speech perception in bustling environments and means that should something go wrong with one device, the user isn’t cut off from the hearing world. © 2014 Guardian News

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19421 - Posted: 03.29.2014

By ANNE EISENBERG. People who strain to hear conversations in noisy places sometimes shun hearing appliances as telltale signs of aging. But what if those devices looked like wireless phone receivers? Some companies are betting that the high-tech look of a new generation of sound amplifiers will tempt people to try them. The new in-ear amps come with wireless technology and typically cost $300 to $500. The devices include directional microphones and can be fine-tuned by smartphone apps. Whatever you do, don’t call these amplifiers hearing aids. They are not considered medical devices like the ones overseen by the Food and Drug Administration and dispensed by professionals to aid those with impaired hearing. Rather, they are over-the-counter systems cleared by the F.D.A. for occasional use in situations when speech and other sounds are hard to discern — say, in a noisy restaurant or while bird-watching. “The market is proliferating with lots of devices not necessarily made for impaired hearing, but for someone who wants a boost in certain challenging conditions like lectures,” said Neil J. DiSarno, chief staff officer for audiology at the American Speech-Language-Hearing Association. Dr. DiSarno is among the many audiologists who strongly urge people to see a physician first, in order to rule out medical causes of hearing loss, which could vary from earwax to a tumor, rather than self-diagnosing and self-treating a condition. Carole Rogin, president of the Hearing Industries Association, a trade group, said the biggest problem with personal amplification products was that people might use them instead of seeking appropriate medical oversight. “Untreated hearing loss is not a benign condition,” she said. “We want people to do something about it as soon as they notice a problem,” rather than using these devices to mask a potentially dangerous condition. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19401 - Posted: 03.24.2014

By Lenny Bernstein. When your name is Leonard Bernstein, and you can’t play or sing a note, people are, understandably, a bit prone to noting this little irony. But now I have an explanation: My lack of musical aptitude is mostly genetic. Finnish researchers say they have found genes responsible for auditory response and neurocognitive processing that partially explain musical aptitude. They note “several genes mostly related to the auditory pathway, not only specifically to inner ear function, but also to neurocognitive processes.” The study was published in the March 11 issue of the journal “Molecular Psychiatry.” In an e-mail, one of the researchers, Irma Jarvela, of the University of Helsinki’s department of medical genetics, said heredity explains 60 percent of the musical ability passed down through families like Bach’s. The rest can be attributed to environment and training. Genes most likely are responsible for “better perception skills of different sounds,” Jarvela said. Feel free to cite this research at your next karaoke night. © 1996-2014 The Washington Post

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19385 - Posted: 03.20.2014

by Simon Makin. It brings new meaning to having an ear for music. Musical aptitude may be partly down to genes that determine the architecture of the inner ear. We perceive sound after vibrations in the inner ear are detected by "hair cells" and transmitted to the brain as electrical signals. There, the inferior colliculus integrates the signals with other sensory information before passing it on to other parts of the brain for processing. To identify gene variants associated with musical aptitude, Irma Järvelä at the University of Helsinki, Finland, and her colleagues analysed the genomes of 767 people assessed for their ability to detect small differences in the pitch and duration of sounds and in musical patterns. The team compared the combined test scores with the prevalence of common variations in the participants' DNA. Genetic variations most strongly associated with high scores were found near the GATA2 gene – involved in the development of the inner ear and the inferior colliculus. Another gene, PCDH15, plays a role in the hair cells' ability to convert sound into brain signals. Jan Schnupp, an auditory neuroscientist at the University of Oxford, cautions that these findings should not be taken as evidence that genes determine musical ability. He points to the case of the profoundly deaf girl featured in the film "Lost and Sound". She became a superb pianist despite only hearing the world through cochlear implants, after meningitis damaged her inner ear. "Her case clearly demonstrates that even severe biological disadvantages can often be overcome," he says. "She would do extremely poorly at the pitch discrimination task used in this study." © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19360 - Posted: 03.13.2014

By Allie Wilkinson. Vivaldi versus the Beatles. Both great. But your brain may be processing the musical information differently for each. That’s according to research in the journal NeuroImage. [Vinoo Alluri et al, From Vivaldi to Beatles and back: Predicting lateralized brain responses to music] For the study, volunteers had their brains scanned by functional MRI as they listened to two musical medleys containing songs from different genres. The scans identified brain regions that became active during listening. One medley included four instrumental pieces and the other consisted of songs from the B side of Abbey Road. Computer algorithms were used to identify specific aspects of the music, which the researchers were able to match with specific, activated brain areas. The researchers found that vocal and instrumental music get treated differently. While both hemispheres of the brain deal with musical features, the presence of lyrics shifts the processing of musical features to the left auditory cortex. These results suggest that the brain’s hemispheres are specialized for different kinds of sound processing. A finding revealed by what you might call instrumental analysis. © 2014 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19359 - Posted: 03.13.2014

by Laura Sanders. It truly pains me to bring you tired parents another round of “Is this bad for my baby?” But this week, a new study suggests that some white noise machines designed for babies can produce harmful amounts of sound. Before you despair about trashing your baby’s hearing, please keep in mind that like any study, the results are limited in what they can actually claim. And this one is no exception. I learned the power of white noise when Baby V and I ventured out to meet some new mamas for lunch. As I frantically tried to reverse the ensuing meltdown, another mom came over with her phone. “Try this,” she said as she held up her phone and blasted white noise. Lo and behold, her black magic worked. Instantly, Baby V snapped to attention, stopped screaming and stared wide-eyed at the dark wizardry that is the White Noise Lite app. Since then, I learned that when all else failed, the oscillating fan setting could occasionally jolt Baby V out of a screamfest. In general, I didn’t leave the noise on for long. It was annoying, and more importantly, it stopped working after the novelty wore off. But lots of parents do rely on white noise to soothe their babies and help them sleep through the night. These machines are recommended on top parenting websites by top pediatricians, parenting bloggers and, most convincingly, all of the other parents you know. Use liberally, the Internet experts recommend. To reap the benefits, white noise machines should be played all night long for at least the entire first year, many people think. And don’t be shy: The noise should be louder than you think. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 19315 - Posted: 03.03.2014

There is no biological cure for deafness—yet. We detect sound using sensory cells sporting microscopic hairlike projections, and when these so-called hair cells deep inside the inner ear are destroyed by illness or loud noise, they are gone forever. Or so scientists thought. A new study finds specific cells in the inner ear of newborn mice that regenerate these sensory cells—even after damage, potentially opening up a way to treat deafness in humans. Researchers knew that cells in the inner ear below hair cells—known as supporting cells—can become the sensory cells themselves when stimulated by a protein that blocks Notch signaling, which is an important mechanism for cell communication. Albert Edge, a stem cell biologist at Harvard Medical School in Boston, and his colleagues attempted to identify the exact type of supporting cells that transform into sensory ones and fill in the gaps left by the damaged cells. The researchers removed the organ of Corti, which is housed within a seashell-shaped cavity called the cochlea and contains sensory hair cells, from newborn mice and kept the cells alive in culture plates. They damaged the hair cells using the antibiotic gentamicin, which destroys their sound-sensing projections. When they examined the organ of Corti under the microscope, they saw that small numbers of hair cells had regenerated on their own. But if they blocked Notch signaling, they saw even more regenerated hair cells, the team reports today in Stem Cell Reports. The number that developed varied, but in the base of the cochlea, where the tissue received the most damage, hair cell numbers returned to about 40% of the original. “It’s interesting and encouraging that they are capable of regenerating,” Edge says. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19281 - Posted: 02.22.2014

Adrienne LaFrance. For the better part of the past decade, Mark Kirby has been pouring drinks and booking gigs at the 55 Bar in New York City's Greenwich Village. The cozy dive bar is a neighborhood staple for live jazz that opened on the eve of Prohibition in 1919. It was the year Congress agreed to give American women the right to vote, and jazz was still in its infancy. Nearly a century later, the den-like bar is an anchor to the past in a city that's always changing. For Kirby, every night of work offers the chance to hear some of the liveliest jazz improvisation in Manhattan, an experience that's a bit like overhearing a great conversation. "There is overlapping, letting the other person say their piece, then you respond," Kirby told me. "Threads are picked up then dropped. There can be an overall mood and going off on tangents." The idea that jazz can be a kind of conversation has long been an area of interest for Charles Limb, an otolaryngological surgeon at Johns Hopkins. So Limb, a musician himself, decided to map what was happening in the brains of musicians as they played. He and a team of researchers conducted a study that involved putting a musician in a functional MRI machine with a keyboard, and having him play a memorized piece of music and then a made-up piece of music as part of an improvisation with another musician in a control room. What researchers found: The brains of jazz musicians who are engaged with other musicians in spontaneous improvisation show robust activation in the same brain areas traditionally associated with spoken language and syntax, while brain areas linked to meaning shut down — in other words, this music is syntactic, not semantic. Improvisational jazz conversations "take root in the brain as a language," Limb said. © 2014 by The Atlantic Monthly Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Language and Our Divided Brain
Link ID: 19275 - Posted: 02.20.2014