Links for Keyword: Hearing

Links 61 - 80 of 539

By Zoe Cormier A study of two ancient hominins from South Africa suggests that changes in the shape and size of the middle ear occurred early in our evolution. Such alterations could have profoundly changed what our ancestors could hear — and perhaps how they could communicate. Palaeoanthropologist Rolf Quam of Binghamton University in New York state and his colleagues recovered and analysed a complete set of the three tiny middle-ear bones, or ossicles, from a 1.8-million-year-old specimen of Paranthropus robustus and an incomplete set of ossicles from Australopithecus africanus, which lived from about 3.3 million to around 2.1 million years ago. The ossicles are the smallest bones in the human body, and are rarely preserved intact in hominin fossils, Quam says. In both specimens, the team found that the malleus (the first in the chain of the three middle-ear bones) was human-like — proportionally smaller than those of our ape relatives. Its size would also imply a smaller eardrum. The similarity between the two species points to a “deep and ancient origin” of this feature, Quam says. “This could be like bipedalism: a defining characteristic of hominins.” It is hard to draw conclusions about hearing just from the shape of the middle-ear bones because the process involves so many different ear structures, as well as the brain itself. However, some studies have shown that the relative sizes of the middle-ear bones do affect what primates can hear. Genomic comparisons with gorillas have indicated that changes in the genes that code for these structures might also demarcate humans from apes. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18151 - Posted: 05.14.2013

by Michael Balter Researchers debate when language first evolved, but one thing is sure: Language requires us not only to talk but also to listen. A team of scientists now reports recovering the earliest known complete set of the three tiny middle ear bones—the malleus ("hammer"), incus ("anvil"), and stapes ("stirrup")—in a 2.0-million-year-old skull of Paranthropus robustus, a distant human relative found in South Africa. Reporting online today in the Proceedings of the National Academy of Sciences, the researchers found that the malleus of P. robustus, as well as one found earlier in the early human relative Australopithecus africanus, is similar to that of modern humans, whereas the two other ear bones most closely resemble those of existing African and Asian great apes. The team is not entirely sure what this precocious appearance of a human-like malleus means. But since the malleus is attached directly to the eardrum, the researchers suggest that it might be an early sign of the high human sensitivity to middle-range acoustic frequencies between 2 and 4 kilohertz—frequencies critical to spoken language, but which apes and other primates are much less sensitive to. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18150 - Posted: 05.14.2013

Ed Yong Many moths have evolved sensitive hearing that can pick up the ultrasonic probes of bats that want to eat them. But one species comes pre-adapted for anything that bats might bring to this evolutionary arms race. Even though its ears are extremely simple — a pair of eardrums on its flanks that each vibrate four receptor cells — it can sense frequencies up to 300 kilohertz, well beyond the range of any other animal and higher than any bat can squeak. “A lot of previous work has suggested that some bats have evolved calls that are out of the hearing range of the moths they are hunting. But this moth can hear the calls of any bat,” says James Windmill, an acoustical engineer at the University of Strathclyde, UK, who discovered the ability in the greater wax moth (Galleria mellonella). His study is published in Biology Letters. Windmill's collaborator Hannah Moir, a bioacoustician now at the University of Leeds, UK, played sounds of varying frequencies to immobilized wax moths. As the insects “listened”, Moir used a laser to measure the vibrations of their eardrums, and electrodes to record the activity of their auditory nerves. The moths were most sensitive to frequencies of around 80 kilohertz, the average frequency of their courtship calls. But when exposed to 300 kilohertz, the highest frequency the team tested, the insects' eardrums still vibrated and their neurons still fired. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18137 - Posted: 05.09.2013

By Jill U. Adams, Urban living may be harmful to your ears. That’s the takeaway from a study that found that more than eight in 10 New Yorkers were exposed to enough noise to damage their hearing. Perhaps more surprising was that so much of the city dwellers’ noise exposure was related not to noisy occupations but rather to voluntary activities such as listening to music. Which makes it hard for me not to worry that when my 16-year-old son is sitting nearby with his earbuds in, I can hear his music. There’s a pretty good chance that he’s got the volume up too loud — loud enough to potentially damage the sensory cells deep in his ear and eventually lead to permanent hearing loss. That’s according to Christopher Chang, an ear, nose and throat doctor at Fauquier ENT Consultants in Warrenton, who sees patients every day with hearing-related issues. “What he’s hearing is way too loud, because it’s concentrated directly into the ear itself,” he says of my son, adding that the anatomy of the ear magnifies sound as it travels through the ear canal. Listening to music through earbuds or headphones is just one way that many of us are routinely exposed to excessive noise. Mowing the lawn, going to a nightclub, riding the Metro, using a power drill, working in a factory, playing in a band and riding a motorcycle are activities that can lead to hearing problems. Aging is the primary cause of hearing loss; noise is second, says Brian Fligor, who directs the diagnostic audiology program at Boston Children’s Hospital, and it’s usually the culprit when the condition affects younger people. Approximately 15 percent of American adults between the ages of 20 and 69 have high-frequency hearing loss, probably the result of noise exposure, according to the National Institute on Deafness and Other Communication Disorders. © 1996-2013 The Washington Post

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18038 - Posted: 04.15.2013

By Meghan Rosen Whether you’re rocking out to Britney Spears or soaking up Beethoven’s classics, you may be enjoying music because it stimulates a guessing game in your brain. This mental puzzling explains why humans like music, a new study suggests. By looking at activity in just one part of the brain, researchers could predict roughly how much volunteers dug a new song. When people hear a new tune they like, a clump of neurons deep within their brains bursts into excited activity, researchers report April 12 in Science. The blueberry-sized cluster of cells, called the nucleus accumbens, helps make predictions and sits smack-dab in the “reward center” of the brain — the part that floods with feel-good chemicals when people eat chocolate or have sex. The berry-sized bit acts with three other regions in the brain to judge new jams, MRI scans showed. One region looks for patterns, another compares new songs to sounds heard before, and the third checks for emotional ties. As our ears pick up the first strains of a new song, our brains hustle to make sense of the music and figure out what’s coming next, explains coauthor Valorie Salimpoor, who is now at the Baycrest Rotman Research Institute in Toronto. And when the brain’s predictions are right (or pleasantly surprising), people get a little jolt of pleasure. All four brain regions work overtime when people listen to new songs they like, report the researchers, including Robert Zatorre of the Montreal Neurological Institute at McGill University. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 18030 - Posted: 04.13.2013

By Andrew Grant A simple plastic shell has cloaked a three-dimensional object from sound waves for the first time. With some improvements, a similar cloak could eventually be used to reduce noise pollution and to allow ships and submarines to evade enemy detection. The experiments appear March 20 in Physical Review Letters. “This paper implements a simplified version of invisibility using well-designed but relatively simple materials,” says Steven Cummer, an electrical engineer at Duke University, who was not involved in the study. Cummer proposed the concept of a sound cloak in 2007. Scientists’ recent efforts to render objects invisible to the eye are based on the fact that our perception of the world depends on the scattering of waves. We can see objects because waves of light strike them and scatter. Similarly, the Navy can detect faraway submarines because they scatter sound waves (sonar) that hit them. So for the last several years scientists have been developing cloaks that prevent scattering by steering light or sound waves around an object. The drawback of this approach, however, is that it requires complex synthetic materials that are difficult to produce. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17966 - Posted: 03.30.2013

By Susan Milius Hey evolution, thanks for nothing. When a mammal embryo develops, its middle ear appears to form in a pop-and-patch way that seals one end with substandard, infection-prone tissue. “The way evolution works doesn’t always create the most perfect, engineered structure,” says Abigail Tucker, a developmental biologist at King’s College London. “Definitely, it’s made an ear that’s slightly imperfect.” The mammalian middle ear catches sound and transfers it, using three tiny bones that jiggle against the eardrum, to the inner ear chamber. Those three bones — the hammer, anvil and stirrup — are a trait that distinguishes the group from other evolutionary lineages. Research in mouse embryos finds that the middle ear begins as a pouch of tissue. Then its lining ruptures at one end and the break lets in a different kind of tissue, which forms the tiny bones of the middle ear. This intruding tissue originates from what’s called the embryo’s neural crest, a population of cells that gives rise to bone and muscle. Neural crest tissue has never before been known to create a barrier in the body. Yet as the mouse middle ear forms, this tissue creates a swath of lining that patches the rupture, Tucker and colleague Hannah Thompson report in the March 22 Science. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17947 - Posted: 03.25.2013

By JANE E. BRODY Noise, not age, is the leading cause of hearing loss. Unless you take steps now to protect your ears, sooner or later many of you — and your children — will have difficulty understanding even ordinary speech. Tens of millions of Americans, including 12 percent to 15 percent of school-age children, already have permanent hearing loss caused by the everyday noise that we take for granted as a fact of life. “The sad truth is that many of us are responsible for our own hearing loss,” writes Katherine Bouton in her new book, “Shouting Won’t Help: Why I — and 50 Million Other Americans — Can’t Hear You.” The cause, she explains, is “the noise we blithely subject ourselves to day after day.” While there are myriad regulations to protect people who work in noisy environments, there are relatively few governing repeated exposure to noise outside the workplace, from portable music devices, rock concerts, hair dryers, sirens, lawn mowers, leaf blowers, vacuum cleaners, car alarms and countless other sources. We live in a noisy world, and every year it seems to get noisier. Ms. Bouton notes that the noise level inside Allen Fieldhouse at the University of Kansas often exceeds that of a chain saw. After poor service, noise is the second leading complaint about restaurants. Proprietors believe that people spend more on food and drink in bustling eateries, and many have created new venues or retrofitted old ones to maximize sound levels. Copyright 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17946 - Posted: 03.25.2013

By Rachel Ehrenberg BOSTON — For the first time, researchers have snapped pictures of mouse inner ear cells using an approach that doesn’t damage tissue or require elaborate dyes. The approach could offer a way to investigate hearing loss — the most common sensory deficit in the world — and may help guide the placement of cochlear devices or other implants. Inner ear damage and the deafness that results have long challenged scientists. The small, delicate cochlea and associated parts are encased in the densest bone in the body and lie near crucial anatomical landmarks, including the jugular vein, carotid artery and facial nerve, which make them difficult to access. With standard anatomical imaging techniques such as MRI, the inner ear just looks like a small grey blob. “We can’t biopsy it, we can’t image it, so it’s very difficult to figure out why people are deaf,” said ear surgeon and neuroscientist Konstantina Stankovic of the Massachusetts Eye and Ear Infirmary in Boston. Stankovic and her colleagues took a peek at inner ear cells using an existing technique called two-photon microscopy. This approach shoots photons at the target tissue, exciting particular molecules that then emit light. The researchers worked with mice exposed to 160 decibels of sound for two hours — levels comparable to the roaring buzz of a snowmobile or power tools. Then they removed the rodents’ inner ears, which include the spiraled, snail-shaped cochlea and other organs. Instead of cutting into the cochlea, the researchers peered through the “round window” — a middle ear opening covered by a thin membrane that leads to the cochlea. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 17820 - Posted: 02.19.2013

by Kelli Whitlock Burton Having a conversation in a noisy restaurant can be difficult. For many elderly adults, it's often impossible. But with a little practice, the brain can learn to hear above the din, a new study suggests. Age-related hearing loss can involve multiple components, such as the disappearance of sensory cells in the inner ear. But scientists say that part of the problem may stem from our brains. As we get older, our brains slow down—a natural part of aging called neural slowing. One side effect of this sluggishness is the inability to process the fast-moving parts of speech, particularly consonants at the beginning of words that sound alike, such as "b," "p," "g," and "d." Add background noise to the mix and "bad" may sound like "dad," says Nina Kraus, director of the Auditory Neuroscience Laboratory at Northwestern University in Evanston, Illinois. "Neural slowing especially affects our ability to hear in a noisy background because the sounds we need to hear are acoustically less salient and because noise also taxes our ability to remember what we hear." Building on animal studies that pointed to an increase in neural speed following auditory training, Kraus and colleagues enrolled 67 people aged 55 to 70 years old with no hearing loss or dementia in an experiment. Half the group completed about 2 months of exercises with Brain Fitness, a commercially available auditory training program by Posit Science. (The team has no connection to the company.) The exercises helped participants better identify different speech sounds and distinguish between similar-sounding syllables, such as "ba" or "ta." © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17800 - Posted: 02.14.2013

By KATHERINE BOUTON At a party the other night, a fund-raiser for a literary magazine, I found myself in conversation with a well-known author whose work I greatly admire. I use the term “conversation” loosely. I couldn’t hear a word he said. But worse, the effort I was making to hear was using up so much brain power that I completely forgot the titles of his books. A senior moment? Maybe. (I’m 65.) But for me, it’s complicated by the fact that I have severe hearing loss, only somewhat eased by a hearing aid and cochlear implant. Dr. Frank Lin, an otolaryngologist and epidemiologist at Johns Hopkins School of Medicine, describes this phenomenon as “cognitive load.” Cognitive overload is the way it feels. Essentially, the brain is so preoccupied with translating the sounds into words that it seems to have no processing power left to search through the storerooms of memory for a response. Over the past few years, Dr. Lin has delivered unwelcome news to those of us with hearing loss. His work looks “at the interface of hearing loss, gerontology and public health,” as he writes on his Web site. The most significant issue is the relation between hearing loss and dementia. In a 2011 paper in The Archives of Neurology, Dr. Lin and colleagues found a strong association between the two. The researchers looked at 639 subjects, ranging in age at the beginning of the study from 36 to 90 (with the majority between 60 and 80). The subjects were part of the Baltimore Longitudinal Study of Aging. None had cognitive impairment at the beginning of the study, which followed subjects for 18 years; some had hearing loss. Copyright 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17787 - Posted: 02.12.2013

by Jacob Aron The mystery of how our brains perceive sound has deepened, now that musicians have smashed a limit on sound perception imposed by a famous algorithm. On the upside, this means it should be possible to improve upon today's gold-standard methods for audio perception. Devised over 200 years ago, the Fourier transform is a mathematical process that splits a sound wave into its individual frequencies. It is the most common method for digitising analogue signals, and some had thought that brains make use of the same algorithm when turning the cacophony of noise around us into individual sounds and voices. To investigate, Jacob Oppenheim and Marcelo Magnasco of Rockefeller University in New York turned to the Gabor limit, a part of the Fourier transform's mathematics that makes the determination of pitch and timing a trade-off. Rather like the uncertainty principle of quantum mechanics, the Gabor limit states you can't accurately determine a sound's frequency and its duration at the same time. The pair reasoned that if people's hearing obeyed the Gabor limit, this would be a sign that they were using the Fourier transform. But when 12 musicians, some instrumentalists, some conductors, took a series of tests, such as judging slight changes in the pitch and duration of sounds at the same time, they beat the limit by up to a factor of 13. © Copyright Reed Business Information Ltd.
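For readers who want the quantitative version, the Gabor limit mentioned above is the standard time-frequency uncertainty relation; this statement is a textbook signal-processing sketch added for context, not a formula taken from the article itself:

\[
\sigma_t \, \sigma_f \;\geq\; \frac{1}{4\pi}
\]

where \(\sigma_t\) is the uncertainty in a sound's timing (duration) and \(\sigma_f\) the uncertainty in its frequency. Beating the limit "by a factor of 13" means the musicians' combined pitch-and-timing judgments corresponded to a product \(\sigma_t \sigma_f\) roughly thirteen times smaller than this bound would allow for a purely linear, Fourier-style analysis.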

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17775 - Posted: 02.09.2013

By C. CLAIBORNE RAY Q. Nearing 70, I have increasing difficulty hearing conversations, yet music in restaurants is too loud. Why? A. Age-related hearing loss, called presbycusis, is characterized by loss of hair cells in the base of the cochlea, or inner ear, that are attuned to capture and transmit high-frequency sounds, said Dr. Anil K. Lalwani, director of otology, neurotology and skull-base surgery at NewYork-Presbyterian Hospital/Columbia University Medical Center. Loss of high-frequency hearing leads to deterioration in the ability to distinguish words in conversation. Additionally, any noise in the environment leads to even greater loss in clarity of hearing. “Contrary to expectation, presbycusis is also associated with sensitivity to loud noises,” Dr. Lalwani said. “This is due to a poorly understood phenomenon called recruitment.” Normally, a specific sound frequency activates a specific population of hair cells located at a specific position within the cochlea. With hearing loss, this specificity is lost, and a much larger population of hair cells in the adjacent areas is “recruited” and also activated, producing sensitivity to noise. “Patients with presbycusis perceive an incremental increase in loudness to be much greater than those with normal hearing,” he said. “This explains why the elderly parent complains that ‘I am not deaf!’ ” when a son or daughter repeats a misheard sentence. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17760 - Posted: 02.05.2013

By James Gallagher Health and science reporter, BBC News A tiny "genetic patch" can be used to prevent a form of deafness which runs in families, according to animal tests. Patients with Usher syndrome have defective sections of their genetic code which cause problems with hearing, sight and balance. A study, published in the journal Nature Medicine, showed the same defects could be corrected in mice to restore some hearing. Experts said it was an "encouraging" start. There are many types of Usher syndrome tied to different errors in a patient's DNA - the blueprint for building every component of the body. One of those mutations runs in families descended from French settlers in North America. When they try to build a protein called harmonin, which is needed to form the tiny hairs in the ear that detect sound, they do not finish the job. It results in hearing loss at birth and has a similar effect in the eye, where it causes a gradual loss of vision. Scientists at the Rosalind Franklin University of Medicine and Science, in Chicago in the US, designed a small strip of genetic material which attaches to the mutation and keeps the body's factories building the protein. There has been something of a flurry of developments in restoring hearing in the past year. BBC © 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17759 - Posted: 02.05.2013

by Elizabeth Devitt Birds may not have big brains, but they know how to navigate. They wing around town and across continents with amazing accuracy, while we watch and wonder. Biologists believe that sight, smell, and an internal compass all contribute to avian orienteering. But none of these skills completely explains how birds fly long distances or return home from places they've never been. A new study proposes that the animals use infrasound—low-level background noise in our atmosphere—to fly by "images" they hear. These acoustical maps may also explain how other creatures steer. Scientists have long considered infrasound as a navigational cue for birds. But until U.S. Geological Survey geophysicist Jonathan Hagstrum in Menlo Park, California, became intrigued by the unexplained loss of almost 60,000 pigeons during a race from France to England in 1997, no one had pinpointed how the process worked. The race went bust when the birds' flight route crossed that of a Concorde jet, and Hagstrum wanted to know why. "When I realized the birds in that race were on the same flight path as the Concorde, I knew it had to be infrasound," he says. The supersonic plane laid down a sonic boom when most of the animals were flying across the English Channel. Normally, infrasound is generated when deep ocean waves send pressure waves reverberating into the land and atmosphere. Infrasound can come from other natural causes, such as earthquakes, or human-made events, such as the acceleration of the Concorde. The long, slow waves move across vast distances. Although humans can't hear them, birds and other animals are able to tune in. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17746 - Posted: 02.02.2013

By Laura Sanders Older people with hearing loss may suffer faster rates of mental decline. People with hearing trouble suffered meaningful impairments in memory, attention and learning about three years earlier than people with normal hearing, a study published online January 21 in JAMA Internal Medicine reveals. The finding bolsters the idea that hearing loss can have serious consequences for the brain, says Patricia Tun of Brandeis University in Waltham, Mass., who studies aging. “I’m hoping it will be a real wake-up call in terms of realizing the importance of hearing.” Compared with other senses, hearing is often overlooked, Tun says. “We are made to interact with language and to listen to each other, and it can have damaging effects if we don’t.” Frank Lin of Johns Hopkins School of Medicine and colleagues tested the hearing of 1,984 older adults. Most of the participants, who averaged 77 years old, showed some hearing loss — 1,162 volunteers had trouble hearing noises of less than 25 decibels, comparable to a whisper or rustling leaves. The volunteers’ deficits reflect the hearing loss in the general population: Over half of people older than 70 have trouble hearing. Over the next six years, these participants underwent mental evaluations that measured factors such as short-term memory, attention and the ability to quickly match numbers to symbols. Everybody got worse at the tasks as time wore on, but people with hearing loss had an especially sharp decline, the team found. On average, a substantial drop in performance would come about three years earlier for people with hearing loss. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17699 - Posted: 01.22.2013

by Jennifer Viegas The world’s largest archive of animal vocalizations and other nature sounds is now available online. This resource for students, filmmakers, scientists and curious wildlife aficionados took archivists a dozen years to assemble and make ready for the web. “In terms of speed and the breadth of material now accessible to anyone in the world, this is really revolutionary,” audio curator Greg Budney said in a press release, describing the milestone just achieved by the Macaulay Library archive at the Cornell Lab of Ornithology. “This is one of the greatest research and conservation resources at the Cornell Lab,” added Budney. “And through its digitization, we’ve swung the doors open on it in a way that wasn’t possible 10 or 20 years ago.” The collection goes way back to 1929. It contains nearly 150,000 digital audio recordings equaling more than 10 terabytes of data with a total run time of 7,513 hours. About 9,000 species are represented. Many are birds, but the collection also includes sounds of whales, elephants, frogs, primates and more. “Our audio collection is the largest and the oldest in the world,” explained Macaulay Library director Mike Webster. “Now, it’s also the most accessible. We’re working to improve search functions and create tools people can use to collect recordings and upload them directly to the archive. Our goal is to make the Macaulay Library as useful as possible for the broadest audience possible.” © 2013 Discovery Communications, LLC

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17697 - Posted: 01.19.2013

by Gretchen Vogel All you graying, half-deaf Def Leppard fans, listen up. A drug applied to the ears of mice deafened by noise can restore some hearing in the animals. By blocking a key protein, the drug allows sound-sensing cells that are damaged by noise to regrow. The treatment isn't anywhere near ready for use in humans, but the advance at least raises the prospect of restoring hearing to some deafened people. When it comes to hearing, hair cells in the inner ear, so named for their bristlelike appearance, keep the process humming along, converting mechanical vibrations caused by sound waves into nerve impulses. Unfortunately for people, loud noises can overwork and destroy the cells. And once they're gone, they're gone: Birds and fish can regenerate the inner ear hair cells, but mammals cannot. Researchers have been looking for ways to reactivate the regenerative potential that other species enjoy. In 2005, scientists used gene therapy to prompt the growth of hair cells in the inner ears of adult guinea pigs, which restored some hearing. However, the drug approach would potentially be much easier to use in the clinic, says Albert Edge, a stem cell biologist at the Massachusetts Eye and Ear Infirmary in Boston. He and his colleagues had previously found that a class of drugs called gamma-secretase inhibitors could prompt the growth of hair cells from inner ear stem cells growing in the lab. The lab also showed that the drugs worked by blocking the signaling of the Notch protein, which helps determine which cells become hair cells and which become support cells during ear development. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17663 - Posted: 01.10.2013

By Diane Mapes The video touched millions: An 8-month-old boy smiles with unabashed adoration at his mother as he hears her voice, seemingly for the first time, thanks to a new cochlear implant. Posted on YouTube in April of 2008, the video of "Jonathan's Cochlear Implant Activation" has received more than 3.6 million hits and thousands of comments from viewers, many clamoring for an update. Five-year-old Jonathan is “doing great,” according to his parents, Brigette and Mark Breaux of Houston, Texas. "He's in kindergarten and we're working on speech," Brigette, his 35-year-old stay-at-home mom, told TODAY.com. "He can hear everything that we say to him. It's of course artificial hearing but he can hear and understand what we're saying." After a bout with bacterial meningitis left him deaf, Jonathan Breaux regained hearing with the help of a cochlear implant, and is now a happy 5-year-old. "He's a flirt," adds Mark, a 36-year-old corporate controller. "He was chasing girls around the playground when Brigette went to see him for his class party. He's a handful." © 2013 NBCNews.com

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17656 - Posted: 01.07.2013