Links for Keyword: Hearing



Links 41 - 60 of 722

Oliver Wainwright Some whisper gently into the microphone, while tapping their nails along the spine of a book. Others take a bar of soap and slice it methodically into tiny cubes, letting the pieces clatter into a plastic tray. There are those who dress up as doctors and pretend to perform a cranial nerve exam, and the ones who eat food as noisily as they can, recording every crunch and slurp in 3D stereo sound. To an outsider, the world of ASMR videos can be a baffling, kooky place. In a fast-growing corner of the internet, millions of people are watching each other tap, rattle, stroke and whisper their way through hours of homemade videos, with the aim of being lulled to sleep, or in the hope of experiencing “the tingles” – AKA, the autonomous sensory meridian response. “It feels like a rush of champagne bubbles at the top of your head,” says curator James Taylor-Foster. “There’s a mild sense of euphoria and a feeling of deep calm.” Taylor-Foster has spent many hours trawling the weirdest depths of YouTube in preparation for a new exhibition, Weird Sensation Feels Good, at ArkDes, Sweden’s national centre for architecture and design, on what he sees as one of the most important creative movements to emerge from the internet. (Though the museum has been closed due to the coronavirus pandemic, the show will be available to view online.) It will be the first major exhibition about ASMR, a term that was coined a decade ago when cybersecurity expert Jennifer Allen was looking for a word to describe the warm effervescence she felt in response to certain triggers. She had tried searching the internet for things like “tingling head and spine” or “brain orgasm”. In 2009, she hit upon a post on a health message board titled WEIRD SENSATION FEELS GOOD. © 2020 Guardian News & Media Limited

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Higher Cognition
Link ID: 27169 - Posted: 04.04.2020

Jon Hamilton A song fuses words and music. Yet the human brain can instantly separate a song's lyrics from its melody. And now scientists think they know how this happens. A team led by researchers at McGill University reported in Science Thursday that song sounds are processed simultaneously by two separate brain areas – one in the left hemisphere and one in the right. "On the left side you can decode the speech content but not the melodic content, and on the right side you can decode the melodic content but not the speech content," says Robert Zatorre, a professor at McGill University's Montreal Neurological Institute. The finding explains something doctors have observed in stroke patients for decades, says Daniela Sammler, a researcher at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, who was not involved in the study. "If you have a stroke in the left hemisphere you are much more likely to have a language impairment than if you have a stroke in the right hemisphere," Sammler says. Moreover, brain damage to certain areas of the right hemisphere can affect a person's ability to perceive music. The study was inspired by songbirds, Zatorre says. Studies show that their brains decode sounds using two separate measures. One assesses how quickly a sound fluctuates over time. The other detects the frequencies in a sound. © 2020 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27082 - Posted: 02.28.2020

By Jillian Kramer Scientists often test auditory processing in artificial, silent settings, but real life usually comes with a background of sounds like clacking keyboards, chattering voices and car horns. Recently researchers set out to study such processing in the presence of ambient sound—specifically the even, staticlike hiss of white noise. Their result is counterintuitive, says Tania Rinaldi Barkat, a neuroscientist at the University of Basel: instead of impairing hearing, a background of white noise made it easier for mice to differentiate between similar tones. Barkat is senior author of the new study, published last November in Cell Reports. It is easy to distinguish notes on opposite ends of a piano keyboard. But play two side by side, and even the sharpest ears might have trouble telling them apart. This is because of how the auditory pathway processes the simplest sounds, called pure frequency tones: neurons close together respond to similar tones, but each neuron responds better to one particular frequency. The degree to which a neuron responds to a certain frequency is called its tuning curve. The researchers found that playing white noise narrowed neurons’ frequency tuning curves in mouse brains. “In a simplified way, white noise background—played continuously and at a certain sound level—decreases the response of neurons to a tone played on top of that white noise,” Barkat says. And by reducing the number of neurons responding to the same frequency at the same time, the brain can better distinguish between similar sounds. © 2020 Scientific American,
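The tuning-curve mechanism described above lends itself to a toy calculation. The sketch below is my own illustration, not code or data from the study: each neuron's tuning curve is modeled as a Gaussian over frequency, and the overlap between the population responses to two nearby tones stands in for how confusable they are. Narrowing the curves, as white noise reportedly does, reduces that overlap.

```python
import math

def tuning_curve(freq, center, width):
    """Gaussian tuning curve: a neuron's response to `freq`, peaking at `center`."""
    return math.exp(-((freq - center) ** 2) / (2 * width ** 2))

def population_response(freq, centers, width):
    """Responses of a whole population of neurons to one tone."""
    return [tuning_curve(freq, c, width) for c in centers]

def overlap(freq_a, freq_b, centers, width):
    """Cosine similarity of the two population responses;
    lower overlap means the tones are easier to tell apart."""
    a = population_response(freq_a, centers, width)
    b = population_response(freq_b, centers, width)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

centers = [i * 0.5 for i in range(41)]  # preferred frequencies (arbitrary units)

broad = overlap(9.8, 10.2, centers, width=2.0)   # broad tuning, quiet background
narrow = overlap(9.8, 10.2, centers, width=0.5)  # tuning narrowed by white noise

# Narrower tuning curves -> less overlap -> the two tones are more discriminable.
assert narrow < broad
print(f"overlap: broad tuning {broad:.3f}, narrow tuning {narrow:.3f}")
```

With broad curves the two population patterns are nearly identical; narrowing the curves pulls them apart, which is one way to picture why having fewer neurons respond to the same frequency makes similar sounds easier to distinguish.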

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27074 - Posted: 02.26.2020

By Kim Tingley Hearing loss has long been considered a normal, and thus acceptable, part of aging. It is common: Estimates suggest that it affects two out of three adults age 70 and older. It is also rarely treated. In the U.S., only about 14 percent of adults who have hearing loss wear hearing aids. An emerging body of research, however, suggests that diminished hearing may be a significant risk factor for Alzheimer’s disease and other forms of dementia — and that the association between hearing loss and cognitive decline potentially begins at very low levels of impairment. In November, a study published in the journal JAMA Otolaryngology–Head & Neck Surgery examined data on hearing and cognitive performance from more than 6,400 people 50 and older. Traditionally, doctors diagnose impairment when someone experiences a loss in hearing of at least 25 decibels, a somewhat arbitrary threshold. But for the JAMA study, researchers included hearing loss down to around zero decibels in their analysis and found that these milder losses still predicted correspondingly lower scores on cognitive tests. “It seemed like the relationship starts the moment you have imperfect hearing,” says Justin Golub, the study’s lead author and an ear, nose and throat doctor at the Columbia University Medical Center and NewYork-Presbyterian. Now, he says, the question is: Does hearing loss actually cause the cognitive problems it has been associated with, and if so, how? Preliminary evidence linking dementia and hearing loss was published in 1989 by doctors at the University of Washington, Seattle, who compared 100 patients with Alzheimer’s-like dementia with 100 demographically similar people without it and found that those who had dementia were more likely to have hearing loss, and that the extent of that loss seemed to correspond with the degree of cognitive impairment.
But that possible connection wasn’t rigorously investigated until 2011, when Frank Lin, an ear, nose and throat doctor at Johns Hopkins School of Medicine, and colleagues published the results of a longitudinal study that tested the hearing of 639 older adults who were dementia-free and then tracked them for an average of nearly 12 years, during which time 58 had developed Alzheimer’s or another cognitive impairment. They discovered that a subject’s likelihood of developing dementia increased in direct proportion to the severity of his or her hearing loss at the time of the initial test. The relationship seems to be “very, very linear,” Lin says, meaning that the greater the hearing deficit, the greater the risk a person will develop the condition. © 2020 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: Development of the Brain
Link ID: 27057 - Posted: 02.20.2020

By Jane E. Brody Every now and then I write a column as much to push myself to act as to inform and motivate my readers. What follows is a prime example. Last year in a column entitled “Hearing Loss Threatens Mind, Life and Limb,” I summarized the current state of knowledge about the myriad health-damaging effects linked to untreated hearing loss, a problem that afflicts nearly 38 million Americans and, according to two huge recent studies, increases the risk of dementia, depression, falls and even cardiovascular diseases. Knowing that my own hearing leaves something to be desired, the research I did for that column motivated me to get a proper audiology exam. The results indicated that a well-fitted hearing aid could help me hear significantly better in the movies, theater, restaurants, social gatherings, lecture halls, even in the locker room where the noise of hair dryers, hand dryers and swimsuit wringers often challenges my ability to converse with my soft-spoken friends. That was six months ago, and I’ve yet to go back to get that recommended hearing aid. Now, though, I have a new source of motivation. A large study has documented that even among people with so-called normal hearing, those with only slightly poorer hearing than perfect can experience cognitive deficits. That means a diminished ability to get top scores on standardized tests of brain function, like matching numbers with symbols within a specified time period. But while you may never need or want to do that, you most likely do want to maximize and maintain cognitive function: your ability to think clearly, plan rationally and remember accurately, especially as you get older. While under normal circumstances, cognitive losses occur gradually as people age, the wisest course may well be to minimize and delay them as long as possible and in doing so, reduce the risk of dementia. 
Hearing loss is now known to be the largest modifiable risk factor for developing dementia, exceeding that of smoking, high blood pressure, lack of exercise and social isolation, according to an international analysis published in The Lancet in 2017. © 2019 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26923 - Posted: 12.30.2019

By Carolyn Gramling Exceptionally preserved skulls of a mammal that lived alongside the dinosaurs may be offering scientists a glimpse into the evolution of the middle ear. The separation of the three tiny middle ear bones — known popularly as the hammer, anvil and stirrup — from the jaw is a defining characteristic of mammals. The evolutionary shift of those tiny bones, which started out as joints in ancient reptilian jaws and ultimately split from the jaw completely, gave mammals greater sensitivity to sound, particularly at higher frequencies (SN: 3/20/07). But finding well-preserved skulls from ancient mammals that can help reveal the timing of this separation is a challenge. Now, scientists have six specimens — four nearly complete skeletons and two fragmented specimens — of a newly described, shrew-sized critter dubbed Origolestes lii that lived about 123 million years ago. O. lii was part of the Jehol Biota, an ecosystem of ancient wetlands-dwellers that thrived between 133 million and 120 million years ago in what’s now northeastern China. The skulls on the nearly complete skeletons were so well preserved that they could be examined in 3-D, say paleontologist Fangyuan Mao of the Chinese Academy of Sciences in Beijing and colleagues. That analysis suggests that O. lii’s middle ear bones were fully separated from its jaw, the team reports online December 5 in Science. Fossils from an older, extinct line of mammals have shown separated middle ear bones, but this newfound species would be the first of a more recent lineage to exhibit this evolutionary advance. © Society for Science & the Public 2000–2019

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26880 - Posted: 12.07.2019

By Jade Wu What do the sounds of whispering, crinkling paper, and tapping fingernails have in common? What about the sight of soft paint brushes on skin, soap being gently cut to pieces, and hand movements like turning the pages of a book? Well, if you are someone who experiences the autonomous sensory meridian response—or ASMR, for short—you may recognize these seemingly ordinary sounds and sights as “triggers” for the ASMR experience. No idea what I’m talking about? Don’t worry, you’re actually in the majority. Most people, myself included, aren’t affected by these triggers. But what happens to those who are? What is the ASMR experience? It’s described as a pleasantly warm and tingling sensation that starts on the scalp and moves down the neck and spine. ASMR burst onto the Internet scene in 2007, according to Wikipedia, when a woman with the username “okaywhatever” described her experience of ASMR sensations in an online health discussion forum. At the time, there was no name for this weird phenomenon. But by 2010, someone called Jennifer Allen had named the experience, and from there, ASMR became an Internet sensation. Today, there are hundreds of ASMR YouTubers who collectively post over 200 videos of ASMR triggers per day, as reported by a New York Times article in April 2019. Some ASMR YouTubers have become bona fide celebrities with ballooning bank accounts, millions of fans, and enough fame to be stopped on the street for selfies. There’s been some controversy. Some people doubt whether this ASMR experience is “real,” or just the result of recreational drugs or imagined sensations. Some have chalked the phenomenon up to a symptom of loneliness among Generation Z, who get their dose of intimacy from watching strangers pretend to do their makeup without having to interact with real people. Some people are even actively put off by ASMR triggers. One of my listeners, Katie, said that most ASMR videos just make her feel agitated.
But another listener, Candace, shared that she has been unknowingly chasing ASMR since she was a child watching BBC. © 2019 Scientific American

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 26873 - Posted: 12.05.2019

Adam Miller · CBC News · New research is shedding light on how the brain interacts with music. It also highlights how challenging it is to study the issue effectively due to the highly personalized nature of how we interpret it. "Music is very subjective," says Dr. Daniel Levitin, a professor of neuroscience and music at McGill University in Montreal and author of the bestselling book This is Your Brain on Music. "People have their own preferences and their own experience and to some extent baggage that they bring to all of this — it is challenging." Levitin says there are more researchers studying the neurological effects of music now than ever before. From 1998 to 2008 there were only four media reports of evidence-based uses of music in research, while from 2009 to 2019 there were 185, Levitin said in a recent paper for the journal Music and Medicine. It's a "great time for music and brain research" because more people are well-trained and skilled at conducting rigorous experiments, according to Levitin. A new study by researchers in Germany and Norway used artificial intelligence to analyze levels of "uncertainty" and "surprise" in 80,000 chords from 745 commercially successful pop songs on the U.S. Billboard charts. The research, published Thursday in Current Biology, found that chords provided more pleasure to the listener both when there is uncertainty in anticipating what comes next, and from the surprise the music elicits when the chords deviate from expectations. ©2019 CBC/Radio-Canada

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26807 - Posted: 11.09.2019

By Jon Cohen On a lightly snowing Sunday evening, a potential participant in Denis Rebrikov’s controversial plans to create gene-edited babies meets with me at a restaurant in a Moscow suburb. She does not want to be identified beyond her patronymic, Yevgenievna. We sit at a corner table in an empty upstairs section of the restaurant while live Georgian music plays downstairs. Yevgenievna, in her late 20s, cannot hear it—or any music. She has been deaf since birth. But with the help of a hearing aid that’s linked to a wireless microphone, which she places on the table, she can hear some sounds, and she is adept at reading lips. She speaks to me primarily in Russian, through a translator, but she is also conversant in English. Yevgenievna and her husband, who is partially deaf, want to have children who will not inherit hearing problems. There is nothing illicit about our discussion: Russia has no clear regulations prohibiting Rebrikov’s plan to correct the deafness mutation in an in vitro fertilization (IVF) embryo. But Yevgenievna is uneasy about publicity. “We were told if we become the first couple to do this experiment we’ll become famous, and HBO already tried to reach me,” Yevgenievna says. “I don’t want to be well known like an actor and have people bother me.” She is also deeply ambivalent about the procedure itself, a pioneering and potentially risky use of the CRISPR genome editor. The couple met on vk.com, a Russian Facebook of sorts, in a chat room for people who are hearing impaired. Her husband could hear until he was 15 years old, and still gets by with hearing aids. They have a daughter—Yevgenievna asks me not to reveal her age—who failed a hearing test at birth. Doctors initially believed it was likely a temporary problem produced by having a cesarean section, but 1 month later, her parents took her to a specialized hearing clinic. “We were told our daughter had zero hearing,” Yevgenievna says. 
“I was shocked, and we cried.” © 2019 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26732 - Posted: 10.22.2019

By Kelly Servick The brain has a way of repurposing unused real estate. When a sense like sight is missing, corresponding brain regions can adapt to process new input, including sound or touch. Now, a study of blind people who use echolocation—making clicks with their mouths to judge the location of objects when sound bounces back—reveals a degree of neural repurposing never before documented. The research shows that a brain area normally devoted to the earliest stages of visual processing can use the same organizing principles to interpret echoes as it would to interpret signals from the eye. In sighted people, messages from the retina are relayed to a region at the back of the brain called the primary visual cortex. We know the layout of this brain region corresponds to the layout of physical space around us: Points that are next to each other in our environment project onto neighboring points on the retina and activate neighboring points in the primary visual cortex. In the new study, researchers wanted to know whether blind echolocators used this same type of spatial mapping in the primary visual cortex to process echoes. The researchers asked blind and sighted people to listen to recordings of a clicking sound bouncing off an object placed at different locations in a room while they lay in a functional magnetic resonance imaging scanner. The researchers found that expert echolocators—unlike sighted people and blind people who don’t use echolocation—showed activation in the primary visual cortex similar to that of sighted people looking at visual stimuli. © 2019 American Association for the Advancement of Science.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory and Learning
Link ID: 26663 - Posted: 10.02.2019

Ian Sample Science editor When Snowball the sulphur-crested cockatoo revealed his first dance moves a decade ago he became an instant sensation. The foot-tapping, head-bobbing bird boogied his way on to TV talkshows and commercials and won an impressive internet audience. But that was merely the start. A new study of the prancing parrot points to a bird at the peak of his creative powers. In performances conducted from the back of an armchair, Snowball pulled 14 distinct moves – a repertoire that would put many humans to shame. Footage of Snowball in action shows him smashing Another One Bites the Dust by Queen and Cyndi Lauper’s Girls Just Wanna Have Fun with a dazzling routine of head-bobs, foot-lifts, body-rolls, poses and headbanging. In one move, named the Vogue, Snowball moves his head from one side of a lifted foot to another. “We were amazed,” said Aniruddh Patel, a psychology professor at Tufts University in Medford, Massachusetts. “There are moves in there, like the Madonna Vogue move, that I just can’t believe.” “It seems that dancing to music isn’t purely a product of human culture. The fact that we see this in another animal suggests that if you have a brain with certain cognitive and neural capacities, you are predisposed to dance,” he added. It all started, as some things must, with the Backstreet Boys. In 2008, Patel, who has long studied the origins of musicality, watched a video on the internet of Snowball dancing in time to the band’s track Everybody. He contacted Irena Schulz, who owned the bird shelter where Snowball lived, and with her soon launched a study of Snowball’s dancing prowess. © 2019 Guardian News & Media Limited

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26400 - Posted: 07.09.2019

By Matthew Hutson LONG BEACH, CALIFORNIA—Spies may soon have another tool to carry out their shadowy missions: a new device that uses sound to “see” around corners. Previously, researchers developed gadgets that bounced light waves around corners to catch reflections and see things out of the line of sight. To see whether they could do something similar with sound, another group of scientists built a hardware prototype—a vertical pole adorned with off-the-shelf microphones and small car speakers. The speakers emitted a series of chirps, which bounced off a nearby wall at an angle before hitting a hidden object on another wall—a poster board cutout of the letter H. Scientists then moved their rig bit by bit, each time making more chirps, which bounced back the way they came, into the microphones. Using algorithms from seismic imaging, the system reconstructed a rough image of the letter H. The researchers also imaged a setup with the letters L and T and compared their acoustic results with an optical method. The optical method, which requires expensive equipment, failed to reproduce the more-distant L, and it took more than an hour, compared with just 4.5 minutes for the acoustic method. The researchers will present the work here Wednesday at the Computer Vision and Pattern Recognition conference. © 2019 American Association for the Advancement of Science
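The chirp-and-echo approach rests on simple time-of-flight arithmetic: a sound's round-trip delay, times the speed of sound, gives twice the path length. A minimal sketch of that relationship (the numbers are illustrative, not from the paper):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def path_length(round_trip_delay_s):
    """One-way path length implied by a round-trip echo delay."""
    return SPEED_OF_SOUND * round_trip_delay_s / 2

# An echo arriving 20 ms after the chirp implies a ~3.43 m one-way path.
print(f"{path_length(0.020):.2f} m")
```

Combining many such delay measurements from different rig positions is what lets the seismic-imaging algorithms triangulate the hidden object's shape.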

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26333 - Posted: 06.18.2019

Hannah Devlin Science correspondent A mind-controlled hearing aid that allows the wearer to focus on particular voices has been created by scientists, who say it could transform the ability of those with hearing impairments to cope with noisy environments. The device mimics the brain’s natural ability to single out and amplify one voice against background conversation. Until now, even the most advanced hearing aids work by boosting all voices at once, which can be experienced as a cacophony of sound for the wearer, especially in crowded environments. Nima Mesgarani, who led the latest advance at Columbia University in New York, said: “The brain area that processes sound is extraordinarily sensitive and powerful. It can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison.” This can severely hinder a wearer’s ability to join in conversations, making busy social occasions particularly challenging. Scientists have been working for years to resolve this problem, known as the cocktail party effect. The brain-controlled hearing aid appears to have cracked the problem using a combination of artificial intelligence and sensors designed to monitor the listener’s brain activity. The hearing aid first uses an algorithm to automatically separate the voices of multiple speakers. It then compares these audio tracks to the brain activity of the listener. Previous work by Mesgarani’s lab found that it is possible to identify which person someone is paying attention to, as their brain activity tracks the sound waves of that voice most closely. © 2019 Guardian News & Media Limited
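The decoding step described above (separate the voices, then ask which one the listener's brain activity tracks) can be sketched as a toy correlation decoder. This is my own simplified illustration, not the Columbia system: the "neural" signal here is synthetic, whereas the real device compares speech envelopes against actual brain recordings.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

def attended_speaker(neural, envelopes):
    """Index of the audio envelope that best matches the neural signal."""
    scores = [pearson(neural, env) for env in envelopes]
    return scores.index(max(scores))

# Synthetic amplitude envelopes for two separated speakers.
t = [i / 100 for i in range(500)]
speaker_a = [0.5 + 0.5 * math.sin(2 * math.pi * 3 * x) for x in t]
speaker_b = [0.5 + 0.5 * math.sin(2 * math.pi * 5 * x) for x in t]

# Pretend the listener attends to speaker B: the neural signal
# tracks B's envelope, plus noise.
random.seed(0)
neural = [b + random.gauss(0, 0.3) for b in speaker_b]

chosen = attended_speaker(neural, [speaker_a, speaker_b])
print("amplify speaker index:", chosen)
```

In the real device, a decision like this is what drives the amplification step, with the chosen voice boosted relative to the rest of the mix.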

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26247 - Posted: 05.18.2019

By Maggie Koerth-Baker Where is the loudest place in America? You might think New York City, or a major airport hub, or a concert you have suddenly become too old to appreciate. But that depends on what kind of noise you’re measuring. Sound is actually a physical thing. What we perceive as noise is the result of air molecules bumping into one another, like a Newton’s cradle toy. That movement eventually reaches our eardrums, which turn that tiny wiggle into an audible signal. But human ears can’t convert all molecular motion to sound. Sometimes the particles are jostling one another too fast. Sometimes they’re too slow. Sometimes, the motion is just happening in the wrong medium — through the Earth, say, instead of through the air. And when you start listening for the sounds we can’t hear, the loudest place in America can end up being right under your feet. Scientists have tools that can detect these “silent” waves, and they’ve found a lot of noise happening all over the U.S. Those noises are made by the cracking of rocks deep in the Earth along natural fault lines and the splashing of whitecaps on the ocean. But they’re also made by our factories, power plants, mines and military. “Any kind of mechanical process is going to generate energetic waves,” said Omar Marcillo, staff scientist at Los Alamos National Laboratory. “Some of that goes through the atmosphere as acoustic waves, and some goes through the ground as seismic waves.” Marcillo’s work focuses on the seismic. © 2019 ABC News Internet Ventures.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26227 - Posted: 05.11.2019

By Jed Gottlieb In 1983, The New York Times published a bombshell report about President Ronald Reagan: Starkey Laboratories had fitted the President, then 72, with a hearing aid. The news was welcomed by health professionals who reckoned it could help to reduce the stigma associated with hearing loss. At the time, one in three people over the age of 60 was thought to have hearing problems, though only around 20 percent who needed hearing aids used them. Indeed, Reagan’s handlers knew too well that the revelation risked making the president look like a feeble old man — and worse, someone ill-equipped to run the most powerful nation on earth. “Among Presidential advisers,” The New York Times noted, “Mr. Reagan’s use of a hearing aid revived speculation on whether his age would be an issue if he seeks re-election next year.” Reagan won re-election, of course, but nearly 40 years later, negative perceptions persist — and health advocates are more concerned than ever. Hearing loss, they say, is not just a functional disability affecting a subset of aging adults. With population growth and a boom in the global elderly population, the World Health Organization (WHO) now estimates that by 2050, more than 900 million people will have disabling hearing loss. A 2018 study of 3,316 children aged nine to 11, meanwhile, found that 14 percent already had signs of hearing loss themselves. While not conclusive, the study linked the loss to the rise of portable music players. Copyright 2019 Undark

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: Development of the Brain
Link ID: 26124 - Posted: 04.09.2019

Emily Conover Lasers can send sounds straight to a listener’s ear, like whispering a secret from afar. Using a laser tuned to interact with water vapor in the air, scientists created sounds in a localized spot that were loud enough to be picked up by human hearing if aimed near a listener’s ear. It’s the first time such a technique can be used safely around humans, scientists from MIT Lincoln Laboratory in Lexington, Mass., report in the Feb. 1 Optics Letters. At the wavelengths and intensities used, the laser won’t cause burns if it grazes eyes or skin. The scientists tested out the setup on themselves in the laboratory, putting their ears near the beam to pick up the sound. “You move your head around, and there’s a couple-inch zone where you go ‘Oh, there it is!’… It’s pretty cool,” says physicist Charles Wynn. The researchers also used microphones to capture and analyze the sounds. The work relies on a phenomenon called the photoacoustic effect, in which pulses of light are converted into sound when absorbed by a material, in this case, water vapor. Based on this effect, the researchers used two different techniques to make the sounds. The first technique, which involves rapidly ramping the intensity of the laser beam up and down, can transmit voices and songs. “You can hear the music really well; you can understand what people are saying,” says physicist Ryan Sullenberger, who coauthored the study along with Wynn and physicist Sumanth Kaushik. © Society for Science & the Public 2000 - 2019.

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25925 - Posted: 02.02.2019

By Jane E. Brody The earsplitting sound of ambulance sirens in New York City is surely hastening the day when I and many others repeatedly subjected to such noise will be forced to get hearing aids. I just hope this doesn’t happen before 2021 or so when these devices become available over-the-counter and are far less expensive and perhaps more effective than they are now. Currently, hearing aids and accompanying services are not covered by medical insurance, Medicare included. Such coverage was specifically excluded when the Medicare law was passed in 1965, a time when hearing loss was not generally recognized as a medical issue and hearing aids were not very effective, said Dr. Frank R. Lin, who heads the Cochlear Center for Hearing and Public Health at the Johns Hopkins Bloomberg School of Public Health. Now a growing body of research by his colleagues and others is linking untreated hearing loss to several costly ills, and the time has come for hearing protection and treatment of hearing loss to be taken much more seriously. Not only is poor hearing annoying and inconvenient for millions of people, especially the elderly. It is also an unmistakable health hazard, threatening mind, life and limb, that could cost Medicare much more than it would to provide hearing aids and services for every older American with hearing loss. Currently, 38.2 million Americans aged 12 or older have hearing loss, a problem that becomes increasingly common and more severe with age. More than half of people in their 70s and more than 80 percent in their 80s have mild to moderate hearing loss or worse, according to tests done by the National Health and Nutrition Examination Survey between 2001 and 2010. Two huge new studies have demonstrated a clear association between untreated hearing loss and an increased risk of dementia, depression, falls and even cardiovascular diseases. 
In a significant number of people, the studies indicate, uncorrected hearing loss itself appears to be the cause of the associated health problem. © 2018 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: Development of the Brain
Link ID: 25834 - Posted: 01.01.2019

By: Robert Zatorre, Ph.D. Human beings seem to have innate musicality. That is, the capacity to understand and derive pleasure from complex musical patterns appears to be culturally universal.1 Musicality is expressed very early in development.2 In this sense, music may be compared to speech—the other cognitively interesting way that we use sound. But whereas speech is most obviously important for communicating propositions or concepts, conveying such knowledge is not the primary function of music. Rather, it is music’s power to communicate emotions, moods, or affective mental states that seems beneficial to our quality of life. Which brings us to the question that forms the title of this article: why do we love music? On its face, there is no apparent reason why a sequence or pattern of sounds that has no specific propositional meaning should elicit any kind of pleasurable response. Yet music is widely considered to be among our greatest joys.3 Where does this phenomenon come from? There are several approaches to this question. A musicologist might have a very different answer than a social scientist. Since I’m a neuroscientist, I would like to address it from that perspective—recognizing that other perspectives may also offer valuable insights. An advantage of neuroscience is that we can relate our answer to established empirical findings and draw from two especially relevant domains: the neuroscience of auditory perception and of the reward system. To give away the punch line of my article, I believe that music derives its power from an interaction between these two systems, the first of which allows us to analyze sound patterns and make predictions about them, and the second of which evaluates the outcomes of these predictions and generates positive (or negative) emotions depending on whether the expectation was met, not met, or exceeded. © 2018 The Dana Foundation

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 25832 - Posted: 01.01.2019

Jennifer Leman Some moths aren’t so easy for bats to detect. The cabbage tree emperor moth has wings with tiny scales that absorb sound waves sent out by bats searching for food. That absorption reduces the echoes that bounce back to bats, allowing Bunaea alcinoe to avoid being so noticeable to the nocturnal predators, researchers report online November 12 in the Proceedings of the National Academy of Sciences. “They have this stealth coating on their body surfaces which absorbs the sound,” says study coauthor Marc Holderied, a bioacoustician at the University of Bristol in England. “We now understand the mechanism behind it.” Bats sense their surroundings using echolocation, sending out sound waves that bounce off objects and return as echoes picked up by the bats’ supersensitive ears (SN: 9/30/17, p. 22). These moths, without ears that might alert them to an approaching predator, have instead developed scales of a size, shape and thickness suited to absorbing the ultrasonic sound frequencies used by bats, the researchers found. The team shot ultrasonic sound waves at a single, microscopic scale and observed it transferring sound wave energy into movement. The scientists then simulated the process with a 3-D computer model that showed the scale absorbing up to 50 percent of the energy from sound waves. What’s more, it isn’t just wings that help such earless moths evade bats. Other moths in the same family as B. alcinoe also have sound-absorbing fur, the same researchers report online October 18 in the Journal of the Acoustical Society of America. © Society for Science & the Public 2000 - 2018

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25679 - Posted: 11.14.2018

By Jane E. Brody Jane R. Madell, a pediatric audiology consultant and speech-language pathologist in Brooklyn, N.Y., wants every parent with a child who is born hearing-impaired to know that it is now possible for nearly all children with severe hearing loss to learn to listen and speak as if their hearing were completely normal. “Children identified with hearing loss at birth and fitted with technology in the first weeks of life blend in so well with everyone else that people don’t realize there are so many deaf children,” she told me. With the appropriate hearing device and auditory training for children and their caregivers during the preschool years, even those born deaf “will have the ability to learn with their peers when they start school,” Dr. Madell said. “Eighty-five percent of such children are successfully mainstreamed. Parents need to know that listening and spoken language is a possibility for their children.” Determined to get this message out to all who learn their children lack normal hearing, Dr. Madell and Irene Taylor Brodsky produced a documentary, “The Listening Project,” to demonstrate the enormous help available through modern hearing assists and auditory training. Among the “stars” in the film, all of whom grew up deaf or severely hearing-impaired, are Dr. Elizabeth Bonagura, an obstetrician-gynecologist and surgeon; Jake Spinowitz, a musician; Joanna Lippert, a medical social worker; and Amy Pollick, a psychologist. All started out with hearing aids that helped them learn to speak and understand spoken language. But now all have cochlear implants that, as Ms. Lippert put it, “really revolutionized my world” when, at age 11, she became the first preteen to get a cochlear implant at New York University Medical Center. © 2018 The New York Times Company

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 25541 - Posted: 10.08.2018