Links for Keyword: Language

Links 21 - 40 of 689

Alejandra Marquez Janse & Christopher Intagliata Imagine you're moving to a new country on the other side of the world. Besides the geographical and cultural changes, you will find that a key difference is the language. But will your pets notice the difference? It was a question that nagged at Laura Cuaya, a brain researcher at the Neuroethology of Communication Lab at Eötvös Loránd University in Budapest. "When I moved from Mexico to Hungary to start my post-doc research, all was new for me. Obviously, here, people in Budapest speak Hungarian. So you've had a different language, completely different for me," she said. The language was also new to her two dogs: Kun Kun and Odín. "People are super friendly with their dogs [in Budapest]. And my dogs, they are interested in interacting with people," Cuaya said. "But I wonder, did they also notice people here ... spoke a different language?" Cuaya set out to find the answer. She and her colleagues designed an experiment with 18 volunteer dogs — including her two border collies — to see if they could differentiate between two languages. Kun Kun and Odín were used to hearing Spanish; the other dogs, Hungarian. The dogs sat still in an MRI machine while listening to an excerpt from the story The Little Prince. They heard one version in Spanish and another in Hungarian. Then the scientists analyzed the dogs' brain activity. © 2022 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28145 - Posted: 01.08.2022

Jon Hamilton When baby mice cry, they do it to a beat that is synchronized to the rise and fall of their own breath. It's a pattern that researchers say could help explain why human infants can cry at birth — and how they learn to speak. Mice are born with a cluster of cells in the brainstem that appears to coordinate the rhythms of breathing and vocalizations, a team reports in the journal Neuron. If similar cells exist in human newborns, they could serve as an important building block for speech: the ability to produce one or many syllables between each breath. The cells also could explain why so many human languages are spoken at roughly the same tempo. "This suggests that there is a hardwired network of neurons that is fundamental to speech," says Dr. Kevin Yackle, the study's senior author and a researcher at the University of California, San Francisco. Scientists who study human speech have spent decades debating how much of our ability is innate and how much is learned. The research adds to the evidence that human speech relies — at least in part — on biological "building blocks" that are present from birth, says David Poeppel, a professor of psychology and neural science at New York University who was not involved in the study. But "there is just a big difference between a mouse brain and a human brain," Poeppel says. So the human version of this building block may not look the same. © 2022 npr

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 8: Hormones and Sex
Link ID: 28144 - Posted: 01.08.2022

Daisy Yuhas Billions of people worldwide speak two or more languages. (Though the estimates vary, many sources assert that more than half of the planet is bilingual or multilingual.) One of the most common experiences for these individuals is a phenomenon that experts call “code switching,” or shifting from one language to another within a single conversation or even a sentence. This month Sarah Frances Phillips, a linguist and graduate student at New York University, and her adviser Liina Pylkkänen published findings from brain imaging that underscore the ease with which these switches happen and reveal that the neurological patterns supporting this behavior are very similar to those seen in monolingual people. The new study reveals how code switching—which some multilingual speakers worry is “cheating,” in contrast to sticking to just one language—is normal and natural. Phillips spoke with Mind Matters editor Daisy Yuhas about these findings and why some scientists believe bilingual speakers may have certain cognitive advantages. Can you tell me a little bit about what drew you to this topic? I grew up in a bilingual household. My mother is from South Korea; my dad is African-American. So I grew up code switching a lot between Korean and English, as well as different varieties of English, such as African-American English and the more mainstream, standardized version. When you spend a lot of time code switching, and then you realize that this is something that is not well understood from a linguistic perspective, nor from a neurobiological perspective, you realize, “Oh, this is open territory.” © 2021 Scientific American

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28095 - Posted: 12.01.2021

Jordana Cepelewicz Hearing is so effortless for most of us that it’s often difficult to comprehend how much information the brain’s auditory system needs to process and disentangle. It has to take incoming sounds and transform them into the acoustic objects that we perceive: a friend’s voice, a dog barking, the pitter-patter of rain. It has to extricate relevant sounds from background noise. It has to determine that a word spoken by two different people has the same linguistic meaning, while also distinguishing between those voices and assessing them for pitch, tone and other qualities. According to traditional models of neural processing, when we hear sounds, our auditory system extracts simple features from them that then get combined into increasingly complex and abstract representations. This process allows the brain to turn the sound of someone speaking, for instance, into phonemes, then syllables, and eventually words. But in a paper published in Cell in August, a team of researchers challenged that model, reporting instead that the auditory system often processes sound and speech simultaneously and in parallel. The findings suggest that how the brain makes sense of speech diverges dramatically from scientists’ expectations, with the signals from the ear branching into distinct brain pathways at a surprisingly early stage in processing — sometimes even bypassing a brain region thought to be a crucial stepping-stone in building representations of complex sounds. Simons Foundation All Rights Reserved © 2021

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28054 - Posted: 10.27.2021

By Sam Roberts Washoe was 10 months old when her foster parents began teaching her to talk, and five months later they were already trumpeting her success. Not only had she learned words; she could also string them together, creating expressions like “water birds” when she saw a pair of swans and “open flower” to gain admittance to a garden. Washoe was a chimpanzee. She had been born in West Africa, probably orphaned when her mother was killed, sold to a dealer, flown to the United States for use in testing by the Air Force and adopted by R. Allen Gardner and his wife, Beatrix. She was raised as if she were a human child. She craved oatmeal with onions and pumpkin pudding. “The object of our research was to learn how much chimps are like humans,” Professor Gardner told Nevada Today, a University of Nevada publication, in 2007. “To measure this accurately, chimps would be needed to be raised as human children, and to do that, we needed to share a common language.” Washoe ultimately learned some 200 words, becoming what researchers said was the first nonhuman to communicate using sign language developed for the deaf. Professor Gardner, an ethologist who, with his wife, raised the chimpanzee for nearly five years, died on Aug. 20 at his ranch near Reno, Nev. He was 91. His death was announced by the University of Nevada, Reno, where he had joined the faculty in 1963 and conducted his research until he retired in 2010. When scientific journals reported in 1967 that Washoe (pronounced WA-sho), named after a county in Nevada, had learned to recognize and use multiple gestures and expressions in sign language, the news electrified the world of psychologists and ethologists who study animal behavior. © 2021 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 28013 - Posted: 10.02.2021

Christie Wilcox If it walks like a duck and talks like a person, it’s probably a musk duck (Biziura lobata)—the only waterfowl species known to learn sounds from other species. The Australian species’ facility for vocal learning had been mentioned anecdotally in the ornithological literature; now, a paper published September 6 in Philosophical Transactions of the Royal Society B reviews and discusses the evidence, which includes 34-year-old recordings made of a human-reared musk duck named Ripper engaging in an aggressive display while quacking “you bloody fool.” [Video: Ripper quacking "you bloody fool" while being provoked by a person separated from him by a fence.] The Scientist spoke with the lead author on the paper, Leiden University animal behavior researcher Carel ten Cate, to learn more about these unique ducks and what their unexpected ability reveals about the evolution of vocal learning. The Scientist: What is vocal learning? Carel ten Cate: Vocal learning, as it is used in this case, is that animals and humans, they learn their sounds from experience. So they learn from what they hear around them, which will usually be the parents, but it can also be other individuals. And if they don’t get that sort of exposure, then they will be unable to produce species-specific vocalizations, or in the human case, speech sounds and proper spoken language. © 1986–2021 The Scientist.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27987 - Posted: 09.13.2021

By Carolyn Wilke Babies may laugh like some apes a few months after birth before transitioning to chuckling more like human adults, a new study finds. Laughter links humans to great apes, our evolutionary kin (SN: 6/4/09). Human adults tend to laugh while exhaling (SN: 6/10/15), but chimpanzees and bonobos mainly laugh in two ways. One is like panting, with sound produced on both in and out breaths, and the other has outbursts occurring on exhales, like human adults. Less is known about how human babies laugh. So Mariska Kret, a cognitive psychologist at Leiden University in the Netherlands, and colleagues scoured the internet for videos with laughing 3- to 18-month-olds, and asked 15 speech sound specialists and thousands of novices to judge the babies’ laughs. After evaluating dozens of short audio clips, experts and nonexperts alike found that younger infants laughed during inhalation and exhalation, while older infants laughed more on the exhale. That finding suggests that infants’ laughter becomes less apelike with age, the researchers report in the September Biology Letters. Humans start to laugh around 3 months of age, but early on, “it hasn’t reached its full potential,” Kret says. Both babies’ maturing vocal tracts and their social interactions may influence the development of the sounds, the researchers say.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 13: Memory and Learning
Link ID: 27983 - Posted: 09.11.2021

By Jonathan Lambert At least 65 million years of evolution separate humans and greater sac-winged bats, but these two mammals share a key feature of learning how to speak: babbling. Just as human infants babble their way from “da-da-da-da” to “Dad,” wild bat pups (Saccopteryx bilineata) learn the mating and territorial songs of adults by first babbling out the fundamental syllables of the vocalizations, researchers report in the Aug. 20 Science. These bats now join humans as the only clear examples of mammals that learn to make complex vocalizations through babbling. “This is a hugely important step forward in the study of vocal learning,” says Tecumseh Fitch, an evolutionary biologist at the University of Vienna not involved in the new study. “These findings suggest that there are deep parallels between how humans and young bats learn to control their vocal apparatus,” he says. The work could enable future studies that might allow researchers to peer deeper into the brain activity that underpins vocal learning. Before complex vocalizations, whether words or mating songs, can be spoken or sung, vocalizers must learn to articulate the syllables that make up a species’s vocabulary, says Ahana Fernandez, an animal behavior biologist at the Museum für Naturkunde in Berlin. “Babbling is a way of practicing,” and honing those vocalizations, she says. The rhythmic, repetitive “ba-ba-ba’s” and “ga-ga-ga’s” of human infants may sound like gibberish, but they are necessary exploratory steps toward learning how to talk. Seeing whether babbling is required for any animal that learns complex vocalizations necessitates looking in other species. © Society for Science & the Public 2000–2021.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 27957 - Posted: 08.21.2021

Lydia Denworth Lee Reeves always wanted to be a veterinarian. When he was in high school in the Washington, D.C., suburbs, he went to an animal hospital near his house on a busy Saturday morning to apply for a job. The receptionist said the doctor was too busy to talk. But Reeves was determined and waited. Three and a half hours later, after all the dogs and cats had been seen, the veterinarian emerged and asked Reeves what he could do for him. Reeves, who has stuttered since he was three years old, had trouble answering. “I somehow struggled out the fact that I wanted the job and he asked me what my name was,” he says. “I couldn’t get my name out to save my life.” The vet finally reached for a piece of paper and had Reeves write down his name and add his phone number, but he said there was no job available. “I remember walking out of that clinic that morning thinking that essentially my life was over,” Reeves says. “Not only was I never going to become a veterinarian, but I couldn’t even get a job cleaning cages.” More than 50 years have passed. Reeves, who is now 72, has gone on to become an effective national advocate for people with speech impairments, but the frustration and embarrassment of that day are still vivid. They are also emblematic of the complicated experience that is stuttering. Technically, stuttering is a disruption in the easy flow of speech, but the physical struggle and the emotional effects that often go with it have led observers to wrongly attribute the condition to defects of the tongue or voice box, problems with cognition, emotional trauma or nervousness, forcing left-handed children to become right-handed, and, most unfortunately, poor parenting. Freudian psychiatrists thought stuttering represented “oral-sadistic conflict,” whereas the behaviorists argued that labeling a child a stutterer would exacerbate the problem. Reeves’s parents were told to call no attention to his stutter—wait it out, and it would go away. © 2021 Scientific American,

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27942 - Posted: 08.11.2021

By Jonathan Lambert When one naked mole-rat encounters another, the accent of their chirps might reveal whether they’re friends or foes. These social rodents are famous for their wrinkly, hairless appearance. But hang around one of their colonies for a while, and you’ll notice something else — they’re a chatty bunch. Their underground burrows resound with near-constant chirps, grunts, squeaks and squeals. Now, computer algorithms have uncovered a hidden order within this cacophony, researchers report in the Jan. 29 Science. These distinctive chirps, which pups learn when they’re young, help the mostly blind, xenophobic rodents discern who belongs, strengthening the bonds that maintain cohesion in these highly cooperative groups. “Language is really important for extreme social behavior, in humans, dolphins, elephants or birds,” says Thomas Park, a biologist at the University of Illinois Chicago who wasn’t involved in the study. This work shows naked mole-rats (Heterocephalus glaber) belong in those ranks as well, Park says. Naked mole-rat groups seem more like ant or termite colonies than mammalian societies. Every colony has a single breeding queen who suppresses the reproduction of tens to hundreds of nonbreeding worker rats that dig elaborate subterranean tunnels in search of tubers in eastern Africa (SN: 10/18/04). Food is scarce, and the rodents vigorously attack intruders from other colonies. While researchers have long noted the rat’s raucous chatter, few actually studied it. “Naked mole-rats are incredibly cooperative and incredibly vocal, and no one has really looked into how these two features influence one another,” says Alison Barker, a neuroscientist at the Max Delbrück Center for Molecular Medicine in Berlin. © Society for Science & the Public 2000–2021.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27673 - Posted: 01.30.2021

By Lisa Sanders, M.D. “You need to call an ambulance,” the familiar voice from her doctor’s office urged the frightened 59-year-old woman. “Or should I do it for you?” No, she replied shakily. I can do it. The woman looked down at the phone in her hand; there were two of them. She closed one eye and the second phone disappeared. Then she dialed 911. It had been a hellish few days. Five days earlier, she noticed that she was having trouble walking. Her legs couldn’t or wouldn’t follow her brain’s instructions. She had to take these ungainly baby steps to get anywhere. Her muscles felt weak; her feet were inert blocks. Her hands shook uncontrollably. She vomited half a dozen times a day. The week before, she decided to stop drinking, and she recognized the shaking and vomiting as part of that process. The trouble walking, that was new. But that’s not why she called her doctor. The previous day, she was driving home and was just a block away when suddenly there were two of everything. Stone-cold sober and seeing double. There were two dotted lines identifying the middle of her quiet neighborhood street in South Portland, Maine. Two sets of curbs in front of two sets of sidewalks. She stopped the car, rubbed her eyes and discovered that the second objects slid back into the first when one eye stayed covered. She drove home with her face crinkled in an awkward wink. At home, she immediately called her doctor’s office. They wanted to send an ambulance right then. But she didn’t have health insurance. She couldn’t afford either the ambulance or the hospital. She would probably be better by the next day, she told the young woman on the phone. But the next day was the same. And when she called the doctor’s office this time, the medical assistant’s suggestion that she call an ambulance made a lot more sense. The woman was embarrassed by the siren and flashing lights. Her neighbors would be worried. But she couldn’t deny the relief she felt as she watched the ambulance pull up. The E.M.T.s helped her to her feet and onto the stretcher, then drove her to nearby Northern Light Mercy Hospital. © 2020 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27607 - Posted: 12.05.2020

Amber Dance Gerald Maguire has stuttered since childhood, but you might not guess it from talking to him. For the past 25 years, he has been treating his disorder with antipsychotic medications not officially approved for the condition. Only with careful attention might you discern his occasional stumble on multisyllabic words like "statistically" and "pharmaceutical." Maguire has plenty of company: More than 70 million people worldwide, including about 3 million Americans, stutter — they have difficulty with the starting and timing of speech, resulting in halting and repetition. That number includes approximately 5 percent of children (many of whom outgrow the condition) and 1 percent of adults. Their numbers include presidential candidate Joe Biden, deep-voiced actor James Earl Jones, and actress Emily Blunt. Though they and many others, including Maguire, have achieved career success, stuttering can contribute to social anxiety and draw ridicule or discrimination. Maguire, a psychiatrist at the University of California, Riverside, has been treating people who stutter, and researching potential treatments, for decades. He's now embarking on a clinical trial of a new medication, ecopipam, that streamlined speech and improved quality of life in a small pilot study in 2019. Others, meanwhile, are delving into the root causes of stuttering. In past decades, therapists mistakenly attributed stuttering to defects of the tongue and voice box, to anxiety, trauma, or even poor parenting — and some still do. Yet others have long suspected that neurological problems might underlie stuttering, says J. Scott Yaruss, a speech-language pathologist at Michigan State University. The first data to back up that hunch came in 1991, when researchers reported altered blood flow in the brains of people who stuttered. Since then research has made it more apparent that stuttering is all in the brain. "We are in the middle of an absolute explosion of knowledge being developed about stuttering," Yaruss says. © 2020 The Week Publications Inc.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27565 - Posted: 11.04.2020

By Benedict Carey For a couple of minutes on Thursday, the sprawling, virtual Democratic National Convention seemed to hold its collective breath as 13-year-old Brayden Harrington of Concord, N.H., addressed the nation from his bedroom, occasionally stumbling on his words. “I’m a regular kid,” he said into a home camera, and a recent meeting with the candidate “made me feel confident about something that has bothered me my whole life.” Joe Biden and Mr. Harrington have both had to manage stuttering, and the sight of the teenager openly balking on several words, including “stutter,” was a striking reminder of how the speech disorder can play havoc with sociability, relationships, even identity. Movies like “The King’s Speech,” and books like Philip Roth’s “American Pastoral,” explore how consequential managing the disorder can be, just as Mr. Biden’s own story does. How many people stutter? The basic numbers are known: About one in 10 children will exhibit some evidence of a stutter — it usually starts between ages 2 and 7 — and 90 percent of them will grow out of it before adulthood. Around 1 percent of the population carries the speech problem for much of their lives. For reasons not understood, boys are twice as likely to stutter, and nearly four times as likely to continue doing so into adulthood. And it is often anxiety that triggers bursts of verbal stumbling — which, in turn, create a flood of self-conscious stress. When Mr. Harrington got stuck for a couple of seconds on the “s” in “stutter,” he turned his head and his eyes fluttered — an embodiment of physical and mental effort — before saying, “It is really amazing that someone like me could get advice from” a presidential candidate. About half of children who stutter are related to someone else who does, but it is impossible to predict who will develop the speech disorder. There are no genes for stuttering, and scientists do not know what might happen after conception, during development, that predisposes children to struggle with speaking in this way. © 2020 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 27431 - Posted: 08.22.2020

By Bruce Bower An aptitude for mentally stringing together related items, often cited as a hallmark of human language, may have deep roots in primate evolution, a new study suggests. In lab experiments, monkeys demonstrated an ability akin to embedding phrases within other phrases, scientists report June 26 in Science Advances. Many linguists regard this skill, known as recursion, as fundamental to grammar (SN: 12/4/05) and thus peculiar to people. But “this work shows that the capacity to represent recursive sequences is present in an animal that will never learn language,” says Stephen Ferrigno, a Harvard University psychologist. Recursion allows one to elaborate a sentence such as “This pandemic is awful” into “This pandemic, which has put so many people out of work, is awful, not to mention a health risk.” Ferrigno and colleagues tested recursion in both monkeys and humans. Ten U.S. adults recognized recursive symbol sequences on a nonverbal task and quickly applied that knowledge to novel sequences of items. To a lesser but still substantial extent, so did 50 U.S. preschoolers and 37 adult Tsimane’ villagers from Bolivia, who had no schooling in math or reading. Those results imply that an ability to grasp recursion must emerge early in life and doesn’t require formal education. Three rhesus monkeys lacked humans’ ease on the task. But after receiving extra training, two of those monkeys displayed recursive learning, Ferrigno’s group says. One of the two animals ended up, on average, more likely to form novel recursive sequences than about three-quarters of the preschoolers and roughly half of the Bolivian villagers. © Society for Science & the Public 2000–2020.
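
For a concrete picture of what a recursive symbol sequence looks like, the nesting can be illustrated with bracket pairs standing in for phrases embedded within phrases. The short Python sketch below is purely illustrative; the bracket symbols are invented here and do not reproduce the study's actual stimuli or procedure.

# Illustration only: center-embedded (recursive) nesting, with bracket
# pairs standing in for phrases embedded within phrases.
def embed(outer, inner):
    # Wrap an inner "phrase" inside an outer one: () around [] gives ([]).
    open_sym, close_sym = outer
    return [open_sym] + inner + [close_sym]

pairs = [("(", ")"), ("[", "]"), ("{", "}")]
sequence = []
for pair in pairs:
    sequence = embed(pair, sequence)  # each new pair encloses everything so far

print("".join(sequence))  # prints {[()]}: three levels of embedding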

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27332 - Posted: 06.27.2020

By Julia Hollingsworth, CNN (CNN) Laura Molles is so attuned to birds that she can tell where birds of some species are from just by listening to their song. She's not a real-world Dr. Dolittle. She's an ecologist in Christchurch, New Zealand, who specializes in a little-known area of science: bird dialects. While some birds are born knowing how to sing innately, many need to be taught how to sing by adults -- just like humans. Those birds can develop regional dialects, meaning their songs sound slightly different depending on where they live. Think Boston and Georgia accents, but for birds. Just as speaking the local language can make it easier for humans to fit in, speaking the local bird dialect can increase a bird's chances of finding a mate. And, more ominously, just as human dialects can sometimes disappear as the world globalizes, bird dialects can be shaped or lost as cities grow. The similarities between human language and bird song aren't lost on Molles -- or on her fellow bird dialect experts. "There are wonderful parallels," said American ornithologist Donald Kroodsma, the author of "Birdsong for the Curious Naturalist: Your Guide to Listening." "Culture, oral traditions -- it's all the same." For centuries, bird song has inspired poets and musicians, but it wasn't until the 1950s that scientists really started paying attention to bird dialects. One of the pioneers of the field was a British-born behaviorist named Peter Marler, who became interested in the subject when he noticed that chaffinches in the United Kingdom sounded different from valley to valley. At first, he transcribed bird songs by hand, according to a profile of him in a Rockefeller University publication. Later, he used a sonagram, which Kroodsma describes on his website as "a musical score for birdsong." ("You really need to see these songs to believe them, our eyes are so much better than our ears," Kroodsma said.) © 2020 Cable News Network. Turner Broadcasting System, Inc.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 27303 - Posted: 06.17.2020

Nicola Davis Reading minds has just come a step closer to reality: scientists have developed artificial intelligence that can turn brain activity into text. While the system currently works on neural patterns detected while someone is speaking aloud, experts say it could eventually aid communication for patients who are unable to speak or type, such as those with locked-in syndrome. “We are not there yet but we think this could be the basis of a speech prosthesis,” said Dr Joseph Makin, co-author of the research from the University of California, San Francisco. Writing in the journal Nature Neuroscience, Makin and colleagues reveal how they developed their system by recruiting four participants who had electrode arrays implanted in their brain to monitor epileptic seizures. These participants were asked to read aloud from 50 set sentences multiple times, including “Tina Turner is a pop singer”, and “Those thieves stole 30 jewels”. The team tracked their neural activity while they were speaking. This data was then fed into a machine-learning algorithm, a type of artificial intelligence system that converted the brain activity data for each spoken sentence into a string of numbers. To make sure the numbers related only to aspects of speech, the system compared sounds predicted from small chunks of the brain activity data with actual recorded audio. The string of numbers was then fed into a second part of the system which converted it into a sequence of words. © 2020 Guardian News & Media Limited
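
The pipeline described here, neural activity encoded into a sequence of numbers and then decoded into words, with an auxiliary check against predicted audio, has the shape of a standard sequence-to-sequence network. The PyTorch sketch below is a minimal illustration of that general architecture only; the class name, layer choices and sizes are assumptions made for illustration, not the model published in Nature Neuroscience.

# A minimal sketch of the encoder-decoder idea described above.
# All names and dimensions are illustrative, not the authors' actual code.
import torch
import torch.nn as nn

class BrainToText(nn.Module):
    def __init__(self, n_electrodes=256, n_hidden=128, vocab_size=250, n_mfcc=13):
        super().__init__()
        # Encoder: downsample the electrode time series and summarize it
        # into a sequence of latent vectors (the "string of numbers").
        self.downsample = nn.Conv1d(n_electrodes, n_hidden, kernel_size=12, stride=12)
        self.encoder = nn.GRU(n_hidden, n_hidden, batch_first=True)
        # Auxiliary head: predict speech audio features from the latents,
        # forcing them to track aspects of the spoken sound.
        self.audio_head = nn.Linear(n_hidden, n_mfcc)
        # Decoder: unroll the sentence word by word from the encoder state.
        self.embed = nn.Embedding(vocab_size, n_hidden)
        self.decoder = nn.GRU(n_hidden, n_hidden, batch_first=True)
        self.word_head = nn.Linear(n_hidden, vocab_size)

    def forward(self, neural, words):
        # neural: (batch, time, electrodes); words: (batch, sentence_len)
        z, state = self.encoder(self.downsample(neural.transpose(1, 2)).transpose(1, 2))
        audio_pred = self.audio_head(z)          # compared with recorded audio features
        out, _ = self.decoder(self.embed(words), state)
        word_logits = self.word_head(out)        # compared with the actual sentence
        return word_logits, audio_pred

In training, word_logits would be scored against the actual sentences and audio_pred against features of the recorded audio, mirroring the two comparisons described in the summary.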

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 27155 - Posted: 03.31.2020

By James Gorman There’s something about a really smart dog that makes it seem as if there might be hope for the world. China is in the midst of a frightening disease outbreak and nobody knows how far it will spread. The warming of the planet shows no signs of stopping; it reached a record 70 degrees in Antarctica last week. Not to mention international tensions and domestic politics. But there’s a dog in Norway that knows not only the names of her toys, but also the names of different categories of toys, and she learned all this just by hanging out with her owners and playing her favorite game. So who knows what other good things could be possible? Right? This dog’s name is Whisky. She is a Border collie that lives with her owners and almost 100 toys, so it seems like things are going pretty well for her. Even though I don’t have that many toys myself, I’m happy for her. You can’t be jealous of a dog. Or at least you shouldn’t be. Whisky’s toys have names. Most are dog-appropriate like “the colorful rope” or “the small Frisbee.” However, her owner, Helge O. Svela, said on Thursday that since the research was done, her toys have grown in number from 59 to 91, and he has had to give some toys “people” names, like Daisy or Wenger. “That’s for the plushy toys that resemble animals like ducks or elephants (because the names Duck and Elephant were already taken),” he said. During the research, Whisky proved in tests that she knew the names for at least 54 of her 59 toys. That’s not just the claim of a proud owner, and Mr. Svela is quite proud of Whisky, but the finding of Claudia Fugazza, an animal behavior researcher from Eötvös Loránd University in Budapest, who tested her. That alone makes Whisky part of a very select group, although not a champion. You may recall Chaser, another Border collie that knew the names of more than 1,000 objects and also knew words for categories of objects. And there are a few other dogs with shockingly large vocabularies, Dr. Fugazza said, including mixed breeds, and a Yorkie. These canine verbal prodigies are, however, few and far between. “It is really, really unusual, and it is really difficult to teach object names to dogs,” Dr. Fugazza said. © 2020 The New York Times Company

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 27063 - Posted: 02.21.2020

Thomas R. Sawallis and Louis-Jean Boë Sound doesn’t fossilize. Language doesn’t either. Even when writing systems have developed, they’ve represented full-fledged and functional languages. Rather than preserving the first baby steps toward language, they’re fully formed, made up of words, sentences and grammar carried from one person to another by speech sounds, like any of the perhaps 6,000 languages spoken today. So if you believe, as we linguists do, that language is the foundational distinction between humans and other intelligent animals, how can we study its emergence in our ancestors? Happily, researchers do know a lot about language – words, sentences and grammar – and speech – the vocal sounds that carry language to the next person’s ear – in living people. So we should be able to compare language with less complex animal communication. And that’s what we and our colleagues have spent decades investigating: How do apes and monkeys use their mouth and throat to produce the vowel sounds in speech? Spoken language in humans is an intricately woven string of syllables with consonants appended to the syllables’ core vowels, so mastering vowels was a key to speech emergence. We believe that our multidisciplinary findings push back the date for that crucial step in language evolution by as much as 27 million years. The sounds of speech Say “but.” Now say “bet,” “bat,” “bought,” “boot.” The words all begin and end the same. It’s the differences among the vowel sounds that keep them distinct in speech. © 2010–2019, The Conversation US, Inc.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26893 - Posted: 12.12.2019

By Viorica Marian Psycholinguistics is a field at the intersection of psychology and linguistics, and one of its recent discoveries is that the languages we speak influence our eye movements. For example, English speakers who hear candle often look at a candy because the two words share their first syllable. Research with speakers of different languages revealed that bilingual speakers not only look at words that share sounds in one language but also at words that share sounds across their two languages. When Russian-English bilinguals hear the English word marker, they also look at a stamp, because the Russian word for stamp is marka. Even more stunning, speakers of different languages differ in their patterns of eye movements when no language is used at all. In a simple visual search task in which people had to find a previously seen object among other objects, their eyes moved differently depending on what languages they knew. For example, when looking for a clock, English speakers also looked at a cloud. Spanish speakers, on the other hand, when looking for the same clock, looked at a present, because the Spanish names for clock and present—reloj and regalo—overlap at their onset. The story doesn’t end there. Not only do the words we hear activate other, similar-sounding words—and not only do we look at objects whose names share sounds or letters even when no language is heard—but the translations of those names in other languages become activated as well in speakers of more than one language. For example, when Spanish-English bilinguals hear the word duck in English, they also look at a shovel, because the translations of duck and shovel—pato and pala, respectively—overlap in Spanish. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 15: Language and Lateralization
Link ID: 26875 - Posted: 12.06.2019