Links for Keyword: Language

Links 1 - 20 of 636

By Sayuri Hayakawa, Viorica Marian As Emperor Akihito steps down from the Chrysanthemum Throne in Japan’s first abdication in 200 years, Naruhito officially becomes the new Emperor on May 1, 2019, ushering in a new era called Reiwa (令和; “harmony”). Japan’s tradition of naming eras reflects the ancient belief in the divine spirit of language. Kotodama (言霊; “word spirit”) is the idea that words have an almost magical power to alter physical reality. Through its pervasive impact on society, including its influence on superstitions and social etiquette, traditional poetry and modern pop songs, the word kotodama has, in a way, provided proof of its own concept. For centuries, many cultures have believed in the spiritual force of language. Over time, these ideas have extended from the realm of magic and mythology to become a topic of scientific investigation—ultimately leading to the discovery that language can indeed affect the physical world, for example, by altering our physiology. Our bodies evolve to adapt to our environments, not only over millions of years but also over the days and years of an individual’s life. For instance, off the coast of Thailand, there are children who can “see like dolphins.” Cultural and environmental factors have shaped how these sea nomads of the Moken tribe conduct their daily lives, allowing them to adjust their pupils underwater in a way that most of us cannot. © 2019 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26190 - Posted: 05.01.2019

By Benedict Carey “In my head, I churn over every sentence ten times, delete a word, add an adjective, and learn my text by heart, paragraph by paragraph,” wrote Jean-Dominique Bauby in his memoir, “The Diving Bell and the Butterfly.” In the book, Mr. Bauby, a journalist and editor, recalled his life before and after a paralyzing stroke that left him virtually unable to move a muscle; he tapped out the book letter by letter, by blinking an eyelid. Thousands of people are reduced to similarly painstaking means of communication as a result of injuries suffered in accidents or combat, of strokes, or of neurodegenerative disorders such as amyotrophic lateral sclerosis, or A.L.S., that disable the ability to speak. Now, scientists are reporting that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters, which a computer synthesized into speech.) “It’s formidable work, and it moves us up another level toward restoring speech” by decoding brain signals, said Dr. Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Fla., who was not a member of the research group. Researchers have developed other virtual speech aids. Those work by decoding the brain signals responsible for recognizing letters and words, the verbal representations of speech. But those approaches lack the speed and fluidity of natural speaking. The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence. © 2019 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 26174 - Posted: 04.25.2019

By Sandra G. Boodman “She never cried loudly enough to bother us,” recalled Natalia Weil of her daughter, who was born in 2011. Although Vivienne babbled energetically in her early months, her vocalizing diminished around the time of her first birthday. So did the quality of her voice, which dwindled from normal to raspy to little more than a whisper. Vivienne also was a late talker: She didn’t begin speaking until she was 2. Her suburban Maryland pediatrician initially suspected that a respiratory infection was to blame for the toddler’s hoarseness, and counseled patience. But after the problem persisted, the doctor diagnosed acid reflux and prescribed a drug to treat the voice problems reflux can cause. But Vivienne’s problem turned out to be far more serious — and unusual — than excess stomach acid. The day she learned what was wrong ranks among the worst of Weil’s life. “I had never heard of it,” said Weil, now 33, of her daughter’s diagnosis. “Most people haven’t.” [Photo caption: The chronic illness seriously damaged Vivienne Weil’s voice. The 8-year-old has blossomed recently after a new treatment restored it. Her mother says she is eagerly making new friends and has become “a happy, babbly little girl.” (Natalia Weil)] At first, Natalia, a statistician, and her husband, Jason, a photographer, were reassured by the pediatrician, who blamed a respiratory infection for their daughter’s voice problem. Her explanation sounded logical: Toddlers get an average of seven or eight colds annually. © 1996-2019 The Washington Post

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26075 - Posted: 03.25.2019

Bruce Bower Humankind’s gift of gab is not set in stone, and farming could help to explain why. Over the last 6,000 years or so, farming societies increasingly have substituted processed dairy and grain products for tougher-to-chew game meat and wild plants common in hunter-gatherer diets. Switching to those diets of softer, processed foods altered people’s jaw structure over time, rendering certain sounds like “f” and “v” easier to utter, and changing languages worldwide, scientists contend. People who regularly chew tough foods such as game meat experience a jaw shift that removes a slight overbite from childhood. But individuals who grow up eating softer foods retain that overbite into adulthood, say comparative linguist Damián Blasi of the University of Zurich and his colleagues. Computer simulations suggest that adults with an overbite are better able to produce certain sounds that require touching the lower lip to the upper teeth, the researchers report in the March 15 Science. Linguists classify those speech sounds, found in about half of the world’s languages, as labiodentals. And when Blasi and his team reconstructed language change over time among Indo-European tongues (SN: 11/25/17, p. 16), currently spoken from Iceland to India, the researchers found that the likelihood of using labiodentals in those languages rose substantially over the past 6,000 to 7,000 years. That was especially true when foods such as milled grains and dairy products started appearing (SN: 2/1/03, p. 67). “Labiodental sounds emerged recently in our species, and appear more frequently in populations with long traditions of eating soft foods,” Blasi said at a March 12 telephone news conference. |© Society for Science & the Public 2000 - 2019

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 26037 - Posted: 03.15.2019

Jules Howard It’s a bit garbled but you can definitely hear it in the mobile phone footage. As the chimpanzees arrange their branches into a makeshift ladder and one of them makes its daring escape from its Belfast zoo enclosure, some words ring out loud and clear: “Don’t escape, you bad little gorilla!” a child onlooker shouts from the crowd. And … POP … with that a tiny explosion goes off inside my head. Something knocks me back about this sentence. It’s a “kids-say-the-funniest things” kind of sentence, and in any other situation I’d offer a warm smile and a chuckle of approval. But not this time. This statement has brought out the pedant in me. At this point, you may wonder if I’m capable of fleshing out a 700-word article chastising a toddler for mistakenly referring to a chimpanzee as a gorilla. The good news is that, though I am more than capable of such a callous feat, I don’t intend to write about this child’s naive zoological error. In fact, this piece isn’t really about the (gorgeous, I’m sure) child. It’s about us. You and me, and the words we use. So let’s repeat it. That sentence, I mean. “Don’t escape, you bad little gorilla!” the child shouted. The words I’d like to focus on in this sentence are the words “you” and “bad”. The words “you” and “bad” are nice examples of a simple law of nearly all human languages. They are examples of Zipf’s law of abbreviation, where more commonly used words in a language tend to be shorter. It’s thought that this form of information-shortening allows the transmission of more complex information in a shorter amount of time, and it’s why one in four words you and I write or say is likely to be something of the “you, me, us, the, to” variety. © 2019 Guardian News & Media Limited
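The law Howard is describing lends itself to a quick check. Below is a minimal, hypothetical Python sketch (not from the article) that estimates the correlation between how often a word occurs in a text and how long that word is; on any reasonably long English text the value typically comes out negative, which is the pattern Zipf's law of abbreviation predicts. The sample passage and the function name are illustrative choices of mine.

```python
# A minimal sketch of Zipf's law of abbreviation: more frequent words
# tend to be shorter. The sample text below is a toy illustration; run
# the function on any longer English text for a stable estimate.
from collections import Counter
import math
import re

sample = """
Don't escape, you bad little gorilla! Common little words like you, me,
us, the and to stay short because we use them all the time, while rarer
words such as chimpanzee or enclosure can afford to be much longer.
"""

def freq_length_correlation(text: str) -> float:
    """Pearson correlation between log word frequency and word length."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    xs = [math.log(c) for c in counts.values()]  # log frequency per word type
    ys = [float(len(w)) for w in counts]         # length in letters per word type
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Zipf's law of abbreviation predicts a negative value on real text.
print(f"frequency-length correlation: {freq_length_correlation(sample):.2f}")
```

Feeding the function a full article or book chapter rather than the toy passage gives a much more stable, and typically negative, estimate.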

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25971 - Posted: 02.18.2019

Hannah Devlin Science correspondent People who stutter are being given electrical brain stimulation in a clinical trial aimed at improving fluency without the need for gruelling speech training. If shown to be effective, the technique – which involves passing an almost imperceptible current through the brain – could be routinely offered by speech therapists. “Stuttering can have serious effects on individuals in terms of their choice of career, what they can get out of education, their earning potential and personal life,” said Prof Kate Watkins, the trial’s principal investigator and a neuroscientist at the University of Oxford. About one in 20 young children go through a phase of stuttering, but most grow out of it. It is estimated that stuttering affects about one in 100 adults, with men about four times more likely to stutter than women. In the film The King’s Speech, a speech therapist uses a barrage of techniques to help King George VI, played by Colin Firth, to overcome his stutter, including breathing exercises and speaking without hearing his own voice. The royal client also learns that he can sing without stuttering, a common occurrence in people with the impediment. Speech therapy has advanced since the 1930s, but some of the most effective programmes for improving fluency still require intensive training and involve lengthy periods of using unnatural-sounding speech. © 2019 Guardian News and Media Limited

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25901 - Posted: 01.26.2019

By Catherine L. Caldwell-Harris, Ph.D. Does the language you speak influence how you think? This is the question behind the famous linguistic relativity hypothesis, that the grammar or vocabulary of a language imposes on its speakers a particular way of thinking about the world. The strongest form of the hypothesis is that language determines thought. This version has been rejected by most scholars. A weaker form is now widely accepted: if one language has a specific vocabulary item for a concept but another language does not, then speaking about the concept may happen more frequently or more easily. For example, if someone explained to you, an English speaker, the meaning of the German term Schadenfreude, you could recognize the concept, but you may not have used the concept as regularly as a comparable German speaker. Scholars are now interested in whether having a vocabulary item for a concept influences thought in domains far from language, such as visual perception. Consider the case of the "Russian blues." While English has a single word for blue, Russian has two words, goluboy for light blue and siniy for dark blue. These are considered "basic level" terms, like green and purple, since no adjective is needed to distinguish them. Lera Boroditsky and her colleagues displayed two shades of blue on a computer screen and asked Russian speakers to determine, as quickly as possible, whether the two blue colors were the same or different. Discriminations were fastest when the displayed colors were goluboy and siniy, rather than two shades of goluboy or two shades of siniy. The reaction time advantage for lexically distinct blue colors was strongest when the blue hues were perceptually similar.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 14: Attention and Consciousness
Link ID: 25869 - Posted: 01.16.2019

By Kelly Servick For many people who are paralyzed and unable to speak, signals of what they'd like to say hide in their brains. No one has been able to decipher those signals directly. But three research teams recently made progress in turning data from electrodes surgically placed on the brain into computer-generated speech. Using computational models known as neural networks, they reconstructed words and sentences that were, in some cases, intelligible to human listeners. None of the efforts, described in papers in recent months on the preprint server bioRxiv, managed to re-create speech that people had merely imagined. Instead, the researchers monitored parts of the brain as people either read aloud, silently mouthed speech, or listened to recordings. But showing the reconstructed speech is understandable is "definitely exciting," says Stephanie Martin, a neural engineer at the University of Geneva in Switzerland who was not involved in the new projects. People who have lost the ability to speak after a stroke or disease can use their eyes or make other small movements to control a cursor or select on-screen letters. (Cosmologist Stephen Hawking tensed his cheek to trigger a switch mounted on his glasses.) But if a brain-computer interface could re-create their speech directly, they might regain much more: control over tone and inflection, for example, or the ability to interject in a fast-moving conversation. The hurdles are high. "We are trying to work out the pattern of … neurons that turn on and off at different time points, and infer the speech sound," says Nima Mesgarani, a computer scientist at Columbia University. "The mapping from one to the other is not very straightforward." How these signals translate to speech sounds varies from person to person, so computer models must be "trained" on each individual. And the models do best with extremely precise data, which requires opening the skull. © 2018 American Association for the Advancement of Science

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 25837 - Posted: 01.03.2019

Emily Hanford Jack Silva didn't know anything about how children learn to read. What he did know is that a lot of students in his district were struggling. Silva is the chief academic officer for Bethlehem, Pa., public schools. In 2015, only 56 percent of third-graders were scoring proficient on the state reading test. That year, he set out to do something about that. "It was really looking yourself in the mirror and saying, 'Which 4 in 10 students don't deserve to learn to read?' " he recalls. Bethlehem is not an outlier. Across the country, millions of kids are struggling. According to the National Assessment of Educational Progress, 32 percent of fourth-graders and 24 percent of eighth-graders aren't reading at a basic level. Fewer than 40 percent are proficient or advanced. One excuse that educators have long offered to explain poor reading performance is poverty. In Bethlehem, a small city in Eastern Pennsylvania that was once a booming steel town, there are plenty of poor families. But there are fancy homes in Bethlehem, too, and when Silva examined the reading scores he saw that many students at the wealthier schools weren't reading very well either. Silva didn't know what to do. To begin with, he didn't know how students in his district were being taught to read. So, he assigned his new director of literacy, Kim Harper, to find out. Harper attended a professional-development day at one of the district's lowest-performing elementary schools. The teachers were talking about how students should attack words in a story. When a child came to a word she didn't know, the teacher would tell her to look at the picture and guess. © 2019 npr

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 25835 - Posted: 01.03.2019

By Ramin Skibba Even when you’re fluent in two languages, it can be a challenge to switch back and forth smoothly between them. It’s common to mangle a split verb in Spanish, use the wrong preposition in English or lose sight of the connection between the beginning and end of a long German sentence. So, does mastering a second language hone our multitasking skills or merely muddle us up? This debate has been pitting linguists and psychologists against one another since the 1920s, when many experts thought that bilingual children were fated to suffer cognitive impairments later in life. But the science has marched on. Psycholinguist Mark Antoniou of Western Sydney University in Australia argues that bilingualism — as he defines it, using at least two languages in your daily life — may benefit our brains, especially as we age. In a recent article, he addressed how best to teach languages to children and laid out evidence that multiple-language use on a regular basis may help delay the onset of Alzheimer’s disease. This conversation has been edited for length and clarity. Q: What are the benefits of bilingualism? A: The first main advantage involves what’s loosely referred to as executive function. This describes skills that allow you to control, direct and manage your attention, as well as your ability to plan. It also helps you ignore irrelevant information and focus on what’s important. Because a bilingual person has mastery of two languages, and the languages are activated automatically and subconsciously, the person is constantly managing the interference of the languages so that she or he doesn’t say the wrong word in the wrong language at the wrong time. The brain areas responsible for that are also used when you’re trying to complete a task while there are distractions. The task could have nothing to do with language; it could be trying to listen to something in a noisy environment or doing some visual task. The muscle memory developed from using two languages also can apply to different skills. © 1996-2018 The Washington Post

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 25767 - Posted: 12.10.2018

By Stephen L. Macknik Sensory information flowing into our brains is inherently ambiguous. We perceive 3D despite having only 2D images on our retinas. It’s an illusion. A sunburn on our face can feel weirdly cool. Illusion. A little perfume smells good but too much is obnoxious. Also an illusion. The brain expends a great deal of effort to disambiguate the meaning of each incoming signal—often using context as a clue—but the neural mechanisms of these abilities remain mysterious. Neuroscientists are a little closer to understanding how to study these mechanisms, thanks to a new study by Kevin Ortego, Michael Pitts, & Enriqueta Canseco-Gonzalez from Pitts's lab at Reed College, presented at the 2018 Society for Neuroscience meeting, on the brain's responses to both visual and language illusions. Illusions are experiences in which the physical reality is different from our perception or expectation. Ambiguous stimuli are important tools to science because the physical reality can legitimately be interpreted in more than one way. Take the classic rabbit-duck illusion, published by the Fliegende Blätter magazine, in Munich, at the end of the 19th century, in which the image can be seen as either a duck or a rabbit. Bistable illusions like these can flip back and forth between competing interpretations, but one cannot see both percepts at the same time. Recent examples of ambiguous illusions show that numerous interpretations are possible. The first place winner of this year's Best Illusion of the Year Contest, created by Kokichi Sugihara, shows three different ways of perceiving the same object, depending on your specific vantage point. © 2018 Scientific American

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25744 - Posted: 12.03.2018

By Sandra E. Garcia For years, parents of a Texas boy believed he was mostly nonverbal because of a brain aneurysm he had when he was 10 days old. The boy, Mason Motz, 6, of Katy, Tex., started going to speech therapy when he was 1. In addition to his difficulties speaking, he was given a diagnosis of Sotos syndrome, a disorder that can cause learning disabilities or delayed development, according to the National Institutes of Health. His parents, Dalan and Meredith Motz, became used to how their son communicated. “He could pronounce the beginning of the word but would utter the end of the word,” Ms. Motz said in an interview. “My husband and I were the only ones that could understand him.” That all changed in April 2017, when Dr. Amy Luedemann-Lazar, a pediatric dentist, was performing unrelated procedures on Mason’s teeth. She noticed that his lingual frenulum, the band of tissue under his tongue, was shorter than is typical and was attached close to the tip of his tongue, keeping him from moving it freely. Dr. Luedemann-Lazar ran out to the waiting room to ask the Motzes if she could untie Mason’s tongue using a laser. After a quick Google search, the parents gave her permission to do so. Dr. Luedemann-Lazar completed the procedure in 10 seconds, she said. After his surgery, Mason went home. He had not eaten all day. Ms. Motz heard him say: “I’m hungry. I’m thirsty. Can we watch a movie?” “We’re sitting here thinking, ‘Did he just say that?’” Ms. Motz said. “It sounded like words.” © 2018 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25510 - Posted: 10.01.2018

By Victoria Gill Science correspondent, BBC News Our primate cousins have surprised and impressed scientists in recent years, with revelations about monkeys' tool-using abilities and chimps' development of complex sign language. But researchers are still probing the question: why are we humans the only apes that can talk? That puzzle has now led to an insight into how different non-human primates' brains are "wired" for vocal ability. A new study has compared different primate species' brains. It revealed that primates with wider "vocal repertoires" had more of their brain dedicated to controlling their vocal apparatus. That suggests that our own speaking skills may have evolved as our brains gradually rewired to control that apparatus, rather than purely because we're smarter than non-human apes. Humans and other primates have very similar vocal anatomy - in terms of their tongues and larynx. That's the physical machinery in the throat which allows us to turn air into sound. So, as lead researcher Dr Jacob Dunn from Anglia Ruskin University in Cambridge explained, it remains a mystery that only human primates can actually talk. "That's likely due to differences in the brain," Dr Dunn told BBC News, "but there haven't been comparative studies across species." So how do our primate brains differ? That comparison is exactly what Dr Dunn and his colleague Prof Jeroen Smaers set out to do. They ranked 34 different primate species based on their vocal abilities - the number of distinct calls they are known to make in the wild. They then examined the brain of each species, using information from existing, preserved brains that had been kept for research. © 2018 BBC

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25314 - Posted: 08.10.2018

Tina Hesman Saey Humans’ gift of gab probably wasn’t the evolutionary boon that scientists once thought. There’s no evidence that FOXP2, sometimes called “the language gene,” gave humans such a big evolutionary advantage that it was quickly adopted across the species, what scientists call a selective sweep. That finding, reported online August 2 in Cell, follows years of debate about the role of FOXP2 in human evolution. In 2002, the gene became famous when researchers thought they had found evidence that a tweak in FOXP2 spread quickly to all humans — and only humans — about 200,000 years ago. That tweak swapped two amino acids in the human version of the gene for ones different than in other animals’ versions of the gene. FOXP2 is involved in vocal learning in songbirds, and people with mutations in the gene have speech and language problems. Many researchers initially thought that the amino acid swap was what enabled humans to speak. Speech would have given humans a leg up on competition from Neandertals and other ancient hominids. That view helped make FOXP2 a textbook example of selective sweeps. Some researchers even suggested that FOXP2 was the gene that defines humans, until it became clear that the gene did not allow humans to settle the world and replace other hominids, says archaeogeneticist Johannes Krause at the Max Planck Institute for the Science of Human History in Jena, Germany, who was not involved in the study. “It was not the one gene to rule them all.” |© Society for Science & the Public 2000 - 2018

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25293 - Posted: 08.04.2018

Matthew Warren The evolution of human language was once thought to have hinged on changes to a single gene that were so beneficial that they raced through ancient human populations. But an analysis now suggests that this gene, FOXP2, did not undergo changes in Homo sapiens’ recent history after all — and that previous findings might simply have been false signals. “The situation’s a lot more complicated than the very clean story that has been making it into textbooks all this time,” says Elizabeth Atkinson, a population geneticist at the Broad Institute of Harvard and MIT in Cambridge, Massachusetts, and a co-author of the paper, which was published on 2 August in Cell [1]. Originally discovered in a family who had a history of profound speech and language disorders, FOXP2 was the first gene found to be involved in language production [2]. Later research touted its importance to the evolution of human language. A key 2002 paper found that humans carry two mutations to FOXP2 not found in any other primates [3]. When the researchers looked at genetic variation surrounding these mutations, they found the signature of a ‘selective sweep’ — in which a beneficial mutation quickly becomes common across a population. This change to FOXP2 seemed to have happened in the past 200,000 years, the team reported in Nature. The paper has been cited hundreds of times in the scientific literature. © 2018 Springer Nature Limited.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25292 - Posted: 08.04.2018

By Michael Erard, Catherine Matacic If you want a no-fuss, no-muss pet, consider the Bengalese finch. Dubbed the society finch for its friendliness, it is often used by breeders to foster unrelated chicks. But put the piebald songbird next to its wild ancestor, the white-rumped munia, and you can both see and hear the differences: The aggressive munia tends to be darker and whistles a scratchy, off-kilter tune, whereas the pet finch warbles a melody so complex that even nonmusicians may wonder how this caged bird learned to sing. All this makes the domesticated and wild birds a perfect natural experiment to help explore an upstart proposal about human evolution: that the building blocks of language are a byproduct of brain alterations that arose when natural selection favored cooperation among early humans. According to this hypothesis, skills such as learning complex calls, combining vocalizations, and simply knowing when another creature wants to communicate all came about as a consequence of pro-social traits like kindness. If so, domesticated animals, which are bred to be good-natured, might exhibit such communication skills too. The idea is rooted in a much older one: that humans tamed themselves. This self-domestication hypothesis, which got its start with Charles Darwin, says that when early humans started to prefer cooperative friends and mates to aggressive ones, they essentially domesticated themselves. Along with tameness came evolutionary changes seen in other domesticated mammals—smoother brows, shorter faces, and more feminized features—thanks in part to lower levels of circulating androgens (such as testosterone) that tend to promote aggression. © 2018 American Association for the Advancement of Science.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 8: Hormones and Sex
Link ID: 25289 - Posted: 08.03.2018

Sara Kiley Watson Read these sentences aloud: I never said she stole my money. I never said she stole my money. I never said she stole my money. Emphasizing any one of the words over the others makes the string of words mean something completely different. "Pitch change" — the vocal quality we use to emphasize words — is a crucial part of human communication, whether spoken or sung. Recent research from Dr. Edward Chang's lab at the University of California, San Francisco's epilepsy center has narrowed down which part of the brain controls our ability to regulate the pitch of our voices when we speak or sing — the part that enables us to differentiate between the utterances "Let's eat, Grandma" and "Let's eat Grandma." Scientists already knew, more or less, what parts of the brain are engaged in speech, says Chang, a professor of neurological surgery. What the new research has allowed, he says, is a better understanding of the neural code of pitch and its variations — how information about pitch is represented in the brain. Chang's team was able to study these neural codes with the help of a particular group of study volunteers: epilepsy patients. Chang treats people whose seizures can't be medically controlled; these patients need surgery to stop the misfiring neurons. He puts electrodes in each patient's brain to help guide the scalpel during their surgery. © 2018 npr

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25265 - Posted: 07.28.2018

by Erin Blakemore What if you wanted to speak but couldn’t string together recognizable words? What if someone spoke to you but you couldn’t understand what they were saying? These situations aren’t hypothetical for the more than 1 million Americans with aphasia, which affects the ability to understand and speak with others. Aphasia occurs in people who have had strokes, traumatic brain injuries or other brain damage. Some victims have a scrambled vocabulary or are unable to express themselves; others find it hard to make sense of the words they read or hear. The disorder doesn’t reduce intelligence, only a person’s ability to communicate. And although there is no definitive cure, it can be treated. Many people make significant recoveries from aphasia after a stroke, for example. July is Aphasia Awareness Month, a fine time to learn more about the disorder. The TED-Ed series offers a lesson on aphasia, complete with an engaging video that describes the condition, its causes and its treatment, along with a quiz, discussion questions and other resources. Created by Susan Wortman-Jutt, a speech-language pathologist who treats aphasia, it’s a good introduction to the disorder and how damage to the brain’s language centers can hamper an individual’s ability to communicate. Another resource is the National Aphasia Association. Its website, aphasia.org, contains information about the disorder and links to support and treatment options. Aphasia can have lasting effects, but there is hope for people whose brains are injured. © 1996-2018 The Washington Post

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25180 - Posted: 07.07.2018

Philip Lieberman In the 1960s, researchers at Yale University’s Haskins Laboratories attempted to produce a machine that would read printed text aloud to blind people. Alvin Liberman and his colleagues figured the solution was to isolate the “phonemes,” the ostensible beads-on-a-string equivalent to movable type that linguists thought existed in the acoustic speech signal. Linguists had assumed (and some still do) that phonemes were roughly equivalent to the letters of the alphabet and that they could be recombined to form different words. However, when the Haskins group snipped segments from tape recordings of words or sentences spoken by radio announcers or trained phoneticians, and tried to link them together to form new words, the researchers found that the results were incomprehensible [1]. That’s because, as most speech scientists agree, there is no such thing as pure phonemes (though some linguists still cling to the idea). Discrete phonemes do not exist as such in the speech signal, and instead are always blended together in words. Even “stop consonants,” such as [b], [p], [t], and [g], don’t exist as isolated entities; it is impossible to utter a stop consonant without also producing a vowel before or after it. As such, the consonant [t] in the spoken word tea, for example, sounds quite different from that in the word to. To produce the vowel sound in to, the speakers’ lips are protruded and narrowed, while they are retracted and open for the vowel sound in tea, yielding different acoustic representations of the initial consonant. Moreover, when the Haskins researchers counted the number of putative phonemes that would be transmitted each second during normal conversations, the rate exceeded that which can be interpreted by the human auditory system—the synthesized phrases would have become an incomprehensible buzz. © 1986 - 2018 The Scientist.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25176 - Posted: 07.06.2018

Geoffrey Pullum One area outshines all others in provoking crazy talk about language in the media, and that is the idea of language acquisition in nonhuman species. On June 19 came the sad news of the death of Koko, the western lowland gorilla cared for by Francine “Penny” Patterson at a sanctuary in the Santa Cruz Mountains. Many obituaries appeared, and the press indulged as never before in sentimental nonsense about talking with the animals. Credulous repetition of Koko’s mythical prowess in sign language was everywhere. Jeffrey Kluger’s essay in Time was unusually extreme in its blend of emotion, illogicality, wishful thinking, and outright falsehood. Koko, he tells us, once made a sequence of hand signs that Patterson interpreted as “you key there me cookie”; and Kluger calls it “impressive … for the clarity of its meaning.” Would you call it clear and meaningful if it were uttered by an adult human? As always with the most salient cases of purported ape signing, Koko was flailing around producing signs at random in a purely situation-bound bid to obtain food from her trainer, who was in control of a locked treat cabinet. The fragmentary and anecdotal evidence about Koko’s much-prompted and much-rewarded sign usage was never sufficient to show that the gorilla even understood the meanings of individual signs — that key denotes a device intended to open locks, that the word cookie is not appropriately applied to muffins, and so on. © 2018 The Chronicle of Higher Education

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25155 - Posted: 06.29.2018