Links for Keyword: Language

Links 41 - 60 of 689

By Virginia Morell Say “sit!” to your dog, and—if he’s a good boy—he’ll likely plant his rump on the floor. But would he respond correctly if the word were spoken by a stranger, or someone with a thick accent? A new study shows he will, suggesting dogs perceive spoken words in a sophisticated way long thought unique to humans. “It’s a very solid and interesting finding,” says Tecumseh Fitch, an expert on vertebrate communication at the University of Vienna who was not involved in the research. The way we pronounce words changes depending on our sex, age, and even social rank. Some as-yet-unknown neural mechanism enables us to filter out differences in accent and pronunciation, helping us understand spoken words regardless of the speaker. Animals like zebra finches, chinchillas, and macaques can be trained to do this, but until now only humans had been shown to do it spontaneously. In the new study, Holly Root-Gutteridge, a cognitive biologist at the University of Sussex in Brighton, U.K., and her colleagues ran a test that others have used to show dogs can recognize other dogs from their barks. The researchers filmed 42 dogs of different breeds as they sat with their owners near an audio speaker that played six monosyllabic, noncommand words with similar sounds, such as “had,” “hid,” and “who’d.” The words were spoken not by the dog’s owner but by several strangers, men and women of different ages and with different accents. © 2019 American Association for the Advancement of Science.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26866 - Posted: 12.04.2019

Jon Hamilton When we hear a sentence, or a line of poetry, our brains automatically transform the stream of sound into a sequence of syllables. But scientists haven't been sure exactly how the brain does this. Now, researchers from the University of California, San Francisco, think they've figured it out. The key is detecting a rapid increase in volume that occurs at the beginning of a vowel sound, they report Wednesday in Science Advances. "Our brain is basically listening for these time points and responding whenever they occur," says Yulia Oganian, a postdoctoral scholar at UCSF. The finding challenges a popular idea that the brain monitors speech volume continuously to detect syllables. Instead, it suggests that the brain periodically "samples" spoken language looking for specific changes in volume. The finding is "in line" with a computer model designed to simulate the way a human brain decodes speech, says Oded Ghitza, a research professor in the biomedical engineering department at Boston University who was not involved in the study. Detecting each rapid increase in volume associated with a syllable gives the brain, or a computer, an efficient way to deal with the "stream" of sound that is human speech, Ghitza says. And syllables, he adds, are "the basic Lego blocks of language." Oganian's study focused on a part of the brain called the superior temporal gyrus. "It's an area that has been known for about 150 years to be really important for speech comprehension," Oganian says. "So we knew if you can find syllables somewhere, it should be there." The team studied a dozen patients preparing for brain surgery to treat severe epilepsy. As part of the preparation, surgeons had placed electrodes over the area of the brain involved in speech. © 2019 npr
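
What the researchers call a rapid increase in volume is a sharp rise in the speech amplitude envelope at the start of a vowel. The sketch below is a minimal illustration of that idea, not the analysis pipeline used in the study: it extracts an amplitude envelope, smooths it, and marks the moments where the envelope's rate of increase peaks. The file name and thresholds are placeholder assumptions.

```python
# Minimal sketch: flag candidate syllable onsets as peaks in the rate of
# increase of the speech amplitude envelope. Illustrative only; not the
# pipeline used in the Science Advances study.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, butter, filtfilt, find_peaks

def envelope_peak_rate(wav_path="speech.wav"):  # "speech.wav" is a placeholder
    fs, x = wavfile.read(wav_path)
    x = x.astype(float)
    if x.ndim > 1:                 # mix down to mono if needed
        x = x.mean(axis=1)

    # Amplitude envelope via the analytic signal, low-pass filtered at 10 Hz,
    # roughly the time scale of syllables.
    env = np.abs(hilbert(x))
    b, a = butter(2, 10.0 / (fs / 2), btype="low")
    env = filtfilt(b, a, env)

    # Rate of change of the envelope, keeping only the increases.
    denv = np.maximum(np.diff(env) * fs, 0.0)

    # Peaks in the rate of increase approximate vowel onsets.
    peaks, _ = find_peaks(denv, height=0.2 * denv.max(), distance=int(0.1 * fs))
    return peaks / fs              # onset times in seconds

if __name__ == "__main__":
    print(envelope_peak_rate())
```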

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 26841 - Posted: 11.21.2019

By Nicholas Bakalar People who never learned to read and write may be at increased risk for dementia. Researchers studied 983 adults 65 and older with four or fewer years of schooling. Ninety percent were immigrants from the Dominican Republic, where there were limited opportunities for schooling. Many had learned to read outside of school, but 237 could not read or write. Over an average of three and a half years, the participants periodically took tests of memory, language and reasoning. Illiterate men and women were 2.65 times as likely as the literate to have dementia at the start of the study, and twice as likely to have developed it by the end. Illiterate people, however, did not show a faster rate of decline in skills than those who could read and write. The analysis, in Neurology, controlled for sex, hypertension, diabetes, heart disease and other dementia risk factors. “Early life exposures and early life social opportunities have an impact on later life,” said the senior author, Jennifer J. Manly, a professor of neuropsychology at Columbia. “That’s the underlying theme here. There’s a life course of exposures and engagements and opportunities that lead to a healthy brain later in life.” “We would like to expand this research to other populations,” she added. “Our hypothesis is that this is relevant and consistent across populations of illiterate adults.” © 2019 The New York Times Company

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 4: Development of the Brain; Chapter 15: Language and Lateralization
Link ID: 26838 - Posted: 11.21.2019

By Natalia Sylvester My parents refused to let my sister and me forget how to speak Spanish by pretending they didn’t understand when we spoke English. Spanish was the only language we were allowed to speak in our one-bedroom apartment in Miami in the late 1980s. We both graduated from English as a second language lessons in record time as kindergartners and first graders, and we longed to play and talk and live in English as if it were a shiny new toy. “No te entiendo,” my mother would say, shaking her head and shrugging in feigned confusion anytime we slipped into English. My sister and I would let out exasperated sighs at having to repeat ourselves in Spanish, only to be interrupted by a correction of our grammar and vocabulary after every other word. One day you’ll thank me, my mother retorted. That day has come to pass 30 years later in ordinary places like Goodwill, a Walmart parking lot, a Costco Tire Center. I’m most thankful that I can speak Spanish because it has allowed me to help others. There was the young mother who wanted to know whether she could leave a cumbersome diaper bin aside at the register at Goodwill while she shopped. The cashier shook her head dismissively and said she didn’t understand. It wasn’t difficult to read the woman’s gestures — she was struggling to push her baby’s carriage while lugging the large box around the store. Even after I told the cashier what the woman was saying, her irritation was palpable. The air of judgment is one I’ve come to recognize: How dare this woman not speak English, how dare this other woman speak both English and Spanish. It was a small moment, but it speaks to how easy it would have been for the cashier to ignore a young Latina mother struggling to care for her child had there not been someone around to interpret. “I don’t understand,” she kept saying, though the mother’s gestures transcended language. I choose not to understand is what she really meant. © 2019 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26629 - Posted: 09.21.2019

By Catherine Matacic Italians are some of the fastest speakers on the planet, chattering at up to nine syllables per second. Many Germans, on the other hand, are slow enunciators, delivering five to six syllables in the same amount of time. Yet in any given minute, Italians and Germans convey roughly the same amount of information, according to a new study. Indeed, no matter how fast or slowly languages are spoken, they tend to transmit information at about the same rate: 39 bits per second, about twice the speed of Morse code. “This is pretty solid stuff,” says Bart de Boer, an evolutionary linguist who studies speech production at the Free University of Brussels, but was not involved in the work. Language lovers have long suspected that information-heavy languages—those that pack more information about tense, gender, and speaker into smaller units, for example—move slowly to make up for their density of information, he says, whereas information-light languages such as Italian can gallop along at a much faster pace. But until now, no one had the data to prove it. Scientists started with written texts from 17 languages, including English, Italian, Japanese, and Vietnamese. They calculated the information density of each language in bits—the same unit that describes how quickly your cellphone, laptop, or computer modem transmits information. They found that Japanese, which has only 643 syllables, had an information density of about 5 bits per syllable, whereas English, with its 6949 syllables, had a density of just over 7 bits per syllable. Vietnamese, with its complex system of six tones (each of which can further differentiate a syllable), topped the charts at 8 bits per syllable. © 2019 American Association for the Advancement of Science
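
The trade-off described here is simple arithmetic: information rate equals bits per syllable times syllables per second. The snippet below multiplies the per-syllable densities quoted above by speech rates; the rates are illustrative assumptions (the excerpt gives only Italian's roughly nine syllables per second), chosen to show how dense-but-slow and light-but-fast languages can land near the same overall rate.

```python
# Information rate = density (bits/syllable) x speech rate (syllables/second).
# Densities are the ones quoted in the article; the speech rates are
# illustrative assumptions, not figures from the study.
languages = {
    #              bits/syllable  syllables/second
    "Japanese":   (5.0,           7.8),
    "English":    (7.1,           5.5),
    "Vietnamese": (8.0,           4.9),
}

for name, (density, rate) in languages.items():
    print(f"{name:10s} ~{density * rate:.1f} bits/s")
# Each product comes out near the ~39 bits/s reported in the study.
```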

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26576 - Posted: 09.05.2019

By Carolyn Wilke In learning to read, squiggles and lines transform into letters or characters that carry meaning and conjure sounds. A trio of cognitive neuroscientists has now mapped where that journey plays out inside the brain. As readers associate symbols with pronunciation and part of a word, a pecking order of brain areas processes the information, the researchers report August 19 in the Proceedings of the National Academy of Sciences. The finding unveils some of the mystery behind how the brain learns to tie visual cues with language (SN Online: 4/27/16). “We didn’t evolve to read,” says Jo Taylor, who is now at University College London but worked on the study while at Aston University in Birmingham, England. “So we don’t [start with] a bit of the brain that does reading.” Taylor — along with Kathy Rastle at Royal Holloway University of London in Egham and Matthew Davis at the University of Cambridge — zoomed in on a region at the back and bottom of the brain, called the ventral occipitotemporal cortex, that is associated with reading. Over two weeks, the scientists taught made-up words written in two unfamiliar, archaic scripts to 24 native English–speaking adults. The words were assigned the meanings of common nouns, such as lemon or truck. Then the researchers used functional MRI scans to track which tiny chunks of brain in that region became active when participants were shown the words learned in training. © Society for Science & the Public 2000–2019

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 26548 - Posted: 08.27.2019

Researchers believe that stuttering — a potentially lifelong and debilitating speech disorder — stems from problems with the circuits in the brain that control speech, but precisely how and where these problems occur is unknown. Using a mouse model of stuttering, scientists report that a loss of brain cells called astrocytes is associated with stuttering. The mice had been engineered with a human gene mutation previously linked to stuttering. The study, which appeared online in the Proceedings of the National Academy of Sciences, offers insights into the neurological deficits associated with stuttering. The loss of astrocytes, supporting cells in the brain, was most prominent in the corpus callosum, a part of the brain that bridges the two hemispheres. Previous imaging studies have identified differences in the brains of people who stutter compared to those who do not. Furthermore, some of these studies in people have revealed structural and functional problems in the same brain region as the new mouse study. The study was led by Dennis Drayna, Ph.D., of the Section on Genetics of Communication Disorders at the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health. Researchers at the Washington University School of Medicine in St. Louis and from NIH’s National Institute of Biomedical Imaging and Bioengineering and National Institute of Mental Health collaborated on the research. “The identification of genetic, molecular, and cellular changes that underlie stuttering has led us to understand persistent stuttering as a brain disorder,” said Andrew Griffith, M.D., Ph.D., NIDCD scientific director. “Perhaps even more importantly, pinpointing the brain region and cells that are involved opens opportunities for novel interventions for stuttering — and possibly other speech disorders.”

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 4: Development of the Brain
Link ID: 26513 - Posted: 08.19.2019

By Derrick Bryson Taylor Many owners struggle to teach their dogs to sit, fetch or even bark on command, but John W. Pilley, a professor emeritus of psychology at Wofford College, taught his Border collie to understand more than 1,000 nouns, a feat that earned them both worldwide recognition. For some time, Dr. Pilley had been conducting his own experiment teaching dogs the names of objects and was inspired by Border collie farmers to rethink his methods. Dr. Pilley was given a black-and-white Border collie as a gift by his wife, Sally. For three years, Dr. Pilley trained the dog, named Chaser, four to five hours a day: He showed her an object, said its name up to 40 times, then hid it and asked her to find it. He used 800 cloth animal toys, 116 balls, 26 Frisbees and an assortment of plastic items to ultimately teach Chaser 1,022 nouns. In 2013, Dr. Pilley published findings explaining that Chaser had been taught to understand sentences containing a prepositional object, verb and direct object. Chaser died on Tuesday at 15. She had been living with Dr. Pilley’s wife and their daughter Robin in Spartanburg. Dr. Pilley died last year at 89. Another daughter, Pilley Bianchi, said on Saturday that Chaser had been in declining health in recent weeks. “The vet really determined that she died of natural causes,” Ms. Bianchi said. “She went down very quickly.” Ms. Bianchi, who helped her father train Chaser, said the dog had been undergoing acupuncture for arthritis but had no other known illnesses. © 2019 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26458 - Posted: 07.29.2019

By Ryan Dalton In the dystopian world of George Orwell’s Nineteen Eighty-Four, the government of Oceania aims to achieve thought control through the restriction of language. As explained by the character ‘Syme’, a lexicologist who is working to replace the English language with the greatly simplified ‘Newspeak’: “Don’t you see that the whole aim of Newspeak is to narrow the range of thought?” While Syme’s own reflections were short-lived, the merits of his argument were not: the words and structure of a language can influence the thoughts and decisions of its speakers. This holds for English and Greek, Inuktitut and Newspeak. It also may hold for the ‘neural code’, the basic electrical vocabulary of the neurons in the brain. Neural codes, like spoken languages, are tasked with conveying all manner of information. Some of this information is immediately required for survival; other information has a less acute use. To accommodate these different needs, a balance is struck between the richness of information being transferred and the speed or reliability with which it is transferred. Where the balance is set depends on context. In the example of language, the mention of the movie Jaws at a dinner party might result in a wide-ranging and patient—if disconcerting—discussion around the emotional impact of the film. In contrast, the observation of a dorsal fin breaking through the surf at the beach would probably elicit a single word, screamed by many beachgoers at once: “shark!” In one context, the language used has been optimized for richness; in the other, for speed and reliability. © 2019 Scientific American

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 15: Language and Lateralization
Link ID: 26383 - Posted: 07.03.2019

By Dan Falk Suppose I give you the name of a body part, and ask you to list its main uses: I say legs, you say walking and running; I say ears, you say hearing. And if I say the brain? Well, that’s a no-brainer (so to speak); obviously the brain is for thinking. Of course, it does a bunch of other things, too; after all, when the brain ceases to function, we die — but clearly it’s where cognition happens. Or is it? No one would argue that the brain isn’t vital for thinking — but quite a few 21st-century psychologists and cognitive scientists believe that the body, as well as the brain, is needed for thinking to actually happen. And it’s not just that the brain needs a body to keep it alive (that much is obvious), but rather, that the brain and the body somehow work together: it’s the combination of brain-plus-body that creates the mental world. The latest version of this proposition comes from Barbara Tversky, a professor emerita of psychology at Stanford University who also teaches at Columbia. Her new book, “Mind in Motion: How Action Shapes Thought,” is an extended argument for the interplay of mind and body in enabling cognition. She draws on many different lines of evidence, including the way we talk about movement and space, the way we use maps, the way we talk about and use numbers, and the way we gesture. Tversky argues that gesturing is more than just a by-product of speech: it literally helps us think. Copyright 2019 Undark

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 5: The Sensorimotor System
Link ID: 26364 - Posted: 06.28.2019

By Darcey Steinke The J in “juice” was the first letter-sound, according to my mother, that I repeated in staccato, going off like a skipping record. This was when I was 3, before my stutter was stigmatized as shameful. In those earliest years my relationship to language was uncomplicated: I assumed my voice was more like a bird’s or a squirrel’s than my playmates’. This seemed exciting. I imagined, unlike fluent children, I might be able to converse with wild creatures, I’d learn their secrets, tell them mine and forge friendships based on interspecies intimacy. School put an end to this fantasy. Throughout elementary school I stuttered every time a teacher called on me and whenever I was asked to read out loud. In the third grade the humiliation of being forced to read a few paragraphs about stewardesses in the Weekly Reader still burns. The ST is hard for stutterers. What would have taken a fluent child five minutes took me an excruciating 25. It was around this time that I started separating the alphabet into good letters, V as well as M, and bad letters, S, F and T, plus the terrible vowel sounds, open and mysterious and nearly impossible to wrangle. Each letter had a degree of difficulty that changed depending upon its position in the sentence. Much later when I read that Nabokov as a child assigned colors to letters, it made sense to me that the hard G looked like “vulcanized rubber” and the R, “a sooty rag being ripped.” My beloved V, in the Nabokovian system, was a jewel-like “rose quartz.” My mother, knowing that kids ridiculed me — she once found a book, “The Mystery of the Stuttering Parrot,” that had been tossed onto our lawn — wanted to eradicate my speech impediment. She encouraged me to practice the strategies taught to me by a string of therapists, bouncing off an easy sound to a harder one and unclenching my throat, trying to slide out of a stammer. When I was 13 she got me a scholarship to a famous speech therapy program at a college near our house in Virginia. © 2019 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26313 - Posted: 06.10.2019

By Malin Fezehai Muazzez Kocek, 46, is considered one of the best whistlers in Kuşköy, a village tucked away in the picturesque Pontic Mountains in Turkey’s northern Giresun province. Her whistle can be heard over the area’s vast tea fields and hazelnut orchards, several miles farther than a person’s voice. When President Recep Tayyip Erdogan of Turkey visited Kuşköy in 2012, she greeted him and proudly whistled, “Welcome to our village!” She uses kuş dili, or “bird language,” which transforms the full Turkish vocabulary into varied-pitch frequencies and melodic lines. For hundreds of years, this whistled form of communication has been critical for the farming community in the region, allowing complex conversations over long distances and facilitating animal herding. Today, about 10,000 people in the larger region speak it, but because of the increased use of cellphones, which remove the need for a voice to carry over great distances, that number is dwindling. The language is at risk of dying out. Ms. Kocek began learning bird language at six years old by working in the fields with her father. She has tried to pass the tradition on to her three daughters; even though they understand it, only her middle child, Kader Kocek, 14, knows how to speak it and can whistle Turkey’s national anthem. Turkey is one of a handful of countries in the world where whistling languages exist. Similar ways of communicating are known to have been used in the Canary Islands, Greece, Mexico, and Mozambique. They fascinate researchers and linguistic experts because they suggest that the brain structures that process language are not as fixed as once thought. There is a long-held belief that language interpretation occurs mostly in the left hemisphere, and melody, rhythm and singing on the right. But a study that biopsychologist Onur Güntürkün conducted in Kuşköy suggests that whistling language is processed in both hemispheres. © 2019 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26279 - Posted: 05.30.2019

Laura Sanders Advantages of speaking a second language are obvious: easier logistics when traveling, wider access to great literature and, of course, more people to talk with. Some studies have also pointed to the idea that polyglots have stronger executive functioning skills, brain abilities such as switching between tasks and ignoring distractions. But a large study of bilingual children in the U.S. finds scant evidence of those extra bilingual brain benefits. Bilingual children performed no better in tests measuring such thinking skills than children who knew just one language, researchers report May 20 in Nature Human Behaviour. To look for a relationship between bilingualism and executive function, researchers relied on a survey of U.S. adolescents called the ABCD study. From data collected at 21 research sites across the country, researchers identified 4,524 kids ages 9 and 10. Of these children, 1,740 spoke English and a second language (mostly Spanish, though 40 second languages were represented). On three tests that measured executive function, such as the ability to ignore distractions or quickly switch between tasks with different rules, the bilingual children performed similarly to children who spoke only English, the researchers found. “We really looked,” says study coauthor Anthony Dick, a developmental cognitive neuroscientist at Florida International University in Miami. “We didn’t find anything.” © Society for Science & the Public 2000 - 2019.

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26265 - Posted: 05.24.2019

By Sayuri Hayakawa, Viorica Marian As Emperor Akihito steps down from the Chrysanthemum Throne in Japan’s first abdication in 200 years, Naruhito officially becomes the new Emperor on May 1, 2019, ushering in a new era called Reiwa (令和; “harmony”). Japan’s tradition of naming eras reflects the ancient belief in the divine spirit of language. Kotodama (言霊; “word spirit”) is the idea that words have an almost magical power to alter physical reality. Through its pervasive impact on society, including its influence on superstitions and social etiquette, traditional poetry and modern pop songs, the word kotodama has, in a way, provided proof of its own concept. For centuries, many cultures have believed in the spiritual force of language. Over time, these ideas have extended from the realm of magic and mythology to become a topic of scientific investigation—ultimately leading to the discovery that language can indeed affect the physical world, for example, by altering our physiology. Our bodies evolve to adapt to our environments, not only over millions of years but also over the days and years of an individual’s life. For instance, off the coast of Thailand, there are children who can “see like dolphins.” Cultural and environmental factors have shaped how these sea nomads of the Moken tribe conduct their daily lives, allowing them to adjust their pupils underwater in a way that most of us cannot. © 2019 Scientific American

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26190 - Posted: 05.01.2019

By Benedict Carey “In my head, I churn over every sentence ten times, delete a word, add an adjective, and learn my text by heart, paragraph by paragraph,” wrote Jean-Dominique Bauby in his memoir, “The Diving Bell and the Butterfly.” In the book, Mr. Bauby, a journalist and editor, recalled his life before and after a paralyzing stroke that left him virtually unable to move a muscle; he tapped out the book letter by letter, by blinking an eyelid. Thousands of people are reduced to similarly painstaking means of communication as a result of injuries suffered in accidents or combat, of strokes, or of neurodegenerative disorders such as amyotrophic lateral sclerosis, or A.L.S., that disable the ability to speak. Now, scientists are reporting that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters, which a computer synthesized into speech.) “It’s formidable work, and it moves us up another level toward restoring speech” by decoding brain signals, said Dr. Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Fla., who was not a member of the research group. Researchers have developed other virtual speech aids. Those work by decoding the brain signals responsible for recognizing letters and words, the verbal representations of speech. But those approaches lack the speed and fluidity of natural speaking. The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence. © 2019 The New York Times Company
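
The system described here is a two-stage decoder: one model maps neural recordings to the movements of the vocal tract, and a second maps those movements to sound. The sketch below is only a schematic stand-in for that idea, using ridge regression in place of the recurrent neural networks the researchers trained; the array names, feature dimensions, and random data are all hypothetical.

```python
# Schematic two-stage decoder: neural features -> articulatory kinematics -> acoustics.
# A stand-in for the study's recurrent networks; the data here are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_frames = 2000
ecog = rng.standard_normal((n_frames, 256))         # hypothetical high-gamma features
articulation = rng.standard_normal((n_frames, 33))  # hypothetical vocal-tract traces
acoustics = rng.standard_normal((n_frames, 32))     # hypothetical spectral features

# Stage 1: decode articulatory movements from neural activity.
stage1 = Ridge(alpha=1.0).fit(ecog, articulation)

# Stage 2: map articulatory movements to acoustic features, which a vocoder
# would then turn into audible speech.
stage2 = Ridge(alpha=1.0).fit(articulation, acoustics)

decoded_acoustics = stage2.predict(stage1.predict(ecog))
print(decoded_acoustics.shape)  # (2000, 32)
```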

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 26174 - Posted: 04.25.2019

By Sandra G. Boodman “She never cried loudly enough to bother us,” recalled Natalia Weil of her daughter, who was born in 2011. Although Vivienne babbled energetically in her early months, her vocalizing diminished around the time of her first birthday. So did the quality of her voice, which dwindled from normal to raspy to little more than a whisper. Vivienne also was a late talker: She didn’t begin speaking until she was 2. Her suburban Maryland pediatrician initially suspected that a respiratory infection was to blame for the toddler’s hoarseness, and counseled patience. But after the problem persisted, the doctor diagnosed acid reflux and prescribed a drug to treat the voice problems reflux can cause. But Vivienne’s problem turned out to be far more serious — and unusual — than excess stomach acid. The day she learned what was wrong ranks among the worst of Weil’s life. “I had never heard of it,” said Weil, now 33, of her daughter’s diagnosis. “Most people haven’t.” The chronic illness seriously damaged Vivienne Weil’s voice. The 8-year-old has blossomed recently after a new treatment restored it. Her mother says she is eagerly making new friends and has become “a happy, babbly little girl.” At first, Natalia, a statistician, and her husband, Jason, a photographer, were reassured by the pediatrician, who blamed a respiratory infection for their daughter’s voice problem. Her explanation sounded logical: Toddlers get an average of seven or eight colds annually. © 1996-2019 The Washington Post

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26075 - Posted: 03.25.2019

Bruce Bower Humankind’s gift of gab is not set in stone, and farming could help to explain why. Over the last 6,000 years or so, farming societies increasingly have substituted processed dairy and grain products for tougher-to-chew game meat and wild plants common in hunter-gatherer diets. Switching to those diets of softer, processed foods altered people’s jaw structure over time, rendering certain sounds like “f” and “v” easier to utter, and changing languages worldwide, scientists contend. People who regularly chew tough foods such as game meat experience a jaw shift that removes a slight overbite from childhood. But individuals who grow up eating softer foods retain that overbite into adulthood, say comparative linguist Damián Blasi of the University of Zurich and his colleagues. Computer simulations suggest that adults with an overbite are better able to produce certain sounds that require touching the lower lip to the upper teeth, the researchers report in the March 15 Science. Linguists classify those speech sounds, found in about half of the world’s languages, as labiodentals. And when Blasi and his team reconstructed language change over time among Indo-European tongues (SN: 11/25/17, p. 16), currently spoken from Iceland to India, the researchers found that the likelihood of using labiodentals in those languages rose substantially over the past 6,000 to 7,000 years. That was especially true when foods such as milled grains and dairy products started appearing (SN: 2/1/03, p. 67). “Labiodental sounds emerged recently in our species, and appear more frequently in populations with long traditions of eating soft foods,” Blasi said at a March 12 telephone news conference. |© Society for Science & the Public 2000 - 2019

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 26037 - Posted: 03.15.2019

Jules Howard It’s a bit garbled but you can definitely hear it in the mobile phone footage. As the chimpanzees arrange their branches into a makeshift ladder and one of them makes its daring escape from its Belfast zoo enclosure, some words ring out loud and clear: “Don’t escape, you bad little gorilla!” a child onlooker shouts from the crowd. And … POP … with that a tiny explosion goes off inside my head. Something knocks me back about this sentence. It’s a “kids-say-the-funniest things” kind of sentence, and in any other situation I’d offer a warm smile and a chuckle of approval. But not this time. This statement has brought out the pedant in me. At this point, you may wonder if I’m capable of fleshing out a 700-word article chastising a toddler for mistakenly referring to a chimpanzee as a gorilla. The good news is that, though I am more than capable of such a callous feat, I don’t intend to write about this child’s naive zoological error. In fact, this piece isn’t really about the (gorgeous, I’m sure) child. It’s about us. You and me, and the words we use. So let’s repeat it. That sentence, I mean. “Don’t escape, you bad little gorilla!” the child shouted. The words I’d like to focus on in this sentence are the words “you” and “bad”. The words “you” and “bad” are nice examples of a simple law of nearly all human languages. They are examples of Zipf’s law of abbreviation, where more commonly used words in a language tend to be shorter. It’s thought that this form of information-shortening allows the transmission of more complex information in a shorter amount of time, and it’s why one in four words you and I write or say is likely to be something of the “you, me, us, the, to” variety. © 2019 Guardian News & Media Limited
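
Zipf's law of abbreviation is easy to check on any text you have to hand: count how often each word occurs and compare frequency with word length. The sketch below does that for a plain-text file; "corpus.txt" is a placeholder name, and any sizable English text will do.

```python
# Quick check of Zipf's law of abbreviation: frequent words tend to be shorter.
# "corpus.txt" is a placeholder; point it at any large plain-text file.
import re
from collections import Counter

with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)
ranked = [w for w, _ in counts.most_common()]

frequent = ranked[:100]   # the 100 most common words
rest = ranked[100:]       # everything else

def mean_length(ws):
    return sum(len(w) for w in ws) / len(ws)

print(f"mean length, 100 most frequent words: {mean_length(frequent):.2f}")
print(f"mean length, remaining words:         {mean_length(rest):.2f}")
# On typical English text the first number comes out well below the second.
```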

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 25971 - Posted: 02.18.2019

Hannah Devlin Science correspondent People who stutter are being given electrical brain stimulation in a clinical trial aimed at improving fluency without the need for gruelling speech training. If shown to be effective, the technique – which involves passing an almost imperceptible current through the brain – could be routinely offered by speech therapists. “Stuttering can have serious effects on individuals in terms of their choice of career, what they can get out of education, their earning potential and personal life,” said Prof Kate Watkins, the trial’s principal investigator and a neuroscientist at the University of Oxford. About one in 20 young children go through a phase of stuttering, but most grow out of it. It is estimated that stuttering affects about one in 100 adults, with men about four times more likely to stutter than women. In the film The King’s Speech, a speech therapist uses a barrage of techniques to help King George VI, played by Colin Firth, to overcome his stutter, including breathing exercises and speaking without hearing his own voice. The royal client also learns that he can sing without stuttering, a common occurrence in people with the impediment. Speech therapy has advanced since the 1930s, but some of the most effective programmes for improving fluency still require intensive training and involve lengthy periods of using unnatural-sounding speech. © 2019 Guardian News and Media Limited

Related chapters from BN: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Language and Lateralization
Link ID: 25901 - Posted: 01.26.2019

By Catherine L. Caldwell-Harris, Ph.D. Does the language you speak influence how you think? This is the question behind the famous linguistic relativity hypothesis, that the grammar or vocabulary of a language imposes on its speakers a particular way of thinking about the world. The strongest form of the hypothesis is that language determines thought. This version has been rejected by most scholars. A weak form is now widely accepted: if one language has a specific vocabulary item for a concept but another language does not, then speaking about the concept may happen more frequently or more easily in the first. For example, if someone explained to you, an English speaker, the meaning of the German term Schadenfreude, you could recognize the concept, but you may not have used the concept as regularly as a comparable German speaker. Scholars are now interested in whether having a vocabulary item for a concept influences thought in domains far from language, such as visual perception. Consider the case of the "Russian blues." While English has a single word for blue, Russian has two words, goluboy for light blue and siniy for dark blue. These are considered "basic level" terms, like green and purple, since no adjective is needed to distinguish them. Lera Boroditsky and her colleagues displayed two shades of blue on a computer screen and asked Russian speakers to determine, as quickly as possible, whether the two blue colors were different from each other or the same as each other. The fastest discriminations were when the displayed colors were goluboy and siniy, rather than two shades of goluboy or two shades of siniy. The reaction time advantage for lexically distinct blue colors was strongest when the blue hues were perceptually similar.

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 25869 - Posted: 01.16.2019