Links for Keyword: Language

Follow us on Facebook and Twitter, or subscribe to our mailing list, to receive news updates. Learn more.


Links 1 - 20 of 632

Jules Howard It’s a bit garbled but you can definitely hear it in the mobile phone footage. As the chimpanzees arrange their branches into a makeshift ladder and one of them makes its daring escape from its Belfast zoo enclosure, some words ring out loud and clear: “Don’t escape, you bad little gorilla!” a child onlooker shouts from the crowd. And … POP … with that a tiny explosion goes off inside my head. Something knocks me back about this sentence. It’s a “kids-say-the-funniest things” kind of sentence, and in any other situation I’d offer a warm smile and a chuckle of approval. But not this time. This statement has brought out the pedant in me. At this point, you may wonder if I’m capable of fleshing out a 700-word article chastising a toddler for mistakenly referring to a chimpanzee as a gorilla. The good news is that, though I am more than capable of such a callous feat, I don’t intend to write about this child’s naive zoological error. In fact, this piece isn’t really about the (gorgeous, I’m sure) child. It’s about us. You and me, and the words we use. So let’s repeat it. That sentence, I mean. “Don’t escape, you bad little gorilla!” the child shouted. The words I’d like to focus on in this sentence are the words “you” and “bad”. The words “you” and “bad” are nice examples of a simple law of nearly all human languages. They are examples of Zipf’s law of abbreviation, where more commonly used words in a language tend to be shorter. It’s thought that this form of information-shortening allows the transmission of more complex information in a shorter amount of time, and it’s why one in four words you and I write or say is likely to be something of the “you, me, us, the, to” variety. © 2019 Guardian News & Media Limited
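Zipf's law of abbreviation, described above, is easy to demonstrate on any word-frequency table: the more often a word is used, the shorter it tends to be. Here is a minimal Python sketch of that check; the word counts are hypothetical, chosen only to illustrate the trend, not data from the article.

```python
# Toy word-frequency table (hypothetical counts, for illustration only).
toy_counts = {
    "the": 500, "to": 410, "you": 380, "me": 300, "us": 250,
    "bad": 120, "little": 60,
    "gorilla": 4, "chimpanzee": 3, "enclosure": 2, "makeshift": 1,
}

# Rank words by frequency, split into a frequent half and a rare half,
# and compare mean word length (in characters) between the halves.
ranked = sorted(toy_counts, key=toy_counts.get, reverse=True)
half = len(ranked) // 2
frequent, rare = ranked[:half], ranked[half:]

def mean_length(words):
    return sum(len(w) for w in words) / len(words)

# Zipf's law of abbreviation predicts the frequent half is shorter on average.
print(mean_length(frequent), mean_length(rare))
```

On this toy table the frequent half averages under 3 characters per word while the rare half averages over 7, matching the predicted direction of the effect.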

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25971 - Posted: 02.18.2019

Hannah Devlin Science correspondent People who stutter are being given electrical brain stimulation in a clinical trial aimed at improving fluency without the need for gruelling speech training. If shown to be effective, the technique – which involves passing an almost imperceptible current through the brain – could be routinely offered by speech therapists. “Stuttering can have serious effects on individuals in terms of their choice of career, what they can get out of education, their earning potential and personal life,” said Prof Kate Watkins, the trial’s principal investigator and a neuroscientist at the University of Oxford. About one in 20 young children go through a phase of stuttering, but most grow out of it. It is estimated that stuttering affects about one in 100 adults, with men about four times more likely to stutter than women. In the film The King’s Speech, a speech therapist uses a barrage of techniques to help King George VI, played by Colin Firth, to overcome his stutter, including breathing exercises and speaking without hearing his own voice. The royal client also learns that he can sing without stuttering, a common occurrence in people with the impediment. Speech therapy has advanced since the 1930s, but some of the most effective programmes for improving fluency still require intensive training and involve lengthy periods of using unnatural-sounding speech. © 2019 Guardian News and Media Limited

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25901 - Posted: 01.26.2019

By Catherine L. Caldwell-Harris, Ph.D. Does the language you speak influence how you think? This is the question behind the famous linguistic relativity hypothesis, that the grammar or vocabulary of a language imposes on its speakers a particular way of thinking about the world. The strongest form of the hypothesis is that language determines thought. This version has been rejected by most scholars. A weaker form is now widely accepted: if one language has a specific vocabulary item for a concept but another language does not, then speaking about the concept may happen more frequently or more easily. For example, if someone explained to you, an English speaker, the meaning of the German term Schadenfreude, you could recognize the concept, but you may not have used the concept as regularly as a comparable German speaker. Scholars are now interested in whether having a vocabulary item for a concept influences thought in domains far from language, such as visual perception. Consider the case of the "Russian blues." While English has a single word for blue, Russian has two words, goluboy for light blue and siniy for dark blue. These are considered "basic level" terms, like green and purple, since no adjective is needed to distinguish them. Lera Boroditsky and her colleagues displayed two shades of blue on a computer screen and asked Russian speakers to determine, as quickly as possible, whether the two blue colors were different from each other or the same as each other. The fastest discriminations were when the displayed colors were goluboy and siniy, rather than two shades of goluboy or two shades of siniy. The reaction time advantage for lexically distinct blue colors was strongest when the blue hues were perceptually similar.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 14: Attention and Consciousness
Link ID: 25869 - Posted: 01.16.2019

By Kelly Servick For many people who are paralyzed and unable to speak, signals of what they'd like to say hide in their brains. No one has been able to decipher those signals directly. But three research teams recently made progress in turning data from electrodes surgically placed on the brain into computer-generated speech. Using computational models known as neural networks, they reconstructed words and sentences that were, in some cases, intelligible to human listeners. None of the efforts, described in papers in recent months on the preprint server bioRxiv, managed to re-create speech that people had merely imagined. Instead, the researchers monitored parts of the brain as people either read aloud, silently mouthed speech, or listened to recordings. But showing the reconstructed speech is understandable is "definitely exciting," says Stephanie Martin, a neural engineer at the University of Geneva in Switzerland who was not involved in the new projects. People who have lost the ability to speak after a stroke or disease can use their eyes or make other small movements to control a cursor or select on-screen letters. (Cosmologist Stephen Hawking tensed his cheek to trigger a switch mounted on his glasses.) But if a brain-computer interface could re-create their speech directly, they might regain much more: control over tone and inflection, for example, or the ability to interject in a fast-moving conversation. The hurdles are high. "We are trying to work out the pattern of … neurons that turn on and off at different time points, and infer the speech sound," says Nima Mesgarani, a computer scientist at Columbia University. "The mapping from one to the other is not very straightforward." How these signals translate to speech sounds varies from person to person, so computer models must be "trained" on each individual. And the models do best with extremely precise data, which requires opening the skull. © 2018 American Association for the Advancement of Science

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 25837 - Posted: 01.03.2019

Emily Hanford Jack Silva didn't know anything about how children learn to read. What he did know is that a lot of students in his district were struggling. Silva is the chief academic officer for Bethlehem, Pa., public schools. In 2015, only 56 percent of third-graders were scoring proficient on the state reading test. That year, he set out to do something about that. "It was really looking yourself in the mirror and saying, 'Which 4 in 10 students don't deserve to learn to read?' " he recalls. Bethlehem is not an outlier. Across the country, millions of kids are struggling. According to the National Assessment of Educational Progress, 32 percent of fourth-graders and 24 percent of eighth-graders aren't reading at a basic level. Fewer than 40 percent are proficient or advanced. One excuse that educators have long offered to explain poor reading performance is poverty. In Bethlehem, a small city in Eastern Pennsylvania that was once a booming steel town, there are plenty of poor families. But there are fancy homes in Bethlehem, too, and when Silva examined the reading scores he saw that many students at the wealthier schools weren't reading very well either. Silva didn't know what to do. To begin with, he didn't know how students in his district were being taught to read. So, he assigned his new director of literacy, Kim Harper, to find out. Harper attended a professional-development day at one of the district's lowest-performing elementary schools. The teachers were talking about how students should attack words in a story. When a child came to a word she didn't know, the teacher would tell her to look at the picture and guess. © 2019 npr

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 25835 - Posted: 01.03.2019

By Ramin Skibba Even when you’re fluent in two languages, it can be a challenge to switch back and forth smoothly between them. It’s common to mangle a split verb in Spanish, use the wrong preposition in English or lose sight of the connection between the beginning and end of a long German sentence. So, does mastering a second language hone our multitasking skills or merely muddle us up? This debate has been pitting linguists and psychologists against one another since the 1920s, when many experts thought that bilingual children were fated to suffer cognitive impairments later in life. But the science has marched on. Psycholinguist Mark Antoniou of Western Sydney University in Australia argues that bilingualism — as he defines it, using at least two languages in your daily life — may benefit our brains, especially as we age. In a recent article, he addressed how best to teach languages to children and laid out evidence that multiple-language use on a regular basis may help delay the onset of Alzheimer’s disease. This conversation has been edited for length and clarity. Q: What are the benefits of bilingualism? A: The first main advantage involves what’s loosely referred to as executive function. This describes skills that allow you to control, direct and manage your attention, as well as your ability to plan. It also helps you ignore irrelevant information and focus on what’s important. Because a bilingual person has mastery of two languages, and the languages are activated automatically and subconsciously, the person is constantly managing the interference of the languages so that she or he doesn’t say the wrong word in the wrong language at the wrong time. The brain areas responsible for that are also used when you’re trying to complete a task while there are distractions. The task could have nothing to do with language; it could be trying to listen to something in a noisy environment or doing some visual task. 
The muscle memory developed from using two languages also can apply to different skills. © 1996-2018 The Washington Post

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 25767 - Posted: 12.10.2018

By Stephen L. Macknik Sensory information flowing into our brains is inherently ambiguous. We perceive 3D despite having only 2D images on our retinas. It’s an illusion. A sunburn on our face can feel weirdly cool. Illusion. A little perfume smells good but too much is obnoxious. Also an illusion. The brain expends a great deal of effort to disambiguate the meaning of each incoming signal—often using context as a clue—but the neural mechanisms of these abilities remain mysterious. Neuroscientists are a little closer to understanding how to study these mechanisms, thanks to a new study by Kevin Ortego, Michael Pitts, and Enriqueta Canseco-Gonzalez from Pitts's lab at Reed College, presented at the 2018 Society for Neuroscience meeting, on the brain's responses to both visual and language illusions. Illusions are experiences in which the physical reality is different from our perception or expectation. Ambiguous stimuli are important tools to science because the physical reality can legitimately be interpreted in more than one way. Take the classic rabbit-duck illusion, published by the Fliegende Blätter magazine, in Munich, at the end of the 19th century, in which the image can be seen as either a duck or a rabbit. Bistable illusions like these can flip back and forth between competing interpretations, but one cannot see both percepts at the same time. Recent examples of ambiguous illusions show that numerous interpretations are possible. The first-place winner of this year's Best Illusion of the Year Contest, created by Kokichi Sugihara, shows three different ways of perceiving the same object, depending on your specific vantage point. © 2018 Scientific American,

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25744 - Posted: 12.03.2018

By Sandra E. Garcia For years, parents of a Texas boy believed he was mostly nonverbal because of a brain aneurysm he had when he was 10 days old. The boy, Mason Motz, 6, of Katy, Tex., started going to speech therapy when he was 1. In addition to his difficulties speaking, he was given a diagnosis of Sotos syndrome, a disorder that can cause learning disabilities or delayed development, according to the National Institutes of Health. His parents, Dalan and Meredith Motz, became used to how their son communicated. “He could pronounce the beginning of the word but would utter the end of the word,” Ms. Motz said in an interview. “My husband and I were the only ones that could understand him.” That all changed in April 2017, when Dr. Amy Luedemann-Lazar, a pediatric dentist, was performing unrelated procedures on Mason’s teeth. She noticed that his lingual frenulum, the band of tissue under his tongue, was shorter than is typical and was attached close to the tip of his tongue, keeping him from moving it freely. Dr. Luedemann-Lazar ran out to the waiting room to ask the Motzes if she could untie Mason’s tongue using a laser. After a quick Google search, the parents gave her permission to do so. Dr. Luedemann-Lazar completed the procedure in 10 seconds, she said. After his surgery, Mason went home. He had not eaten all day. Ms. Motz heard him say: “I’m hungry. I’m thirsty. Can we watch a movie?” “We’re sitting here thinking, ‘Did he just say that?’” Ms. Motz said. “It sounded like words.” © 2018 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25510 - Posted: 10.01.2018

By Victoria Gill Science correspondent, BBC News Our primate cousins have surprised and impressed scientists in recent years, with revelations about monkeys' tool-using abilities and chimps' development of complex sign language. But researchers are still probing the question: why are we humans the only apes that can talk? That puzzle has now led to an insight into how different non-human primates' brains are "wired" for vocal ability. A new study has compared different primate species' brains. It revealed that primates with wider "vocal repertoires" had more of their brain dedicated to controlling their vocal apparatus. That suggests that our own speaking skills may have evolved as our brains gradually rewired to control that apparatus, rather than purely because we're smarter than non-human apes. Humans and other primates have very similar vocal anatomy - in terms of their tongues and larynx. That's the physical machinery in the throat which allows us to turn air into sound. So, as lead researcher Dr Jacob Dunn from Anglia Ruskin University in Cambridge explained, it remains a mystery that only human primates can actually talk. "That's likely due to differences in the brain," Dr Dunn told BBC News, "but there haven't been comparative studies across species." So how do our primate brains differ? That comparison is exactly what Dr Dunn and his colleague Prof Jeroen Smaers set out to do. They ranked 34 different primate species based on their vocal abilities - the number of distinct calls they are known to make in the wild. They then examined the brain of each species, using information from existing, preserved brains that had been kept for research. © 2018 BBC

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25314 - Posted: 08.10.2018

Tina Hesman Saey Humans’ gift of gab probably wasn’t the evolutionary boon that scientists once thought. There’s no evidence that FOXP2, sometimes called “the language gene,” gave humans such a big evolutionary advantage that it was quickly adopted across the species, what scientists call a selective sweep. That finding, reported online August 2 in Cell, follows years of debate about the role of FOXP2 in human evolution. In 2002, the gene became famous when researchers thought they had found evidence that a tweak in FOXP2 spread quickly to all humans — and only humans — about 200,000 years ago. That tweak swapped two amino acids in the human version of the gene for ones different from those in other animals’ versions of the gene. FOXP2 is involved in vocal learning in songbirds, and people with mutations in the gene have speech and language problems. Many researchers initially thought that the amino acid swap was what enabled humans to speak. Speech would have given humans a leg up on competition from Neandertals and other ancient hominids. That view helped make FOXP2 a textbook example of selective sweeps. Some researchers even suggested that FOXP2 was the gene that defines humans, until it became clear that the gene did not allow humans to settle the world and replace other hominids, says archaeogeneticist Johannes Krause at the Max Planck Institute for the Science of Human History in Jena, Germany, who was not involved in the study. “It was not the one gene to rule them all.” |© Society for Science & the Public 2000 - 2018

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25293 - Posted: 08.04.2018

Matthew Warren The evolution of human language was once thought to have hinged on changes to a single gene that were so beneficial that they raced through ancient human populations. But an analysis now suggests that this gene, FOXP2, did not undergo changes in Homo sapiens’ recent history after all — and that previous findings might simply have been false signals. “The situation’s a lot more complicated than the very clean story that has been making it into textbooks all this time,” says Elizabeth Atkinson, a population geneticist at the Broad Institute of Harvard and MIT in Cambridge, Massachusetts, and a co-author of the paper, which was published on 2 August in Cell. Originally discovered in a family who had a history of profound speech and language disorders, FOXP2 was the first gene found to be involved in language production. Later research touted its importance to the evolution of human language. A key 2002 paper found that humans carry two mutations to FOXP2 not found in any other primates. When the researchers looked at genetic variation surrounding these mutations, they found the signature of a ‘selective sweep’ — in which a beneficial mutation quickly becomes common across a population. This change to FOXP2 seemed to have happened in the past 200,000 years, the team reported in Nature. The paper has been cited hundreds of times in the scientific literature. © 2018 Springer Nature Limited.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25292 - Posted: 08.04.2018

By Michael Erard , Catherine Matacic If you want a no-fuss, no-muss pet, consider the Bengalese finch. Dubbed the society finch for its friendliness, it is often used by breeders to foster unrelated chicks. But put the piebald songbird next to its wild ancestor, the white-rumped munia, and you can both see and hear the differences: The aggressive munia tends to be darker and whistles a scratchy, off-kilter tune, whereas the pet finch warbles a melody so complex that even nonmusicians may wonder how this caged bird learned to sing. All this makes the domesticated and wild birds a perfect natural experiment to help explore an upstart proposal about human evolution: that the building blocks of language are a byproduct of brain alterations that arose when natural selection favored cooperation among early humans. According to this hypothesis, skills such as learning complex calls, combining vocalizations, and simply knowing when another creature wants to communicate all came about as a consequence of pro-social traits like kindness. If so, domesticated animals, which are bred to be good-natured, might exhibit such communication skills too. The idea is rooted in a much older one: that humans tamed themselves. This self-domestication hypothesis, which got its start with Charles Darwin, says that when early humans started to prefer cooperative friends and mates to aggressive ones, they essentially domesticated themselves. Along with tameness came evolutionary changes seen in other domesticated mammals—smoother brows, shorter faces, and more feminized features—thanks in part to lower levels of circulating androgens (such as testosterone) that tend to promote aggression. © 2018 American Association for the Advancement of Science.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 8: Hormones and Sex
Link ID: 25289 - Posted: 08.03.2018

Sara Kiley Watson Read these sentences aloud: I never said she stole my money. I never said she stole my money. I never said she stole my money. Emphasizing any one of the words over the others makes the string of words mean something completely different. "Pitch change" — the vocal quality we use to emphasize words — is a crucial part of human communication, whether spoken or sung. Recent research from Dr. Edward Chang's lab at the University of California, San Francisco's epilepsy center has narrowed down which part of the brain controls our ability to regulate the pitch of our voices when we speak or sing — the part that enables us to differentiate between the utterances "Let's eat, Grandma" and "Let's eat Grandma." Scientists already knew, more or less, what parts of the brain are engaged in speech, says Chang, a professor of neurological surgery. What the new research has allowed, he says, is a better understanding of the neural code of pitch and its variations — how information about pitch is represented in the brain. Chang's team was able to study these neural codes with the help of a particular group of study volunteers: epilepsy patients. Chang treats people whose seizures can't be medically controlled; these patients need surgery to stop the misfiring neurons. He puts electrodes in each patient's brain to help guide the scalpel during their surgery. © 2018 npr

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25265 - Posted: 07.28.2018

by Erin Blakemore What if you wanted to speak but couldn’t string together recognizable words? What if someone spoke to you but you couldn’t understand what they were saying? These situations aren’t hypothetical for the more than 1 million Americans with aphasia, which affects the ability to understand and speak with others. Aphasia occurs in people who have had strokes, traumatic brain injuries or other brain damage. Some victims have a scrambled vocabulary or are unable to express themselves; others find it hard to make sense of the words they read or hear. The disorder doesn’t reduce intelligence, only a person’s ability to communicate. And although there is no definitive cure, it can be treated. Many people make significant recoveries from aphasia after a stroke, for example. July is Aphasia Awareness Month, a fine time to learn more about the disorder. The TED-Ed series offers a lesson on aphasia, complete with an engaging video that describes the condition, its causes and its treatment, along with a quiz, discussion questions and other resources. Created by Susan Wortman-Jutt, a speech-language pathologist who treats aphasia, it’s a good introduction to the disorder and how damage to the brain’s language centers can hamper an individual’s ability to communicate. Another resource is the National Aphasia Association. Its website, aphasia.org, contains information about the disorder and links to support and treatment options. Aphasia can have lasting effects, but there is hope for people whose brains are injured. © 1996-2018 The Washington Post

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25180 - Posted: 07.07.2018

Philip Lieberman In the 1960s, researchers at Yale University’s Haskins Laboratories attempted to produce a machine that would read printed text aloud to blind people. Alvin Liberman and his colleagues figured the solution was to isolate the “phonemes,” the ostensible beads-on-a-string equivalent to movable type that linguists thought existed in the acoustic speech signal. Linguists had assumed (and some still do) that phonemes were roughly equivalent to the letters of the alphabet and that they could be recombined to form different words. However, when the Haskins group snipped segments from tape recordings of words or sentences spoken by radio announcers or trained phoneticians, and tried to link them together to form new words, the researchers found that the results were incomprehensible. That’s because, as most speech scientists agree, there is no such thing as pure phonemes (though some linguists still cling to the idea). Discrete phonemes do not exist as such in the speech signal, and instead are always blended together in words. Even “stop consonants,” such as [b], [p], [t], and [g], don’t exist as isolated entities; it is impossible to utter a stop consonant without also producing a vowel before or after it. As such, the consonant [t] in the spoken word tea, for example, sounds quite different from that in the word to. To produce the vowel sound in to, the speakers’ lips are protruded and narrowed, while they are retracted and open for the vowel sound in tea, yielding different acoustic representations of the initial consonant. Moreover, when the Haskins researchers counted the number of putative phonemes that would be transmitted each second during normal conversations, the rate exceeded that which can be interpreted by the human auditory system—the synthesized phrases would have become an incomprehensible buzz. © 1986 - 2018 The Scientist.

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25176 - Posted: 07.06.2018

Geoffrey Pullum One area outshines all others in provoking crazy talk about language in the media, and that is the idea of language acquisition in nonhuman species. On June 19 came the sad news of the death of Koko, the western lowland gorilla cared for by Francine “Penny” Patterson at a sanctuary in the Santa Cruz Mountains. Many obituaries appeared, and the press indulged as never before in sentimental nonsense about talking with the animals. Credulous repetition of Koko’s mythical prowess in sign language was everywhere. Jeffrey Kluger’s essay in Time was unusually extreme in its blend of emotion, illogicality, wishful thinking, and outright falsehood. Koko, he tells us, once made a sequence of hand signs that Patterson interpreted as “you key there me cookie”; and Kluger calls it “impressive … for the clarity of its meaning.” Would you call it clear and meaningful if it were uttered by an adult human? As always with the most salient cases of purported ape signing, Koko was flailing around producing signs at random in a purely situation-bound bid to obtain food from her trainer, who was in control of a locked treat cabinet. The fragmentary and anecdotal evidence about Koko’s much-prompted and much-rewarded sign usage was never sufficient to show that the gorilla even understood the meanings of individual signs — that key denotes a device intended to open locks, that the word cookie is not appropriately applied to muffins, and so on. © 2018 The Chronicle of Higher Education

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25155 - Posted: 06.29.2018

Davide Castelvecchi The 2016 film Arrival starred Amy Adams as a linguistics professor who was drafted in to communicate with aliens. Sheri Wells-Jensen is fascinated by languages no one has ever heard — those that might be spoken by aliens. Last week, the linguist co-hosted a day-long workshop on this field of research, which sits at the boundary of astrobiology and linguistics. The meeting, at a conference of the US National Space Society in Los Angeles, California, was organized by Messaging Extraterrestrial Intelligence (METI). METI, which is funded by private donors, organizes the transmission of messages to other star systems. The effort is complementary to SETI (Search for Extraterrestrial Intelligence), which aims to detect messages from alien civilizations. METI targets star systems relatively close to the Sun that are known to host Earth-sized planets in their ‘habitable zone’ — where the conditions are right for liquid water to exist — using large radar dishes. Last year, it directed a radio message, which attempted to explain musical language, towards a nearby exoplanet system. The message started from basic arithmetic (encoded in binary as two radio wavelengths) and introduced increasingly complex concepts such as duration and frequency. Nature spoke to Wells-Jensen, who is a member of METI’s board of directors, about last week’s meeting and the field of alien linguistics. Was this the first workshop of this kind ever? We’ve done two workshops on communicating with aliens before, but this is the first one specifically about linguistics. If we do make contact, we should try and figure out what would be a reasonable first step in trying to communicate. Right now, we are trying to put our heads together and figure out what’s likely and what could be done after that. © 2018 Macmillan Publishers Limited,

Related chapters from BN8e: Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 25053 - Posted: 06.04.2018

Anya Kamenetz

"I want The Three Bears!" These days parents, caregivers and teachers have lots of options when it comes to fulfilling that request. You can read a picture book, put on a cartoon, play an audiobook, or even ask Alexa.

A newly published study gives some insight into what may be happening inside young children's brains in each of those situations. And, says lead author Dr. John Hutton, there is an apparent "Goldilocks effect" — some kinds of storytelling may be "too cold" for children, while others are "too hot." And, of course, some are "just right."

Hutton is a researcher and pediatrician at Cincinnati Children's Hospital with a special interest in "emergent literacy" — the process of learning to read. For the study, 27 children around age 4 went into an fMRI machine. They were presented with stories in three conditions: audio only; the illustrated pages of a storybook with an audio voiceover; and an animated cartoon. All three versions came from the website of Canadian author Robert Munsch.

While the children paid attention to the stories, the fMRI machine scanned for activation within certain brain networks, and for connectivity between those networks. "We went into it with an idea in mind of what brain networks were likely to be influenced by the story," Hutton explains. One was language. One was visual perception. The third was visual imagery. The fourth was the default mode network, which Hutton calls "the seat of the soul, internal reflection — how something matters to you." The default mode network includes regions of the brain that appear more active when someone is not actively concentrating on a designated mental task involving the outside world.

In terms of Hutton's "Goldilocks effect," here's what the researchers found:

© 2018 npr

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 25016 - Posted: 05.24.2018

Bruce Bower

Language learning isn’t kid stuff anymore. In fact, it never was, a provocative new study concludes. A crucial period for learning the rules and structure of a language lasts up to around age 17 or 18, say psychologist Joshua Hartshorne of MIT and colleagues.

Previous research had suggested that grammar-learning ability flourished in early childhood before hitting a dead end around age 5. If that were true, people who move to another country and try to learn a second language after the first few years of life should have a hard time achieving the fluency of native speakers. But that’s not so, Hartshorne’s team reports online May 2 in Cognition.

In an online sample of unprecedented size, people who started learning English as a second language in an English-speaking country by age 10 to 12 ultimately mastered the new tongue as well as folks who had learned English and another language simultaneously from birth, the researchers say. Both groups, however, fell somewhat short of the grammatical fluency displayed by English-only speakers. After ages 10 to 12, new-to-English learners reached lower levels of fluency than those who started learning English at younger ages, because time ran out when their grammar-absorbing ability plummeted starting around age 17.

In another surprise, modest amounts of English learning among native and second-language speakers continued until around age 30, the investigators found, although most learning happened in the first 10 to 20 years of life.

© Society for Science & the Public 2000 - 2018

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 24967 - Posted: 05.12.2018

By Dana G. Smith

The older you get, the more difficult it is to learn to speak French like a Parisian. But no one knows exactly what the cutoff point is—at what age it becomes harder, for instance, to pick up noun–verb agreements in a new language. In one of the largest linguistics studies ever conducted—a viral internet survey that drew two thirds of a million respondents—researchers from three Boston-based universities showed children are proficient at learning a second language up until the age of 18, roughly 10 years later than earlier estimates. But the study also showed that it is best to start by age 10 if you want to achieve the grammatical fluency of a native speaker.

To parse this problem, the research team, which included psychologist Steven Pinker, collected data on a person’s current age, language proficiency and time spent studying English. The investigators calculated they needed more than half a million people to make a fair estimate of when the “critical period” for achieving the highest levels of grammatical fluency ends. So they turned to the world’s greatest experimental subject pool: the internet.

They created a short online grammar quiz called Which English? that tested noun–verb agreement, pronouns, prepositions and relative clauses, among other linguistic elements. From the responses, an algorithm predicted the tester’s native language and which dialect of English (Canadian, Irish, Australian and so on) they spoke. For example, some of the questions included phrases that a Chicagoan would deem grammatically incorrect but a Manitoban would consider perfectly acceptable English.

© 2018 Scientific American

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 13: Memory, Learning, and Development
Link ID: 24938 - Posted: 05.05.2018