Links for Keyword: Language



Links 41 - 60 of 491

by Tanya Lewis The lip-smacking vocalizations that gelada monkeys make are surprisingly similar to human speech, a new study finds. Many nonhuman primates demonstrate lip-smacking behavior, but geladas are the only ones known to make undulating sounds, known as "wobbles," at the same time. (The wobbling sounds a little like a human hum would sound if the volume were being turned on and off rapidly.) The findings show that lip-smacking could have been an important step in the evolution of human speech, researchers say. "Our finding provides support for the lip-smacking origins of speech because it shows that this evolutionary pathway is at least plausible," Thore Bergman of the University of Michigan in Ann Arbor, author of the study published today (April 8) in the journal Current Biology, said in a statement. "It demonstrates that nonhuman primates can vocalize while lip-smacking to produce speechlike sounds." Lip-smacking -- rapidly opening and closing the mouth and lips -- shares some of the features of human speech, such as rapid fluctuations in pitch and volume. Bergman first noticed the similarity while studying geladas in the remote mountains of Ethiopia. He would often hear vocalizations that sounded like human voices, but they were actually coming from the geladas, he said. He had never come across other primates that made these sounds. But then he read a 2012 study on macaques showing that facial movements during lip-smacking were very speech-like, hinting that lip-smacking might be an initial step toward human speech. © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 18014 - Posted: 04.10.2013

By Bruce Bower Babies take a critical step toward learning to speak before they can say a word or even babble. By 3 months of age, infants flexibly use three types of sounds — squeals, growls and vowel-like utterances — to express a range of emotions, from positive to neutral to negative, researchers say. Attaching sounds freely to different emotions represents a basic building block of spoken language, say psycholinguist D. Kimbrough Oller of the University of Memphis in Tennessee and his colleagues. Any word or phrase can signal any mental state, depending on context and pronunciation. Infants’ flexible manipulation of sounds to signal how they feel lays the groundwork for word learning, the scientists conclude April 1 in the Proceedings of the National Academy of Sciences. Language evolution took off once this ability emerged in human babies, Oller proposes. Ape and monkey researchers have mainly studied vocalizations that have one meaning, such as distress calls. “At this point, the conservative conclusion is that the human infant at 3 months is already vocally freer than has been demonstrated for any other primate at any age,” Oller says. Oller’s group videotaped infants playing and interacting with their parents in a lab room equipped with toys and furniture. Acoustic analyses identified nearly 7,000 utterances made by infants up to 1 year of age that qualified as laughs, cries, squeals, growls or vowel-like sounds. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17975 - Posted: 04.02.2013

by Lizzie Wade With its complex interweaving of symbols, structure, and meaning, human language stands apart from other forms of animal communication. But where did it come from? A new paper suggests that researchers look to bird songs and monkey calls to understand how human language might have evolved from simpler, preexisting abilities. One reason that human language is so unique is that it has two layers, says Shigeru Miyagawa, a linguist at the Massachusetts Institute of Technology (MIT) in Cambridge. First, there are the words we use, which Miyagawa calls the lexical structure. "Mango," "Amanda," and "eat" are all components of the lexical structure. The rules governing how we put those words together make up the second layer, which Miyagawa calls the expression structure. Take these three sentences: "Amanda eats the mango," "Eat the mango, Amanda," and "Did Amanda eat the mango?" Their lexical structure—the words they use—is essentially identical. What gives the sentences different meanings is the variation in their expression structure, or the different ways those words fit together. The more Miyagawa studied the distinction between lexical structure and expression structure, "the more I started to think, 'Gee, these two systems are really fundamentally different,' " he says. "They almost seem like two different systems that just happen to be put together," perhaps through evolution. One preliminary test of his hypothesis, Miyagawa knew, would be to show that the two systems exist separately in nature. So he started studying the many ways that animals communicate, looking for examples of lexical or expressive structures. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17861 - Posted: 03.02.2013

Regina Nuzzo Despite having brains that are still largely under construction, babies born up to three months before full term can already distinguish between spoken syllables in much the same way that adults do, an imaging study has shown [1]. Full-term babies — those born after 37 weeks' gestation — display remarkable linguistic sophistication soon after they are born: they recognize their mother’s voice [2], can tell apart two languages they’d heard before birth [3] and remember short stories read to them while in the womb [4]. But exactly how these speech-processing abilities develop has been a point of contention. “The question is: what is innate, and what is due to learning immediately after birth?” asks neuroscientist Fabrice Wallois of the University of Picardy Jules Verne in Amiens, France. To answer that, Wallois and his team needed to peek at neural processes already taking place before birth. It is tough to study fetuses, however, so they turned to their same-age peers: babies born 2–3 months premature. At that point, neurons are still migrating to their final destinations; the first connections between upper brain areas are snapping into place; and links have just been forged between the inner ear and cortex. To test these neural pathways, the researchers played soft voices to premature babies while they were asleep in their incubators a few days after birth, then monitored their brain activity using a non-invasive optical imaging technique called functional near-infrared spectroscopy. They were looking for the tell-tale signals of surprise that brains display — for example, when they suddenly hear male and female voices intermingled after hearing a long run of only female voices. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17852 - Posted: 02.26.2013

By Athena Andreadis Genes are subject to multiple layers of regulation. An early regulatory point is transcription. During this process, regulatory proteins bind to DNA regions (promoters and enhancers) that direct gene expression. These DNA/protein complexes attract the transcription apparatus, which docks next to the complex and proceeds linearly downstream, producing the heterogeneous nuclear (hn) RNA that is encoded by the gene linked to the promoter. The hnRNA is then spliced and either becomes structural/regulatory RNA or is translated into protein. Transcription factors are members of large clans that arose from ancestral genes that went through successive duplications and then diverged to fit specific niches. One such family of about fifty members is called FOX. Their DNA binding portion is shaped like a butterfly, which has given this particular motif the monikers of forkhead box or winged helix. The activities of the FOX proteins extend widely in time and region. One of the FOX family members is FOXP2, as notorious as Fox News – except for different reasons: FOXP2 has become entrenched in popular consciousness as “the language gene”. As is the case with all such folklore, there is some truth in this; but as is the case with everything in biology, reality is far more complex. FOXP2, the first gene found to “affect language” (more on this anon), was discovered in 2001 through several converging observations and techniques. The clincher was a large family (code name KE), some of whose members had severe articulation and grammatical deficits with no accompanying sensory or cognitive impairment. The inheritance is autosomal dominant: one copy of the mutated gene is sufficient to confer the trait. When the researchers definitively identified the FOXP2 gene, they found that the version of FOXP2 carried by the affected KE members has a single point mutation that alters an invariant residue in its forkhead domain, thereby influencing the protein’s binding to its DNA targets. © 2013 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 8: Hormones and Sex
Link ID: 17845 - Posted: 02.25.2013

Regina Nuzzo Say the word 'rutabaga', and you have just performed a complex dance with many body parts — lips, tongue, jaw and larynx — in a flash of time. Yet little is known about how the brain coordinates these vocal-tract movements to keep even the clumsiest of us from constantly tripping over our own tongues. A study of unprecedented detail now provides a glimpse into the neural codes that control the production of smooth speech. The results help to clarify how the brain uses muscles to organize sounds and hint at why tongue twisters are so tricky. The work is published today in Nature [1]. Most neural information about the vocal tract has come from watching people with brain damage or from non-invasive imaging methods, neither of which provides detailed data in time or space [2, 3]. A team of US researchers has now collected brain-activity data on a scale of millimetres and milliseconds. The researchers recorded brain activity in three people with epilepsy using electrodes that had been implanted in the patients' cortices as part of routine presurgical electrophysiological sessions. They then watched to see what happened when the patients articulated a series of syllables. Sophisticated multi-dimensional statistical procedures enabled the researchers to sift through the huge amounts of data and uncover how basic neural building blocks — patterns of neurons firing in different places over time — combine to form the speech sounds of American English. The patterns for consonants were quite different from those for vowels, even though both classes of sounds “use the exact same parts of the vocal tract”, says author Edward Chang, a neuroscientist at the University of California, San Francisco. © 2013 Nature Publishing Group
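
As a rough illustration of the kind of multi-dimensional analysis described above, the Python sketch below decomposes simulated electrode-by-time activity into a handful of spatiotemporal components whose weights differ across syllables. The data are random and the use of plain PCA is an assumption made for illustration; the excerpt does not specify the study's actual statistical procedure.

```python
# A hedged sketch: find a few spatiotemporal "building blocks" in
# electrode-by-time activity maps and the per-syllable weights on them.
# All data here are simulated; PCA stands in for the unspecified method.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_syllables, n_electrodes, n_timepoints = 12, 64, 50

# Each "syllable" is an electrodes x time activity map, flattened to one vector.
data = rng.normal(size=(n_syllables, n_electrodes * n_timepoints))

pca = PCA(n_components=5)
weights = pca.fit_transform(data)      # how strongly each block is used per syllable
blocks = pca.components_.reshape(5, n_electrodes, n_timepoints)

print(weights.shape)                   # (12, 5)
print(blocks.shape)                    # (5, 64, 50) spatiotemporal patterns
print(pca.explained_variance_ratio_)   # share of variance per component
```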

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17838 - Posted: 02.23.2013

by Sara Reardon Like the musicians in an orchestra, our lips, tongue and vocal cords coordinate with one another to pronounce sounds in speech. A map of the brain regions that conduct the process shows how each is carefully controlled – and how mistakes can slip into our speech. It's long been thought that the brain coordinates our speech by simultaneously controlling the movement of these "articulators". In the 1860s, Alexander Melville Bell proposed that speech could be broken down in this way and designed a writing system for deaf people based on the principle. But brain imaging had not had the resolution to see how neurons control these movements – until now. Using electrodes implanted in the brains of three people to treat their epilepsy, Edward Chang and his colleagues at the University of California, San Francisco, mapped brain activity in each volunteer's motor cortex as they pronounced words in American English. The team had expected that each speech sound would be controlled by a unique collection of neurons, and so each would map to a different part of the brain. Instead, they found that the same groups of neurons were activated for all sounds. Each group controls muscles in the tongue, lips, jaw and larynx. The neurons – in the sensorimotor cortex – coordinated with one another to fire in different combinations. Each combination resulted in a very precise placing of the articulators to generate a given sound. Surprisingly, although each articulator can theoretically take on an almost limitless range of shapes, the neurons imposed strict limits on the range of possibilities. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17837 - Posted: 02.23.2013

by Michael Balter Despite recent progress toward sexual equality, it's still a man's world in many ways. But numerous studies show that when it comes to language, girls start off with better skills than boys. Now, scientists studying a gene linked to the evolution of vocalizations and language have for the first time found clear sex differences in its activity in both rodents and humans, with the gene making more of its protein in girls. But some researchers caution against drawing too many conclusions about the gene's role in human and animal communication from this study. Back in 2001, the world of language research was rocked by the discovery that a gene called FOXP2 appeared to be essential for the production of speech. Researchers cautioned that FOXP2 is probably only one of many genes involved in human communication, but later discoveries seemed to underscore its importance. For example, the human version of the protein produced by the gene differs by two amino acids from that of chimpanzees, and seems to have undergone natural selection since the human and chimp lineages split between 5 million and 7 million years ago. (Neandertals were found to have the same version as Homo sapiens, fueling speculation that our evolutionary cousins also had language). In the years since, FOXP2 has been implicated in the vocalizations of other animals, including mice, songbirds, and even bats. During this same time period, a number of studies have confirmed past research suggesting that young girls learn language faster and earlier than boys, producing their first words and sentences sooner and accumulating larger vocabularies faster. But the reasons behind such findings are highly controversial because it is difficult to separate the effects of nature versus nurture, and the differences gradually disappear as children get older. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 8: Hormones and Sex; Chapter 15: Language and Our Divided Brain
Link ID: 17830 - Posted: 02.20.2013

by Virginia Morell Every bottlenose dolphin has its own whistle, a high-pitched, warbly "eeee" that tells the other dolphins that a particular individual is present. Dolphins are excellent vocal mimics, too, able to copy even quirky computer-generated sounds. So, scientists have wondered if dolphins can copy each other's signature whistles—which would be very similar to people saying each other's names. Now, an analysis of whistles recorded from hundreds of wild bottlenose dolphins confirms that they can indeed "name" each other, and suggests why they do so—a discovery that may help researchers translate more of what these brainy marine mammals are squeaking, trilling, and clicking about. "It's a wonderful study, really solid," says Peter Tyack, a marine mammal biologist at the University of St. Andrews in the United Kingdom who was not involved in this project. "Having the ability to learn another individual's name is … not what most animals do. Monkeys have food calls and calls that identify predators, but these are inherited, not learned sounds." The new work "opens the door to understanding the importance of naming." Scientists discovered the dolphins' namelike whistles almost 50 years ago. Since then, researchers have shown that infant dolphins learn their individual whistles from their mothers. A 1986 paper by Tyack did show that a pair of captive male dolphins imitated each other's whistles, and in 2000, Vincent Janik, who is also at St. Andrews, succeeded in recording matching calls among 10 wild dolphins. "But without more animals, you couldn't draw a conclusion about what was going on," says Richard Connor, a cetacean biologist at the University of Massachusetts, Dartmouth. Why, after all, would the dolphins need to copy another dolphin's whistle? © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17829 - Posted: 02.20.2013

By Erin Wayman BOSTON — “Birdbrain” may not be much of an insult: Humans and songbirds share genetic changes affecting parts of the brain related to singing and speaking, new research shows. The finding may help scientists better understand how human language evolved, as well as unravel the causes of speech impairments. Neurobiologist Erich Jarvis of Duke University Medical Center in Durham, N.C., and colleagues discovered roughly 80 genes that turn on and off in similar ways in the brains of humans and songbirds such as zebra finches and parakeets. This gene activity, which occurs in brain regions involved in the ability to imitate sounds and to speak and sing, is not present in birds that can’t learn songs or mimic sounds. Jarvis described the work February 15 at the annual meeting of the American Association for the Advancement of Science. Songbirds are good models for language because the birds are born not knowing the songs they will sing as adults. Like human infants learning a specific language, the birds have to observe and imitate others to pick up the tunes they croon. The ancestors of humans and songbirds split some 300 million years ago, suggesting the two groups independently acquired a similar capacity for song. With the new results and other recent research, Jarvis said, “I feel more comfortable that we can link structures in songbird brains to analogous structures in human brains due to convergent evolution.” © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17818 - Posted: 02.18.2013

Philip Ball In Fiji, a star is a kalokalo. For the Pazeh people of Taiwan, it is mintol, and for the Melanau people of Borneo, bitén. All these words are thought to come from the same root. But what was it? An algorithm devised by researchers in Canada and California now offers an answer — in this case, bituqen. The program can reconstruct extinct ‘root’ languages from modern ones, a process that has previously been done painstakingly ‘by hand’ using rules of how linguistic sounds tend to change over time. Statistician Alexandre Bouchard-Côté of the University of British Columbia in Vancouver, Canada, and his co-workers say that by making the reconstruction of ancestral languages much simpler, their method should facilitate the testing of hypotheses about how languages evolve. They report their technique in the Proceedings of the National Academy of Sciences [1]. Automated language reconstruction has been attempted before, but the authors say that earlier algorithms tended to be rather intractable and prescriptive. Bouchard-Côté and colleagues' method can factor in a large number of languages to improve the quality of reconstruction, and it uses rules that handle possible sound changes in flexible, probabilistic ways. The program requires researchers to input a list of words in each language, together with their meanings, and a phylogenetic ‘language tree’ showing how each language is related to the others. Linguists routinely construct such trees using techniques borrowed from evolutionary biology. © 2013 Nature Publishing Group
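
To make the shape of the problem concrete, here is a deliberately simplified Python sketch: given a small language tree and pre-aligned cognates, it reconstructs a root form position by position using Fitch parsimony. The published algorithm is probabilistic and models sound change explicitly, so this is only a toy stand-in, and the words and tree below are invented for illustration rather than taken from the paper.

```python
# Toy ancestral-word reconstruction on a language tree (Fitch parsimony),
# assuming the cognates are already aligned and of equal length.
# Not the paper's algorithm; just an illustration of tree + cognates -> root form.

from typing import Union

# A tree is either a language name (leaf) or a pair of subtrees.
Tree = Union[str, tuple]

def fitch_sets(tree: Tree, column: dict) -> set:
    """Bottom-up Fitch pass for one aligned character position."""
    if isinstance(tree, str):                  # leaf: the observed character
        return {column[tree]}
    left, right = tree
    a, b = fitch_sets(left, column), fitch_sets(right, column)
    return a & b if a & b else a | b           # intersection if possible, else union

def reconstruct_root(tree: Tree, cognates: dict) -> str:
    """Reconstruct a root form position by position from pre-aligned cognates."""
    length = len(next(iter(cognates.values())))
    root_chars = []
    for i in range(length):
        column = {lang: word[i] for lang, word in cognates.items()}
        candidates = fitch_sets(tree, column)
        root_chars.append(sorted(candidates)[0])   # arbitrary tie-break for the sketch
    return "".join(root_chars)

# Hypothetical, pre-aligned cognates in three made-up daughter languages.
cognates = {"langA": "bituk", "langB": "pituk", "langC": "bitun"}
tree = ("langA", ("langB", "langC"))
print(reconstruct_root(tree, cognates))            # prints "bituk"
```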

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17786 - Posted: 02.12.2013

By Tia Ghose The identity of a mysterious patient who helped scientists pinpoint the brain region responsible for language has been discovered, researchers report. The finding, detailed in the January issue of the Journal of the History of the Neurosciences, identifies the patient as Louis Leborgne, a French craftsman who battled epilepsy his entire life. In 1840, a wordless patient was admitted to the Bicetre Hospital outside Paris for aphasia, or an inability to speak. He was essentially just kept there, slowly deteriorating. It wasn’t until 1861 that the man, who was known only as “Monsieur Leborgne” and who was nicknamed “Tan” for the only word he could say, came to physician Paul Broca’s ward at the hospital. Leborgne died shortly after the meeting, and Broca performed his autopsy, during which Broca found a lesion in a region of the brain tucked back and up behind the eyes. After doing a detailed examination, Broca concluded that Tan’s aphasia was caused by damage to this region and that the particular brain region controlled speech. That part of the brain was later renamed Broca’s area. At the time, scientists were debating whether different areas of the brain performed separate functions or whether it was an undifferentiated lump that did one task, like the liver, said Marjorie Lorch, a neurolinguist in London who was not involved in the study. © 1996-2013 The Washington Post

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17756 - Posted: 02.05.2013

By Stephen Ornes New babies eat, sleep, cry, poop — and listen. But their eavesdropping begins before birth and may include language lessons, says a new study. Scientists believe such early learning may help babies quickly understand their parents. Christine Moon is a psychologist at Pacific Lutheran University in Tacoma, Wash. She led the new study, to be published in February. “It seems that there is some prenatal learning of speech sounds, but we do not yet know how much,” she told Science News. A prenatal event happens before birth. Scientists have known that about 10 weeks before birth, a fetus can hear sounds outside the womb. Those sounds include the volume and rhythm of a person’s voice. But Moon found evidence that fetuses may also be starting to learn language itself. Moon and her coworkers tested whether newborns could detect differences in vowel sounds. These sounds are the loudest in human speech. Her team reports that newborns responded one way when they heard sounds like those from their parents’ language. And the newborns responded another way when they heard sounds like those from a foreign language. This was true among U.S. and Swedish babies who listened to sounds similar to English vowels and Swedish vowels. These responses show that shortly after birth, babies can group together familiar speech sounds, Moon told Science News. © 2013 Copyright Science News for Kids

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17710 - Posted: 01.26.2013

Some animals are more eloquent than previously thought and have a communication structure similar to the vowel and consonant system of humans, according to new research. Studying the abbreviated call of the mongoose, researchers at the University of Zurich have found that mongooses are the first animals known to communicate with sound units that are even smaller than syllables and yet still contain information about who is calling and why. Usually, animals can only produce a limited number of distinguishable sounds and calls due to their anatomy. While whale and bird songs are a little more complex than most animal sounds — in that they are repeatedly recombined into new arrangements — they don’t pattern themselves after human syllables with their combination of vowels and consonants. Studying wild banded mongooses in Uganda, behavioural biologists discovered that the calls of the animals are structured and contain different information — a sound structure that has some similarities to the vowel and consonant system of human speech. Banded mongooses live in savannah regions south of the Sahara. They are small predators that live in groups of around 20 and are related to the meerkat. The scientists recorded calls of the mongoose and made acoustic analyses of them. The calls, which last between 50 and 150 milliseconds, could be compared to one "syllable," the researchers found. © CBC 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17672 - Posted: 01.12.2013

By Bruce Bower Babies may start to learn their mother tongues even before seeing their mothers’ faces. Newborns react differently to native and foreign vowel sounds, suggesting that language learning begins in the womb, researchers say. Infants tested seven to 75 hours after birth treated spoken variants of a vowel sound in their home language as similar, evidence that newborns regard these sounds as members of a common category, say psychologist Christine Moon of Pacific Lutheran University in Tacoma, Wash., and her colleagues. Newborns deemed different versions of a foreign vowel sound to be dissimilar and unfamiliar, the scientists report in an upcoming Acta Paediatrica. “It seems that there is some prenatal learning of speech sounds, but we do not yet know how much,” Moon says. Fetuses can hear outside sounds by about 10 weeks before birth. Until now, evidence suggested that prenatal learning was restricted to the melody, rhythm and loudness of voices (SN: 12/5/09, p. 14). Earlier investigations established that 6-month-olds group native but not foreign vowel sounds into categories. Moon and colleagues propose that, in the last couple months of gestation, babies monitor at least some vowels — the loudest and most expressive speech sounds — uttered by their mothers. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17662 - Posted: 01.08.2013

By Breanna Draxler Infants are known for their impressive ability to learn language, which most scientists say kicks in somewhere around the six-month mark. But a new study indicates that language recognition may begin even earlier, while the baby is still in the womb. Using a creative means of measurement, researchers found that babies could already recognize their mother tongue by the time they left their mothers’ bodies. The researchers tested American and Swedish newborns between seven hours and three days old. Each baby was given a pacifier hooked up to a computer. When the baby sucked on the pacifier, it triggered the computer to produce a vowel sound—sometimes in English and sometimes in Swedish. The vowel sound was repeated until the baby stopped sucking. When the baby resumed sucking, a new vowel sound would start. The sucking was used as a metric to determine the babies’ interest in each vowel sound. More interest meant more sucks, according to the study soon to be published in Acta Paediatrica. In both countries, babies sucked on the pacifier longer when they heard foreign vowel sounds as compared to those of their mom’s native language. The researchers suggest that this is because the babies already recognize the vowels from their mothers and were keen to learn new ones. Hearing develops in a baby’s brain at around the 30th week of pregnancy, which leaves the last 10 weeks of gestation for babies to put that newfound ability to work. Baby brains are quick to learn, so a better understanding of these mechanisms may help researchers figure out how to improve the learning process for the rest of us.

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 13: Memory, Learning, and Development
Link ID: 17648 - Posted: 01.05.2013

By Ben Thomas They called him “Diogenes the Cynic,” because “cynic” meant “dog-like,” and he had a habit of basking naked on the lawn while his fellow philosophers talked on the porch. While they debated the mysteries of the cosmos, Diogenes preferred to soak up some rays – some have called him the Jimmy Buffett of ancient Greece. Anyway, one morning, the great philosopher Plato had a stroke of insight. He caught everyone’s attention, gathered a crowd around him, and announced his deduction: “Man is defined as a hairless, featherless, two-legged animal!” Whereupon Diogenes abruptly leaped up from the lawn, dashed off to the marketplace, and burst back onto the porch carrying a plucked chicken – which he held aloft as he shouted, “Behold: I give you… Man!” I’m sure Plato was less than thrilled at this stunt, but the story reminds us that these early philosophers were still hammering out the most basic tenets of the science we now know as taxonomy: the grouping of objects from the world into abstract categories. This technique of chopping up reality wasn’t invented in ancient Greece, though. In fact, as a recent study shows, it’s fundamental to the way our brains work. At the most basic level, we don’t really perceive separate objects at all – we perceive our nervous systems’ responses to a boundless flow of electromagnetic waves and biochemical reactions. Our brains slot certain neural response patterns into sensory pathways we call “sight,” “smell” and so on – but abilities like synesthesia and echolocation show that even the boundaries between our senses can be blurry. © 2012 Scientific American

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 14: Attention and Consciousness
Link ID: 17640 - Posted: 12.27.2012

Philip Ball Learning to read Chinese might seem daunting to Westerners used to an alphabetic script, but brain scans of French and Chinese native speakers show that people harness the same brain centres for reading across cultures. The findings are published today in the Proceedings of the National Academy of Sciences [1]. Reading involves two neural systems: one that recognizes the shape of the word and a second that assesses the physical movements used to make the marks on a page, says study leader Stanislas Dehaene, a cognitive neuroscientist at the National Institute of Health and Medical Research in Gif-sur-Yvette, France. But it has been unclear whether the brain networks responsible for reading are universal or culturally distinct. Previous studies have suggested that alphabetic writing systems (such as French) and logographic ones (such as Chinese, in which single characters represent entire words) might engage different networks in the brain. To explore this question, Dehaene and his colleagues used functional magnetic resonance imaging to examine brain activity in Chinese and French people while they read their native languages. The researchers found that both Chinese and French people use the visual and gestural systems while reading their native language, but with different emphases that reflect the different demands of each language. © 2012 Nature Publishing Group

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17540 - Posted: 11.27.2012

By Bruce Bower MINNEAPOLIS — Baboons use the order of regularly appearing letter pairs to tell words from nonwords, new evidence suggests. Psychologist Jonathan Grainger of the University of Aix-Marseille reported earlier this year that baboons can learn to tell real four-letter words from nonsense words (SN: 5/5/12, p. 5). But whether these animals detect signature letter combinations that enable their impressive word feats has been tough to demonstrate. Monkeys that previously learned to excel on this task are more likely to judge as real those nonwords created by reversing two letters of a word they already recognize, much as literate people do, Grainger reported November 16 at the Psychonomics Society annual meeting. “Letters played a role in baboons’ word knowledge,” Grainger concluded. “This is a starting point for determining how they discriminate words from nonwords.” Grainger’s team tested the six baboons in their original investigation. Some of the monkeys had previously learned to recognize many more words than others. In new trials, the best word identifiers made more errors than their less successful peers when shown nonwords that differed from known words by a reversed letter combination, such as WSAP instead of WASP and KTIE instead of KITE. Grainger’s team fed the same series of words and nonwords into a computer simulation of the experiment. The computer model best reproduced the animals’ learning curves when endowed with a capacity for tracking letter combinations. © Society for Science & the Public 2000 - 2012
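
A minimal Python sketch of what "tracking letter combinations" could look like is given below: bigram counts from a tiny, made-up training vocabulary are used to score strings, and a transposed nonword keeps enough familiar letter pairs to score between a real word and a random string. This illustrates the general idea only; it is not the simulation used in the study, and the word list is invented.

```python
# Bigram-familiarity scoring: a toy stand-in for a model that tracks
# regularly appearing letter pairs to separate words from nonwords.

from collections import Counter

def bigrams(word):
    """Adjacent letter pairs of a word, e.g. WASP -> WA, AS, SP."""
    return [word[i:i + 2] for i in range(len(word) - 1)]

# Toy "known word" vocabulary (illustrative, not the study's stimulus set).
known_words = ["WASP", "KITE", "SAND", "APEX", "TIME", "LAST"]
counts = Counter(b for w in known_words for b in bigrams(w))

def bigram_score(string):
    """Average familiarity of a string's letter pairs."""
    pairs = bigrams(string)
    return sum(counts[p] for p in pairs) / len(pairs)

for s in ["WASP", "WSAP", "XQZF"]:
    print(s, round(bigram_score(s), 2))
# WASP scores highest; the transposition WSAP keeps some familiar pairs
# (SA, AP) and so scores above a random string like XQZF, mirroring the
# kind of errors the best word-identifying baboons made.
```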

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain
Link ID: 17511 - Posted: 11.20.2012

by Douglas Heaven MEANINGS of words can be hard to locate when they are on the tip of your tongue, let alone in the brain. Now, for the first time, patterns of brain activity have been matched with the meanings of specific words. The discovery is a step forward in our attempts to read thoughts from brain activity alone, and could help doctors identify awareness in people with brain damage. Machines can already eavesdrop on our brains to distinguish which words we are listening to, but Joao Correia at Maastricht University in the Netherlands wanted to get beyond the brain's representation of the words themselves and identify the activity that underlies their meaning. Somewhere in the brain, he hypothesised, written and spoken representations of words are integrated and meaning is processed. "We wanted to find the hub," he says. To begin the hunt, Correia and his colleagues used an fMRI scanner to study the brain activity of eight bilingual volunteers as they listened to the names of four animals, bull, horse, shark and duck, spoken in English. The team monitored patterns of neural activity in the left anterior temporal cortex - known to be involved in a range of semantic tasks - and trained an algorithm to identify which word a participant had heard based on the pattern of activity. Since the team wanted to pinpoint activity related to meaning, they picked words that were as similar as possible - all four contain one syllable and belong to the concept of animals. They also chose words that would have been learned at roughly the same time of life and took a similar time for the brain to process. © Copyright Reed Business Information Ltd.
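
The decoding step described above, training an algorithm to tell from an activity pattern which of the four words was heard, can be sketched as a simple cross-validated classifier. Everything below is simulated; the linear SVM, voxel count and signal strength are illustrative assumptions, not details of the authors' pipeline.

```python
# Toy word-decoding from simulated "voxel" patterns with cross-validation.
# Chance level is 0.25 for four words; any reliable excess indicates that
# the patterns carry word-specific information.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_word, n_voxels = 40, 200
words = ["bull", "horse", "shark", "duck"]

# Simulate a weak word-specific signature on top of noise for each word.
signatures = rng.normal(0, 1, size=(len(words), n_voxels))
X = np.vstack([
    signatures[i] * 0.3 + rng.normal(0, 1, size=(n_trials_per_word, n_voxels))
    for i in range(len(words))
])
y = np.repeat(np.arange(len(words)), n_trials_per_word)

scores = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5)
print("mean decoding accuracy:", scores.mean())
```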

Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 17502 - Posted: 11.17.2012