Chapter 15. Language and Our Divided Brain
By Tara Haelle

A long overdue and growing body of research on concussions is providing today’s young athletes, parents and coaches with more information about identifying and treating head injuries—but not all of that research is reliable. For instance, one new study on youth concussions offers valuable information about recovery time, whereas potentially flawed conclusions in a second new study illustrate one of the biggest challenges in studying youth concussions—missed diagnoses.

An estimated 170,000 children go to the emergency room for concussions annually, but this number does not capture the millions treated outside of hospitals by athletic trainers, family doctors or specialists. The sports with the most reported concussions are boys’ football and girls’ soccer, but bicycling, basketball and playground activities are also among the most common ways children sustain these head injuries. Symptoms can include dizziness, fatigue, nausea, headache and memory or concentration problems. After a concussion is identified, the primary treatment is physical and cognitive rest, although the amount of rest needed is not always medically clear.

The first study, published June 10 in Pediatrics, found that recovery takes up to two or three times longer if a child has sustained one or more concussions within the past year, further supporting reasons “to be cautious about returning young athletes to sports after a concussion,” says lead author Matthew A. Eisenberg of Boston Children’s Hospital.

© 2013 Scientific American
Keyword: Brain Injury/Concussion
Link ID: 18252 - Posted: 06.10.2013
By ROBERT J. ZATORRE and VALORIE N. SALIMPOOR

MUSIC is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — or well beyond prized, loved it. In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest.

So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value? The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers.

More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain. When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine.

© 2013 The New York Times Company
Link ID: 18251 - Posted: 06.10.2013
by Tia Ghose, LiveScience

Ape and human infants at comparable stages of development use similar gestures, such as pointing or lifting their arms to be picked up, new research suggests. Chimpanzee, bonobo and human babies rely mainly on gestures at about a year old, and gradually develop symbolic language (words, for human babies; and signs, for apes) as they get older.

The findings suggest that “gesture plays an important role in the evolution of language, because it preceded language use across the species,” said study co-author Kristen Gillespie-Lynch, a developmental psychologist at the College of Staten Island in New York.

The idea that language arose from gesture and a primitive sign language has a long history. French philosopher Étienne Bonnot de Condillac proposed the idea in 1746, and other scientists have noted that walking on two legs, which frees up the hands for gesturing, occurred earlier in human evolution than changes to the vocal tract that enabled speaking. But although apes in captivity can learn some language by learning from humans, in the wild, they don't gesture nearly as much as human infants, making it difficult to tease out commonalities in language development that have biological versus environmental roots.

© 2013 Discovery Communications, LLC
By Susan Milius

Lyrebirds are famous for the mimicked sounds they sing, but they now have another claim to fame: They dance to their own songs.

“Just as we waltz to waltz music but we salsa to salsa music, so lyrebirds perform different dance movements to different types of songs,” says Anastasia Dalziell of the Australian National University in Canberra.

She and her colleagues scrutinized videos of male superb lyrebirds (Menura novaehollandiae) showing off in the wild for possible mates. The males’ combinations of hums, clicks, trills and other sexy syllables fell into four distinctive song types, the researchers say. At least the first three types are not mimicry but lyrebird originals, Dalziell says. In courtship, males sing the songs in a fairly predictable order and usually match each to its own mix of dance moves and postures. The birds side-step, turn and flare their outsized lyre-shaped tails.

Matching a type of music with a style of gesture is not unique to humankind, the researchers report June 6 in Current Biology. Performing for females, a male lyrebird dances to the music he makes. And yes, the bird makes the noises heard in the video.

© Society for Science & the Public 2000 - 2013
Karen Ravn

Babies learn to babble before they learn to talk, at first simply repeating individual syllables (as in ba-ba-ba), and later stringing various syllables together (as in ba-da-goo). Songbirds exhibit similar patterns during song-learning, and the capacity for this sort of syllable sequencing is widely believed to be innate and to emerge full-blown — a theory that is challenged by a paper published on Nature's website today. A study of three species — zebra finches, Bengalese finches and humans — reports that none of the trio has it that easy. Their young all have to learn how to string syllables together slowly, pair by pair.

“We discovered a previously unsuspected stage in human vocal development,” says first author Dina Lipkind, a psychologist now at Hunter College in New York.

The researchers began by training young zebra finches (Taeniopygia guttata) to sing a song in which three syllables represented by the letters A, B and C came in the order ABC–ABC. They then trained the birds to sing a second song in which the same syllables were strung together in a different order, ACB–ACB. Eight out of seventeen birds managed to learn the second song, but they did not do so in one fell swoop. They learned it as a series of syllable pairs, first, say, learning to go from A to C, then from C to B and finally from B to A. And they didn’t do it overnight, as the innate-sequencing theory predicts. Instead, on average, they learned the first pair in about ten days, the second in four days and the third in two days.

© 2013 Nature Publishing Group
by Emily Underwood

Without a way to forecast whether the early warning signs of autism will develop into severe impairment, parents of children with the disorder are left with one harrowing option: Wait and see. Now, a new study suggests that a distinct ripple of brain waves measured while toddlers listen to words can reliably predict how they will fare in a range of cognitive areas up to age 6—the longest-term forecast yet achieved. In addition to pointing toward more effective treatments, the discovery could help reveal how early social abilities facilitate the development of language.

Many children with autism spectrum disorder (ASD) have begun to display telltale social and language deficits by the time they're toddlers; they fail to play or make eye contact with others, for example, or to say short sentences such as "drink milk." Although scientists have long considered the brain systems that govern these two types of deficits as separate, a growing body of evidence suggests that they are actually deeply intertwined, says Patricia Kuhl, a cognitive neuroscientist at the University of Washington, Seattle, and lead author of the new study.

One of Kuhl's first important clues that social deficits might hinder language acquisition in autism came from her 2005 study of "Motherese"—the exaggerated, sing-song baby talk that parents instinctively shower on their children. When given the choice between listening to samples of Motherese or computer-generated tones, Kuhl found that preschoolers with autism "actually preferred the Robovoice," she says. This lack of interest in human speech not only correlated with the severity of a child's autistic symptoms, Kuhl notes, but with a lack of typical brain response to subtle changes in syllables, such as the switch from "ba" to "da." That's bad news, she says, because "picking up these tiny changes means the difference between learning language or not."

© 2010 American Association for the Advancement of Science
by John Bohannon

WASHINGTON, D.C.—People may grow wiser with age, but they don't grow smarter. Many of our mental abilities decline after midlife, and now researchers say that they've fingered a culprit. A study presented here last week at the annual meeting of the Association for Psychological Science points to microbleeding in the brain caused by stiffening arteries. The finding may lead to new therapies to combat senior moments.

This isn't the first time that microbleeds have been suspected as a cause of cognitive decline. "We have known [about them] for some time thanks to neuroimaging studies," says Matthew Pase, a psychology Ph.D. student at Swinburne University of Technology in Melbourne, Australia. The brains of older people are sometimes peppered with dark splotches where blood vessels have burst and created tiny dead zones of tissue. How important these microbleeds are to cognitive decline, and what causes them, have remained open questions, however.

Pase wondered if high blood pressure might be behind the microbleeds. The brain is a very blood-hungry organ, he notes. "It accounts for only 2% of the body weight yet receives 15% of the cardiac output and consumes 20% of the body's oxygen expenditure." Rather than getting the oxygen in pulses, the brain needs a smooth, continuous supply. So the aorta, the largest blood vessel branching off the heart, smooths out blood pressure before it reaches the brain by absorbing the pressure with its flexible walls. But as people age, the aorta stiffens. That translates to higher pressure on the brain, especially during stress. The pulse of blood can be strong enough to burst vessels in the brain, resulting in microbleeds.

© 2010 American Association for the Advancement of Science
By Bruce Bower

Chaser isn’t just a 9-year-old border collie with her breed’s boundless energy, intense focus and love of herding virtually anything. She’s a grammar hound.

In experiments directed by her owner, psychologist John Pilley of Wofford College in Spartanburg, S.C., Chaser demonstrated her grasp of the basic elements of grammar by responding correctly to commands such as “to ball take Frisbee” and its reverse, “to Frisbee take ball.” The dog had previous, extensive training to recognize classes of words including nouns, verbs and prepositions.

“Chaser intuitively discovered how to comprehend sentences based on lots of background learning about different types of words,” Pilley says. He reports the results May 13 in Learning and Motivation.

Throughout the first three years of Chaser’s life, Pilley and a colleague trained the dog to recognize and fetch more than 1,000 objects by name. Using praise and play as reinforcements, the researchers also taught Chaser the meaning of different types of words, such as verbs and prepositions. As a result, Chaser learned that phrases such as “to Frisbee” meant that she should take whatever was in her mouth to the named object.

Exactly how the dog gained her command of grammar is unclear, however. Pilley suspects that Chaser first mentally linked each of two nouns she heard in a sentence to objects in her memory. Then the canine held that information in mind while deciding which of two objects to bring to which of two other objects. Pilley’s work follows controversial studies of grammar understanding in dolphins and a pygmy chimp.

© Society for Science & the Public 2000 - 2013
By ANDREW C. REVKIN

Twenty-two months ago, I interrupted my nonstop reporting about paths toward a sustainable future for our species to focus on sustaining myself. The hiatus was not by choice, but was mandated by a stroke — the out-of-the-blue variant, the rare kind of “brain attack” (the term preferred by some neurologists) that is most often seen in otherwise healthy, youngish middle-aged people.

It’s Fourth of July weekend, 2011 — a beautiful, if hot, morning for a run in the Hudson Valley woods with my son Daniel, back from brief service in the Israeli army. I’m eager to be pushed hard. I’m not even a lapsed middle-aged athlete; I’m truly negligent when it comes to exercise.

We’re jogging up a steep path, and my breathing gets deeper and faster. At a particularly tough turn, I pause, hands on knees. “Come on, keep it up, Dad.” I’m panting but don’t want to disappoint. We press on. But I stop again, this time insisting that Daniel run ahead. I rest in the mottled shade and sunlight of the woods until he returns.

Then I realize that through my left eye, the world appears paisley — as if I were looking through a patterned curtain. Something is really wrong.

We make it back to the car. Daniel takes the wheel. Back home, I take a shower, thinking that cooling off will help. For the first time, a thought flickers. Could this be a stroke? Almost unconsciously, I take half a dozen baby aspirin. I know enough about aspirin’s blood-thinning properties to think this can’t hurt.

Copyright 2013 The New York Times Company
Link ID: 18147 - Posted: 05.14.2013
by Helen Thomson

"I was sitting on the toilet. I suddenly felt an explosion in the left side of my head and ended up on the floor. I think the only thing that kept me conscious was that I didn't want to be found with my pants down. Then the other side of my head went bang! I woke up in hospital and looked out of the window to see the tree was sprouting numbers. 3, 6, 9. Then I started talking in rhyme…"

Ten days after having a subarachnoid haemorrhage – a stroke caused by bleeding in and around the brain – Tommy McHugh, an ex-con who'd been in his fair share of scraps, became a new man, with a personality that nobody recognised.

When he was a young man, Tommy did time in prison. But after his stroke at age 51, everything changed. "I could taste the femininity inside of myself," he said. "My head was full of rhymes and images and pictures." Not only did he feel a sudden urge to write poetry, but he also began to paint and draw obsessively for up to 19 hours a day. He was never artistic before – in fact, he joked that he'd never even been in an art gallery "except to maybe steal something".

Desperate to find out what was going on, Tommy wrote to several neuroscientists and ended up working closely with Alice Flaherty at Harvard Medical School and Mark Lythgoe at University College London.

© Copyright Reed Business Information Ltd.
by Elizabeth Norton

If you've ever cringed when your parents said "groovy," you'll know that spoken language can have a brief shelf life. But frequently used words can persist for generations, even millennia, and similar sounds and meanings often turn up in very different languages. The existence of these shared words, or cognates, has led some linguists to suggest that seemingly unrelated language families can be traced back to a common ancestor. Now, a new statistical approach suggests that peoples from Alaska to Europe may share a linguistic forebear dating as far back as the end of the Ice Age, about 15,000 years ago.

"Historical linguists study language evolution using cognates the way biologists use genes," explains Mark Pagel, an evolutionary theorist at the University of Reading in the United Kingdom. For example, although about 50% of French and English words derive from a common ancestor (like the French "mère" and the English "mother"), with English and German the rate is closer to 70%—indicating that while all three languages are related, English and German have a more recent common ancestor. In the same vein, while humans, chimpanzees, and gorillas have common genes, the fact that humans share almost 99% of their DNA with chimps suggests that these two primate lineages split apart more recently.

Because words don't have DNA, researchers use cognates found in different languages today to reconstruct the ancestral "protowords." Historical linguists have observed that over time, the sounds of words tend to change in regular patterns. For example, the p sound frequently changes to f, and the t sound to th—suggesting that the Latin word pater is, well, the father of the English word father. Linguists use these known rules to work backward in time, making a best guess at how the protoword sounded. They also track the rate at which words change. Using these phylogenetic principles, some researchers have dated many common words as far back as 9000 years ago.
The ancestral language known as Proto-Indo-European, for example, gave rise to languages including Hindi, Russian, French, English, and Gaelic. © 2010 American Association for the Advancement of Science.
By BILL PENNINGTON

BOSTON — The drumbeat of alarming stories linking concussions among football players and other athletes to brain disease has led to a new and mushrooming American phenomenon: the specialized youth sports concussion clinic, which one day may be as common as a mall at the edge of town.

In the last three years, dozens of youth concussion clinics have opened in nearly 35 states — outpatient centers often connected to large hospitals that are now filled with young athletes complaining of headaches, amnesia, dizziness or problems concentrating. The proliferation of clinics, however, comes at a time when there is still no agreed-upon, established formula for treating the injuries.

“It is inexact, a science in its infancy,” said Dr. Michael O’Brien of the sports concussion clinic at Boston Children’s Hospital. “We know much more than we once did, but there are lots of layers we still need to figure out.”

Deep concern among parents about the effects of concussions is colliding with the imprecise understanding of the injury. To families whose anxiety has been stoked by reports of former N.F.L. players with degenerative brain disease, the new facilities are seen as the most expert care available. That has parents parading to the clinic waiting rooms. The trend is playing out vividly in Boston, where the phone hardly stops ringing at the youth sports concussion clinic at Massachusetts General Hospital.

“Parents call saying, ‘I saw a scary report about concussions on Oprah or on the ‘Doctors’ show or Katie Couric’s show,’ ” Dr. Barbara Semakula said, describing a typical day at the clinic. “Their child just hurt his head, and they’ve already leapt to the worst possible scenarios. It’s a little bit of a frenzy out there.”

© 2013 The New York Times Company
by Lizzie Wade

If you were a rat living in a completely virtual world like in the movie The Matrix, could you tell? Maybe not, but scientists studying your brain might be able to. Today, researchers report that certain cells in rat brains work differently when the animals are in virtual reality than when they are in the real world.

The neurons in question are known as place cells, which fire in response to specific physical locations in the outside world and reside in the hippocampus, the part of the brain responsible for spatial navigation and memory. As you walk out of your house every day, the same place cell fires each time you reach the shrub that's two steps away from your door. It fires again when you reach the same place on your way back home, even though you are traveling in the opposite direction. Scientists have long suspected that these place cells help the brain generate a map of the world around us.

But how do the place cells know when to fire in the first place? Previous research showed that the cells rely on three different kinds of information. First, they analyze "visual cues," or what you see when you look around. Then, there are what researchers call "self-motion cues." These cues come from how your body moves in space and are the reason you can still find your way around a room with the lights out. The final type of information is the "proximal cues," which encompass everything else about the environment you're in. The smell of a bakery on your way to work, the sounds of a street jammed with traffic, and the springy texture of grass in a park are all proximal cues.

© 2010 American Association for the Advancement of Science.
By Christie Wilcox

What does your voice say about you? Our voices communicate information far beyond what we say with our words. Like most animals, the sounds we produce have the potential to convey how healthy we are, what mood we’re in, even our general size. Some of these traits are important cues for potential mates, so much so that the sound of your voice can actually affect how good looking you appear to others. Which, really, brings up one darn good question: what makes a voice sound sexy?

To find out, a team spearheaded by University College London researcher Yi Xu created synthetic male and female voices and altered their pitch, vocal quality and formant spacing (an acoustics term related to the frequencies of sound), the last of which is related to body size. They also adjusted the voices to be normal (relaxed), breathy, or pressed (tense). Through several listening experiments, they asked participants of the opposite gender to say which voice was the most attractive and which sounded the friendliest or happiest.

The happiest-sounding voices were those with higher pitch, whether male or female, while the angriest were those with dense formants, indicating large body size. As for attractiveness, the men preferred a female voice that is high-pitched, breathy and had wide formant spacing, which indicates a small body size. The women, on the other hand, preferred a male voice with low pitch and dense formant spacing, indicative of larger size.

But what really surprised the scientists is that women also preferred their male voices breathy. “The breathiness in the male voice attractiveness rating is intriguing,” explain the authors, “as it could be a way of neutralizing the aggressiveness associated with a large body size.”
Posted by Christy Ullrich

Elephants may use a variety of subtle movements and gestures to communicate with one another, according to researchers who have studied the big mammals in the wild for decades. To the casual human observer, a curl of the trunk, a step backward, or a fold of the ear may not have meaning. But to an elephant—and scientists like Joyce Poole—these are signals that convey vital information to individual elephants and the overall herd.

Biologist and conservationist Joyce Poole and her husband, Petter Granli, both of whom direct ElephantVoices, a charity they founded to research and advocate for conservation of elephants in various sanctuaries in Africa, have developed an online database decoding hundreds of distinct elephant signals and gestures. The postures and movements underscore the sophistication of elephant communication, they say. Poole and Granli have also deciphered the meaning of acoustic communication in elephants, interpreting the different rumbling, roaring, screaming, trumpeting, and other idiosyncratic sounds that elephants make in concert with postures such as the positioning and flapping of their ears.

Poole has studied elephants in Africa for more than 37 years, but only began developing the online gestures database in the past decade. Some of her research and conservation work has been funded by the National Geographic Society.

“I noticed that when I would take out guests visiting Amboseli [National Park in Kenya] and was narrating the elephants’ behavior, I got to the point where 90 percent of the time, I could predict what the elephant was about to do,” Poole said in an interview. “If they stood a certain way, they were afraid and were about to retreat, or [in another way] they were angry and were about to move toward and threaten another.”

© 1996-2012 National Geographic Society.
By Lucy Wallis BBC News

Abby and Brittany Hensel are conjoined twins determined to live the normal, active life of outgoing 20-somethings anywhere. They have been to university, they travel, they have jobs. But how easy is it for two people to inhabit one body?

Like most 23-year-olds, Abby and Brittany Hensel love spending time with their friends, going on holiday, driving, playing sport such as volleyball and living life to the full. The identical, conjoined twins from Minnesota, in the United States, have graduated from Bethel University and are setting out on their career as primary school teachers with an emphasis on maths.

Although they have two teaching licences, there is one practical difference when it comes to the finances. "Obviously right away we understand that we are going to get one salary because we're doing the job of one person," says Abby. "As maybe experience comes in we'd like to negotiate a little bit, considering we have two degrees and because we are able to give two different perspectives or teach in two different ways."

"One can be teaching and one can be monitoring and answering questions," says Brittany. "So in that sense we can do more than one person."

Their friend Cari Jo Hohncke has always admired the sisters' teamwork. "They are two different girls, but yet they are able to work together to do the basic functions that I do every day that I take for granted," says Hohncke.

BBC © 2013
Link ID: 18072 - Posted: 04.25.2013
By Meghan Holohan

Need to remember some important facts for that big presentation at work? Clench your right hand while preparing to remember. When giving that talk, ball up your left hand and you’ll call to mind those details, no problem.

That’s the finding from a new study authored by Ruth Propper, an associate professor and director of the cerebral lateralization laboratory at Montclair State University. Propper has long been intrigued by how body movements impact how the brain works. While most people realize that the brain influences the body (the brain tells your arm there is an itch, and you feel it), less is understood about how the body sways the brain.

Past research suggests that clenching our hands can evoke emotions. When people ball up their right hands, for example, the left sides of their brains become more active, causing what’s known as “approach emotions,” feelings such as happiness or excitement. By squeezing the left hand, people engage the right side of the brain, which controls “withdrawal emotions” such as introversion, fear, or anxiety. (It probably seems like these might be less useful, but they come in handy in dangerous situations.)

Propper theorized that if clenching hands impacted feelings, these gestures might influence the brain in other ways.

© 2013 NBCNews.com
by Tanya Lewis

The lip-smacking vocalizations gelada monkeys make are surprisingly similar to human speech, a new study finds. Many nonhuman primates demonstrate lip-smacking behavior, but geladas are the only ones known to make undulating sounds, known as "wobbles," at the same time. (The wobbling sounds a little like a human hum would sound if the volume were being turned on and off rapidly.) The findings show that lip-smacking could have been an important step in the evolution of human speech, researchers say.

"Our finding provides support for the lip-smacking origins of speech because it shows that this evolutionary pathway is at least plausible," Thore Bergman of the University of Michigan in Ann Arbor and author of the study published today (April 8) in the journal Current Biology, said in a statement. "It demonstrates that nonhuman primates can vocalize while lip-smacking to produce speechlike sounds."

Lip-smacking -- rapidly opening and closing the mouth and lips -- shares some of the features of human speech, such as rapid fluctuations in pitch and volume. Bergman first noticed the similarity while studying geladas in the remote mountains of Ethiopia. He would often hear vocalizations that sounded like human voices, but the vocalizations were actually coming from the geladas, he said. He had never come across other primates who made these sounds. But then he read a study on macaques from 2012 revealing how facial movements during lip-smacking were very speech-like, hinting that lip-smacking might be an initial step toward human speech.

© 2013 Discovery Communications, LLC.
By Janice Lynch Schuster

My grandmother, who is 92, recently reported that she’d seen three giraffes in her Midwest back yard. She is otherwise sharp (and also kind and funny), but the giraffe episode was further evidence of the mild cognitive impairment that has been slowly creeping into her life. The question for my family has become: How should we respond?

One of my sisters tried humor. (“Grandmom, I didn’t know you drank in the middle of the day!”) My father suggested that they were deer. (To which she replied, “I’m 92 years old, and I know a giraffe when I see one.”) I tried to learn more about what, exactly, the giraffes were doing out there. (She didn’t seem to know, saying only that “the light shimmered.”)

Communicating with a family member who has cognitive impairment can be frustrating and disheartening, even downright depressing for patient and caregiver alike. And it’s a problem faced by a growing number of Americans. According to a report published last week, about 4.1 million Americans have dementia. Alzheimer’s, one of the many forms of dementia, is the most expensive disease in the United States, costing $157 billion to $215 billion a year — more than heart disease and cancer, according to the study, which was sponsored by the National Institute on Aging. As baby boomers reach old age, these numbers are expected to increase dramatically.

A number of techniques can not only reduce the frustration but also create new ways of connecting. Among the most effective and popular among experts is the “validation method,” a practice pioneered by geriatric social worker and researcher Naomi Feil in the 1980s.

© 1996-2013 The Washington Post
By Bruce Bower

Babies take a critical step toward learning to speak before they can say a word or even babble. By 3 months of age, infants flexibly use three types of sounds — squeals, growls and vowel-like utterances — to express a range of emotions, from positive to neutral to negative, researchers say.

Attaching sounds freely to different emotions represents a basic building block of spoken language, say psycholinguist D. Kimbrough Oller of the University of Memphis in Tennessee and his colleagues. Any word or phrase can signal any mental state, depending on context and pronunciation. Infants’ flexible manipulation of sounds to signal how they feel lays the groundwork for word learning, the scientists conclude April 1 in the Proceedings of the National Academy of Sciences.

Language evolution took off once this ability emerged in human babies, Oller proposes. Ape and monkey researchers have mainly studied vocalizations that have one meaning, such as distress calls. “At this point, the conservative conclusion is that the human infant at 3 months is already vocally freer than has been demonstrated for any other primate at any age,” Oller says.

Oller’s group videotaped infants playing and interacting with their parents in a lab room equipped with toys and furniture. Acoustic analyses identified nearly 7,000 utterances made by infants up to 1 year of age that qualified as laughs, cries, squeals, growls or vowel-like sounds.

© Society for Science & the Public 2000 - 2013