Chapter 15. Language and Our Divided Brain
Associated Press It's the ape equivalent of Google Maps and Facebook. The night before a big trip, Arno the orangutan plots his journey and lets others know where he is going with a long, whooping call. What he and his orangutan buddies do in the forests of Sumatra tells scientists that advance trip planning and social networking aren't just human traits. A new study of 15 wild male orangutans finds that they routinely plot out their next-day treks and share their plans in long calls, so females can come by or track them, and competitive males can steer clear. The researchers closely followed the males as they traveled on 320 days during the 1990s. The results were published Wednesday in the journal PLoS One. Typically, an orangutan would turn and face in the direction of his route and let out a whoop, sometimes for as long as four minutes. Then he'd go to sleep and 12 hours later set out on the heralded path, said study author Carel van Schaik, director of the Anthropological Institute at the University of Zurich. "This guy basically thinks ahead," van Schaik said. "They're continuously updating their Google Maps, so to speak. Based on that, they're planning what to do next." The apes didn't just call once - they kept at it, calling more than 1,100 times over the 320 days. © 2013 The Hearst Corporation
Ed Yong Listen very carefully in the rainforests of Brazil and you might hear a series of quiet, high-pitched squeaks. These are the alarm calls of the black-fronted titi (Callicebus nigrifrons), a monkey with a rusty-brown tail that lives in small family units. The cries are loaded with information. Cristiane Cäsar, a biologist at the University of St Andrews, UK, and her colleagues report that the titis mix and match two distinct calls to tell each other about the type of predator that endangers them, as well as the location of the threat. Her results are published in Biology Letters. Cäsar's team worked with five groups of titis that live in a private nature reserve in the Minas Gerais region of Brazil. When the researchers placed a stuffed caracara — a bird of prey — in the treetops, the titis gave out A-calls, which have a rising pitch. When the animals saw a ground-based threat — represented by an oncilla, a small spotted cat — they produced B-calls, sounds with a falling pitch. However, when the team moved the predator models around, the monkeys tweaked their calls. If the caracara was on the ground, the monkeys started with at least four A-calls before adding B-calls into the mix. If the oncilla was in a tree, the monkeys made a single introductory A-call before switching to B-calls. “A single call doesn’t really tell the recipient what’s happening, but they can infer the type of predator and its location by listening to the first five or six calls,” says co-author Klaus Zuberbühler of the University of Neuchâtel in Switzerland. “The five different groups were almost unanimous in their response. There was no deviation.” © 2013 Nature Publishing Group
By Jason G. Goldman One of the key differences between humans and non-human animals, it is thought, is the ability to flexibly communicate our thoughts to others. The consensus has long been that animal communication, such as the food call of a chimpanzee or the alarm call of a lemur, is the result of an automatic reflex guided primarily by the inner physiological state of the animal. Chimpanzees, for example, can’t “lie” by producing a food call when there’s no food around and, it is thought, they can’t not emit a food call in an effort to hoard it all for themselves. By contrast, human communication via language is far more flexible and intentional. But recent research from across the animal kingdom has cast some doubt on the idea that animal communication always operates below the level of conscious control. Male chickens, for example, call more when females are around, and male Thomas langurs (a monkey native to Indonesia) continue shrieking their alarm calls until all females in their group have responded. Similarly, vervet monkeys are more likely to sound their alarm calls when there are other vervet monkeys around, and less likely when they’re alone. The same goes for meerkats. And possibly chimps, as well. Still, these sorts of “audience effects” can be explained by lower-level physiological factors. In yellow-bellied marmots, ground squirrels native to the western US and southwestern Canada, the production of an alarm call correlates with glucocorticoid production, a physiological measurement of stress. And when researchers experimentally altered the synthesis of glucocorticoids in rhesus macaques, they found a change in the probability of alarm call production. © 2013 Scientific American
by Nancy Shute It was hard to ignore those headlines saying that people with migraine have brain damage, even if you're not among the 12 percent or so who do suffer from these painful, recurring headaches. Don't panic, says the neurologist whose work sparked those alarming headlines. "It's still not something to stay up nights worrying about," says Dr. Richard Lipton, director of the Montefiore Headache Center in New York. But knowing about the brain anomalies that Lipton and his colleagues found might help people reduce their stroke risk. Some people who get migraines do have a slightly higher risk of stroke. And some of the brain changes identified in the study look like mini-strokes. "On the MRI they look like very tiny strokes," Lipton tells Shots. But the people aren't having any stroke symptoms. Still, Lipton is convinced that the process is the same. "We now know it's a risk factor for these very small silent strokes," he says. The scientists evaluated data from 19 studies in which people with migraine headaches got MRI scans of their brains. Just about everybody is going to have some abnormalities show up in a scan. But the people who had migraines were more likely to have two common abnormalities: white matter abnormalities and infarct-like lesions. The findings were published in the journal Neurology. ©2013 NPR
by Jacob Aron DOES your brain work like a dictionary? A mathematical analysis of the connections between definitions of English words has uncovered hidden structures that may resemble the way words and their meanings are represented in our heads. "We want to know how the mental lexicon is represented in the brain," says Stevan Harnad of the University of Quebec in Montreal, Canada. As every word in a dictionary is defined in terms of others, the knowledge needed to understand the entire lexicon is there, as long as you first know the meanings of an initial set of starter, or "grounding", words. Harnad's team reasoned that finding this minimal set of words and pinning down its structure might shed light on how human brains put language together. The team converted each of four different English dictionaries into a mathematical structure of linked nodes known as a graph. Each node in this graph represents a word, which is linked to the other words used to define it – so "banana" might be connected to "long", "bendy", "yellow" and "fruit". These words then link to others that define them. This enabled the team to remove all the words that don't define any others, leaving what they call a kernel. The kernel formed roughly 10 per cent of the full dictionary – though the exact percentages depended on the particular dictionary. In other words, 90 per cent of the dictionary can be defined using just the other 10 per cent. © Copyright Reed Business Information Ltd.
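The pruning that Harnad's team describes can be sketched in a few lines: model each word as a node linked to the words in its definition, then repeatedly discard words that are not used to define anything else, until what remains is the self-defining kernel. The toy dictionary below is invented for illustration (the study used four full English dictionaries, and its kernels were roughly 10 per cent of each), but the loop is the same idea.

```python
# Toy dictionary: each word maps to the words used in its definition.
# Invented example data, not from the study.
toy_dictionary = {
    "banana": ["long", "bendy", "yellow", "fruit"],
    "fruit":  ["food", "plant"],
    "yellow": ["colour"],
    "long":   ["size"],
    "bendy":  ["shape"],
    "food":   ["plant"],
    "plant":  ["food"],    # circular definitions keep each other alive
    "colour": ["plant"],
    "size":   ["shape"],
    "shape":  ["size"],
}

def kernel(dictionary):
    """Iteratively remove words that appear in no remaining definition."""
    words = dict(dictionary)
    while True:
        used = {w for definition in words.values() for w in definition}
        dead = [w for w in words if w not in used]
        if not dead:
            return words
        for w in dead:
            del words[w]

core = kernel(toy_dictionary)
print(sorted(core))  # ['food', 'plant', 'shape', 'size']
```

Here "banana" goes first (nothing is defined in terms of it), which then strips "fruit", "yellow", "long" and "bendy", and finally "colour"; the mutually defining pairs survive as the kernel.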
Beth Skwarecki Be careful what you say around a pregnant woman. As a fetus grows inside a mother's belly, it can hear sounds from the outside world—and can understand them well enough to retain memories of them after birth, according to new research. It may seem implausible that fetuses can listen to speech within the womb, but the sound-processing parts of their brain become active in the last trimester of pregnancy, and sound carries fairly well through the mother's abdomen. "If you put your hand over your mouth and speak, that's very similar to the situation the fetus is in," says cognitive neuroscientist Eino Partanen of the University of Helsinki. "You can hear the rhythm of speech, rhythm of music, and so on." A 1988 study suggested that newborns recognize the theme song from their mother's favorite soap opera. More recent studies have expanded on the idea of fetal learning, indicating that newborns are already familiar with the sounds of their parents' native language; one showed that American newborns seem to perceive Swedish vowel sounds as unfamiliar, sucking on a high-tech pacifier to hear more of the new sounds. Swedish infants showed the same response to English vowels. But those studies were based on babies' behaviors, which can be tricky to test. Partanen and his team decided instead to outfit babies with EEG sensors to look for neural traces of memories from the womb. "Once we learn a sound, if it's repeated to us often enough, we form a memory of it, which is activated when we hear the sound again," he explains. This memory speeds up recognition of sounds in the learner's native language and can be detected as a pattern of brain waves, even in a sleeping baby. © 2012 American Association for the Advancement of Science.
Virginia Morell A wolf’s howl is one of the most iconic sounds of nature, yet biologists aren’t sure why the animals do it. They’re not even sure if wolves howl voluntarily or if it’s some sort of reflex, perhaps caused by stress. Now, scientists working with captive North American timber wolves in Austria report that they’ve solved part of the mystery. Almost 50 years ago, wildlife biologists suggested that a wolf’s howls were a way of reestablishing contact with other pack members after the animals became separated, which often happens during hunts. Yet, observers of captive wolves have also noted that the pattern of howls differs depending on the size of the pack and whether the dominant, breeding wolf is present, suggesting that the canids’ calls are not necessarily automatic responses. Friederike Range, a cognitive ethologist at the University of Veterinary Medicine in Vienna, was in a unique position to explore the conundrum. Since 2008, she and her colleagues have hand-raised nine wolves at the Wolf Science Center in Ernstbrunn, which she co-directs. “We started taking our wolves for walks when they were 6 weeks old, and as soon as we took one out, the others would start to howl,” she says. “So immediately we became interested in why they howl.” Although the center’s wolves don’t hunt, they do howl differently in different situations, Range says. “So we also wanted to understand these variations in their howling.” © 2012 American Association for the Advancement of Science.
JoNel Aleccia TODAY When doctors told Pete and Michelle Gallagher that they wanted to remove half of their 3-year-old son’s brain, the Attica, Ohio, parents were horrified. But a new study shows the extreme procedure may offer some kids their best shot at a normal life. “We panicked,” said Pete Gallagher, recalling their reaction seven years ago. The couple also knew that the dramatic surgery known as a hemispherectomy might be the only workable option to stop the severe seizures, more than a dozen a day, that were robbing Aiden of his ability to function – and to learn. “He had forgotten his alphabet. He had forgotten how to count. It was all slipping,” the father said. Today, Aiden is a healthy, red-haired fifth-grader who goes to regular school and loves to play baseball and basketball. He hasn’t had a seizure since the rare operation, making the boy a poster child for new research that finds the procedure offers real-world success for children suffering from devastating epilepsy. “The brain has an amazing capacity to work around the function that it has lost,” said Dr. Ajay Gupta, head of pediatric epilepsy at the Cleveland Clinic. In the first large-scale study to look at the everyday capabilities of kids who undergo hemispherectomy, Gupta and his colleagues reviewed 186 operations performed at their center between 1997 and 2009 and took a close look at 115 patients. They confirmed what doctors knew, but had little practical data to support: That removing the diseased hemisphere of a seizure-prone brain allows sufferers to learn and grow and, in some cases, lead normal lives.
By Scott Barry Kaufman So yea, you know how the left brain is really realistic, analytical, practical, organized, and logical, and the right brain is so darn creative, passionate, sensual, tasteful, colorful, vivid, and poetic? No. Just no. Stop it. Please. Thoughtful cognitive neuroscientists such as Rex Jung, Darya Zabelina, Andreas Fink, John Kounios, Mark Beeman, Kalina Christoff, Oshin Vartanian, Jeremy Gray, Hikaru Takeuchi and others are at the forefront of investigating what actually happens in the brain during the creative process. And their findings are overturning conventional notions surrounding the neuroscience of creativity. The latest findings from the real neuroscience of creativity suggest that the right brain/left brain distinction is not the right one when it comes to understanding how creativity is implemented in the brain. Creativity does not involve a single brain region or single side of the brain. Instead, the entire creative process – from the initial burst of inspiration to the final polished product – consists of many interacting cognitive processes and emotions. Depending on the stage of the creative process, and what you’re actually attempting to create, different brain regions are recruited to handle the task. Importantly, many of these brain regions work as a team to get the job done, and many recruit structures from both the left and right side of the brain. In recent years, evidence has accumulated suggesting that “cognition results from the dynamic interactions of distributed brain areas operating in large-scale networks.” © 2013 Scientific American
By GRETCHEN REYNOLDS The start this month of high school and college football seasons across the country renews concerns about the issue of repeated head impacts and how to manage or, preferably, avoid concussions. Unfortunately, the resources to deal with the problem remain limited. Newly released, state-of-the-art football helmets, for instance, may measure how much force each player’s head is absorbing and relay that data via telemetry to trainers on the sidelines, but at $1,500 or so per helmet, they are unattainable for most teams. Which is why a study published recently in The British Journal of Sports Medicine is so appealing. Eminently practical, it offers a means by which any team, no matter how small or cash-strapped, can assess the likelihood of one of its players having sustained an on-field concussion. It also celebrates a nifty, D.I.Y., MacGyver-ish sensibility rarely seen in our technology-obsessed times. The study’s authors began with the simple idea that, to manage sports-related concussions, “you need to be able to quickly and easily assess” whether a given player has actually sustained one, said Steven P. Broglio, director of the University of Michigan’s NeuroSport Research Laboratory and co-author of the study. Not every head impact results in a concussion. One means of assessing concussion status, Dr. Broglio continued, is to look at a player’s reaction time, since it is known to increase immediately after a concussion. A variety of scientifically validated tools exist to measure players’ reaction times, but most require a computer and sophisticated software, and are not practicable on the sidelines or in the budgets of many teams. Copyright 2013 The New York Times Company
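The excerpt above doesn't describe the cheap device itself, but the classic low-cost way to measure reaction time in the D.I.Y. spirit the article celebrates is a drop test: release a ruler or weighted stick between a player's fingers and convert how far it falls before being caught into a time, using free-fall kinematics (d = ½gt², so t = √(2d/g)). The sketch below shows only that standard physics, not the study's actual protocol.

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def reaction_time(drop_distance_m):
    """Convert the distance a dropped stick falls before being caught
    into a reaction time, from d = 0.5 * g * t**2."""
    return math.sqrt(2 * drop_distance_m / G)

# Catching the stick 20 cm down implies roughly a 0.2 s reaction time;
# a concussed player would catch it noticeably farther down.
print(round(reaction_time(0.20), 3))
```

Because distance grows with the square of time, even a modest post-impact slowdown in reaction time shows up as a visibly longer drop, which is what makes such a sideline test practical.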
by Sara Reardon It's a case of hear no object, see no object. Hearing the name of an object appears to influence whether or not we see it, suggesting that hearing and vision might be even more intertwined than previously thought. Studies of how the brain files away concepts suggest that words and images are tightly coupled. What is not clear, says Gary Lupyan of the University of Wisconsin in Madison, is whether language and vision work together to help you interpret what you're seeing, or whether words can actually change what you see. Lupyan and Emily Ward of Yale University used a technique called continuous flash suppression (CFS) on 20 volunteers to test whether a spoken prompt could make them detect an image that they were not consciously aware they were seeing. CFS works by displaying different images to the right and left eyes: one eye might be shown a simple shape or an animal, for example, while the other is shown visual "noise" in the form of bright, randomly flickering shapes. The noise monopolises the brain, leaving so little processing power for the other image that the person does not consciously register it, making it effectively invisible. Wheels of perception In a series of CFS experiments, the researchers asked volunteers whether or not they could see a specific object, such as a dog. Sometimes it was displayed, sometimes not. When it was not displayed or when the image was of another animal such as a zebra or kangaroo, the volunteers typically reported seeing nothing. But when a dog was displayed and the question mentioned a dog, the volunteers were significantly more likely to become aware of it. "If you hear a word, that greases the wheels of perception," says Lupyan: the visual system becomes primed for anything to do with dogs. © Copyright Reed Business Information Ltd.
By Michelle Roberts Health editor, BBC News online Brain scans may allow detection of dyslexia in pre-school children even before they start to read, say researchers. A US team found tell-tale signs on scans that have already been seen in adults with the condition. And these brain differences could be a cause rather than a consequence of dyslexia - something unknown until now - the Journal of Neuroscience reports. Scans could allow early diagnosis and intervention, experts hope. The part of the brain affected is called the arcuate fasciculus. Among the 40 school-entry children they studied, they found some had shrinkage of this brain region, which processes word sounds and language. They asked the same children to do several different types of pre-reading tests, such as trying out different sounds in words. Those children with a smaller arcuate fasciculus had lower scores. It is too early to say if the structural brain differences found in the study are a marker of dyslexia. The researchers plan to follow up groups of children as they progress through school to determine this. Lead researcher Prof John Gabrieli said: "We don't know yet how it plays out over time, and that's the big question." BBC © 2013
Roger Dobson Older male nightingales have perfected an art that would be the envy of men having a mid-life crisis: a trick that makes them more attractive to females than their younger male competitors. Their mastery of successful courtship is achieved with a dazzling array of up to 100 trills a second, far more than their younger competitors can manage, and more than any other investigated bird, according to new research. That ability, backed up by a sophisticated playlist of about 200 songs, means that they are probably seen as better mates by young trill-seeking females. Singing so many trills at peak frequency requires a lot of physical effort and, as a result, it has evolved as a sign of fitness, say the researchers. "Females could assess the age of the male singer by the trill rate, and mate preferably with older ones," says the zoologist Dr Valentin Amrhein, who led the study at the University of Basel, Switzerland. "This makes sense for the females because older males have more experience with defending their territory or with raising young, and therefore have a better reproductive performance." The research, being published in the Journal of Avian Biology, shows that older birds can come up with 100 trills a second, making them the fastest singers. They also performed about 200 different song types, but the researchers think it is the immediate impact of the trills that is attracting the females. It would take more than an hour for the male to go through his whole song list. "Since the performance of these sounds is very demanding, the rate at which they can be repeated is limited. Trying to sing rapidly increasing sounds in fast repetition is very hard for us humans as well," says Dr Amrhein. "Singing rapid broadband trills comes at a certain price for the male nightingale, so trilling is a good indicator for mate quality." © independent.co.uk
Jason Bruck Ever been at a party where you recognize everyone’s faces but can’t think of their names? That wouldn’t happen if you were a bottlenose dolphin (Tursiops truncatus). The marine mammals can remember each other’s signature contact whistles—calls that function as names—for more than 20 years, the longest social memory ever recorded for a nonhuman animal, according to a new study. “The ability to remember individuals is thought to be extremely important to the ‘social brain,’ ” says Janet Mann, a marine mammal biologist at Georgetown University in Washington, D.C., who was not involved in the research. Yet, she notes, no one has succeeded in designing a test for this talent in the great apes—our closest kin—let alone in dolphins. Dolphins use their signature whistles to stay in touch. Each has its own unique whistle, and they learn and can repeat the whistles of other dolphins. A dolphin will answer when another dolphin mimics its whistle—just as we reply when someone calls our name. The calls enable the marine mammals to communicate over long distances—which is necessary because they live in “fission-fusion” societies, meaning that dolphins in one group split off to join other groups and later return. By whistling, they’re able to find each other again. Scientists don’t know how long dolphins are separated in the wild, but they do know the animals can live almost 50 years. So how long do the dolphins remember the calls of their friends? To find out, Jason Bruck, a cognitive ethologist at the University of Chicago in Illinois, spent 5 years collecting 71 whistles from 43 dolphins at six captive facilities, including Brookfield Zoo near Chicago and Dolphin Quest in Bermuda. The six sites belong to a consortium that rotates the marine mammals for breeding and has decades-long records of which dolphins have lived together. © 2012 American Association for the Advancement of Science
By Meeri Kim Dizziness, vertigo and nausea are common symptoms of an inner-ear infection. But they can also be signs of a stroke. For doctors, especially those working in emergency rooms, quickly and accurately making the distinction is vital. But basic diagnostic tools, including the otoscope and simple eye-movement tests, are far from definitive. As a result, many doctors resort to a pricey imaging test such as a CT scan or an MRI. Nearly half of the 4 million people who visit U.S. emergency rooms each year with dizziness are given an MRI or CT scan, according to a study issued last month. Only about 3 percent of those 4 million people are actually having strokes. For the 25 percent of strokes that restrict blood flow to the back portions of the brain, CT scans are a poor diagnostic tool, according to the study’s leader, David Newman-Toker, an associate professor of neurology and otolaryngology at the Johns Hopkins University School of Medicine. “CT scans are so bad at detecting [these strokes] that they miss about 85 percent of them” in the first day after symptoms begin, he said. “That’s pretty close to useless.” Even MRIs miss almost 20 percent of strokes if the test is done within the first 24 hours. A new device offers a promising option for rooting out the cause of dizziness: eye-tracking goggles. © 1996-2013 The Washington Post
By Michele Solis Attention training might trump language practice in treating dyslexia, and video games might provide just that, according to a recent study in Current Biology. Researchers at the University of Padua in Italy found that 10 kids with dyslexia who played an action-filled video game for nine 80-minute sessions increased their reading speed, without introducing mistakes. These reading gains lasted at least two months and outpaced gains measured in 10 children with dyslexia who played a nonaction version of the same game, as well as trumping the expected improvement that naturally occurs in a year for a child with dyslexia. Though small, the study bolsters evidence that dyslexia stems in part from problems in focusing attention onto letters and words in an orderly way. Last year the same team reported that preschoolers who struggled to quickly and accurately shift their attention—which can be thought of as a spotlight—were likely to have reading difficulties three years later. Because action video games require players to constantly redirect their attention to different targets, neuroscientist Simone Gori and his colleagues thought the video games might fine-tune that spotlight so as to avoid jumbling letters on a page. The training honed visual attention skills and reading hand in hand, and the reading improvements even exceeded those obtained in children after traditional therapies for dyslexia, which focus on building language skills. Gori does not advocate abandoning the older methods but says that training visual attention could be a vital, overlooked component. He also notes that kids are prone to quit traditional dyslexia therapies, which he says can be demanding and even boring; not a problem in his video-game experiment. “Our difficulty was in getting the kids to stop playing,” Gori says. © 2013 Scientific American
By Julie Hecht AFTER A LONG DAY of being a dog, no dog in existence has ever curled up on a comfy couch to settle in with a good book. Dogs just don’t roll like that. But that shouldn’t imply that human words don’t or can’t have meaning for dogs. Chaser, a Border Collie from South Carolina, first entered the news in 2011 when a Behavioral Processes paper reported she had learned and retained the distinct names of over 1,000 objects. But that’s not all. When tested on the ability to associate a novel word with an unfamiliar item, she could do that, too. She also learned that different objects fell into different categories: certain things are general “toys,” while others are the more specific “Frisbees” and, of course, there are many, many exciting “balls.” She differentiates between object labels and action commands, interpreting “fetch sock” as two separate words, not as the single phrase “fetchsock.” Fast forward two years. Chaser and her owner and trainer Dr. John Pilley, an emeritus professor of psychology at Wofford College, appeared again in a scientific journal. This time, the study highlighted Chaser’s attention to the syntactical relationships between words, for example, differentiating “to ball take Frisbee” from “to Frisbee take ball.” I’ve been keeping an eye on Chaser, and I’ve been keeping an eye on Rico, Sofia, Bailey, Paddy and Betsy, all companion dogs whose way with human language has been reported in scientific journals. Most media reports tend to focus on outcomes: what these dogs can — or can’t — do with our words. But I think these reports are missing the point. Learning the names of over 1,000 words doesn’t just happen overnight. What does the behind-the-scenes learning and training look like? How did Chaser develop this intimate relationship with human language? © 2013 Scientific American
By Glen Tellis, Rickson C. Mesquita, and Arjun G. Yodh Terrence Murgallis, a 20-year-old undergraduate student in the Department of Speech-Language Pathology at Misericordia University, has stuttered all his life and approached us recently about conducting brain research on stuttering. His timing was perfect because our research group, in collaboration with a team led by Dr. Arjun Yodh in the Department of Physics and Astronomy at the University of Pennsylvania, had recently deployed two novel optical methods to compare blood flow and hemoglobin concentration differences in the brains of those who stutter with those who are fluent. These noninvasive methods employ diffusing near-infrared light and have been dubbed near-infrared spectroscopy (NIRS) for concentration dynamics, and diffuse correlation spectroscopy (DCS) for flow dynamics. The near-infrared light readily penetrates through intact skull to probe cortical regions of the brain. The low power light has no known side-effects and has been successfully utilized for a variety of clinical studies in infants, children, and adults. DCS measures fluctuations of scattered light due to moving targets in the tissue (mostly red blood cells). The technique measures relative changes in cerebral blood flow. NIRS uses the relative transmission of different colors of light to detect hemoglobin concentration changes in the interrogated tissues. Though there are numerous diagnostic tools available to study brain activity, including positron emission tomography (PET), magnetic resonance imaging (MRI), and magnetoencephalography (MEG), these methods are often invasive and/or expensive to administer. In the particular case of electroencephalography (EEG), its low spatial resolution is a significant limitation for investigations of verbal fluency. © 2013 Scientific American
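The NIRS measurement described above rests on a standard piece of arithmetic, the modified Beer-Lambert law: changes in optical density at two near-infrared wavelengths are inverted for changes in oxy- and deoxy-hemoglobin concentration, because the two hemoglobin species absorb the two colors differently. The sketch below shows only that textbook calculation; the extinction coefficients, path length, and optical-density values are illustrative placeholders, not the authors' calibrated instrument constants.

```python
import numpy as np

# Placeholder molar extinction coefficients.
# Rows: two wavelengths (e.g. ~690 nm and ~830 nm); columns: [HbO2, Hb].
extinction = np.array([
    [0.35, 2.10],
    [2.30, 0.80],
])
path_length = 3.0  # source-detector separation in cm (assumed)
dpf = 6.0          # differential path-length factor (assumed)

def hemoglobin_changes(delta_od):
    """Solve delta_OD = (extinction @ delta_C) * path * DPF for delta_C,
    the changes in [HbO2] and [Hb]."""
    return np.linalg.solve(extinction * path_length * dpf, delta_od)

delta_od = np.array([0.01, 0.03])  # measured optical-density changes (toy values)
d_hbo2, d_hb = hemoglobin_changes(delta_od)
```

Because the two unknown concentrations are recovered from two wavelengths, the system is exactly determined; instruments with more wavelengths solve the same relation in a least-squares sense.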
By Michelle Warwicker BBC Nature Individual wild wolves can be recognised by just their howls with 100% accuracy, a study has shown. The team from Nottingham Trent University, UK, developed a computer program to analyse the vocal signatures of eastern grey wolves. Wolves roam huge home ranges, making it difficult for conservationists to track them visually. But the technology could provide a way for experts to monitor individual wolves by sound alone. "Wolves howl a lot in the wild," said PhD student Holly Root-Gutteridge, who led the research. "Now we can be sure... exactly which wolf it is that's howling." The team's findings are published in the journal Bioacoustics. Wolves use their distinctive calls to protect territory from rivals and to call to other pack members. "They enjoy it as a group activity," said Ms Root-Gutteridge, "When you get a chorus howl going they all join in." The team's computer program is unique because it analyses both volume (or amplitude) and pitch (or frequency) of wolf howls, whereas previously scientists had only examined the animals' pitch. "Think of [pitch] as the note the wolf is singing," explained Ms Root-Gutteridge. "What we've added now is the amplitude - or volume - which is basically how loud it's singing at different times." "It's a bit like language: If you put the stress in different places you form a different sound." BBC © 2013
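The article says the team's program is distinctive because it uses the amplitude contour as well as the pitch contour of each howl. A minimal sketch of that general idea (invented for illustration, not the Nottingham Trent team's actual code) is to represent each howl as its normalized pitch and amplitude tracks concatenated into one feature vector, then match an unknown howl to the nearest labeled howl.

```python
import numpy as np

def howl_features(pitch_hz, amplitude):
    """Concatenate normalized pitch and amplitude contours, so matching
    reflects the shape of the howl rather than its absolute scale."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / (x.std() + 1e-9)
    return np.concatenate([norm(pitch_hz), norm(amplitude)])

def identify(unknown, labeled_howls):
    """Return the wolf whose stored howl is closest in Euclidean distance."""
    return min(
        labeled_howls,
        key=lambda wolf: np.linalg.norm(unknown - labeled_howls[wolf]),
    )

# Toy five-frame contours, invented for illustration.
library = {
    "wolf_a": howl_features([400, 420, 430, 410, 390], [0.5, 0.9, 1.0, 0.7, 0.4]),
    "wolf_b": howl_features([300, 300, 310, 320, 330], [0.2, 0.3, 0.9, 0.9, 0.8]),
}
query = howl_features([405, 418, 432, 412, 388], [0.5, 0.8, 1.0, 0.7, 0.4])
print(identify(query, library))  # prints "wolf_a"
```

Real howl recordings would first need the pitch and amplitude tracks extracted from audio and aligned in time; the point of the sketch is only that adding the amplitude contour gives the matcher a second, independent signature to discriminate on.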
By Rebecca Morelle Science reporter, BBC World Service Scientists have found further evidence that dolphins call each other by "name". Research has revealed that the marine mammals use a unique whistle to identify each other. A team from the University of St Andrews in Scotland found that when the animals hear their own call played back to them, they respond. The study is published in the Proceedings of the National Academy of Sciences. Dr Vincent Janik, from the university's Sea Mammal Research Unit, said: "(Dolphins) live in this three-dimensional environment, offshore without any kind of landmarks and they need to stay together as a group. "These animals live in an environment where they need a very efficient system to stay in touch." It had long been suspected that dolphins use distinctive whistles in much the same way that humans use names. Previous research found that these calls were used frequently, and dolphins in the same groups were able to learn and copy the unusual sounds. But this is the first time that the animals' response to being addressed by their "name" has been studied. To investigate, researchers recorded a group of wild bottlenose dolphins, capturing each animal's signature sound. BBC © 2013