Chapter 13. Memory, Learning, and Development
By BENEDICT CAREY

Jerome S. Bruner, whose theories about perception, child development and learning informed education policy for generations and helped launch the modern study of creative problem solving, known as the cognitive revolution, died on Sunday at his home in Manhattan. He was 100. His death was confirmed by his partner, Eleanor M. Fox.

Dr. Bruner was a researcher at Harvard in the 1940s when he became impatient with behaviorism, then a widely held theory, which viewed learning in terms of stimulus and response: the chime of a bell before mealtime and salivation, in Ivan Pavlov’s famous dog experiments. Dr. Bruner believed that behaviorism, rooted in animal experiments, ignored many dimensions of human mental experience. In one 1947 experiment, he found that children from low-income households perceived a coin to be larger than it actually was — their desires apparently shaping not only their thinking but also the physical dimensions of what they saw.

In subsequent work, he argued that the mind is not a passive learner — not a stimulus-response machine — but an active one, bringing a full complement of motives, instincts and intentions to shape comprehension, as well as perception. His writings — in particular the book “A Study of Thinking” (1956), written with Jacqueline J. Goodnow and George A. Austin — inspired a generation of psychologists and helped break the hold of behaviorism on the field.

To build a more complete theory, he and the experimentalist George A. Miller, a Harvard colleague, founded the Center for Cognitive Studies, which supported investigation into the inner workings of human thought. Much later, this shift in focus from behavior to information processing came to be known as the cognitive revolution. © 2016 The New York Times Company
Keyword: Development of the Brain
Link ID: 22300 - Posted: 06.09.2016
By Sandra G. Boodman Richard McGhee and his family believed the worst was behind them. McGhee, a retired case officer at the Defense Intelligence Agency who lives near Annapolis, had spent six months battling leukemia as part of a clinical trial at MD Anderson Cancer Center in Houston. The experimental chemotherapy regimen he was given had worked spectacularly, driving his blood cancer into a complete remission. But less than nine months after his treatment ended, McGhee abruptly fell apart. He became moody, confused and delusional — even childish — a jarring contrast with the even-keeled, highly competent person he had been. He developed tremors in his arms, had trouble walking and became incontinent. “I was really a mess,” he recalled. Doctors suspected he had developed a rapidly progressive and fatal dementia, possibly a particularly aggressive form of Alzheimer’s disease. If that was the case, his family was told, his life span would be measured in months. Luckily, the cause of McGhee’s precipitous decline proved to be much more treatable — and prosaic — than doctors initially feared. “It’s really a pleasure to see somebody get better so rapidly,” said Michael A. Williams, a professor of neurology and neurosurgery at the University of Washington School of Medicine in Seattle. Until recently, Williams was affiliated with Baltimore’s Sinai Hospital, where he treated McGhee in 2010. “This was a diagnosis waiting to be found.”
By Clare Wilson

We’ve all been there: after a tough mental slog your brain feels as knackered as your body does after a hard workout. Now we may have pinpointed one of the brain regions worn out by a mentally taxing day – and it seems to also affect our willpower, so perhaps we should avoid making important decisions when mentally fatigued. Several previous studies have suggested that our willpower is a finite resource, and if it gets depleted in one way – like finishing a difficult task – we find it harder to make other good choices, like resisting a slice of cake. In a small trial, Bastien Blain at INSERM in Paris and his colleagues asked volunteers to spend six hours doing tricky memory tasks, while periodically choosing either a small sum of cash now, or a larger amount after a delay. As the day progressed, people became more likely to act on impulse and to pick an immediate reward. This didn’t happen in the groups that spent time doing easier memory tasks, reading or gaming. For those engaged in difficult work, fMRI brain scans showed a decrease in activity in the middle frontal gyrus, a brain area involved in decision-making. “That suggests this region is becoming less excitable, which could be impairing people’s ability to resist temptation,” says Blain. It’s involved in decisions like ‘Shall I have a beer with my friends tonight, or shall I save money to buy a bike next month,’ he says. Previous research has shown that children with more willpower in a similar type of choice test involving marshmallows end up as more successful adults, by some measures. “Better impulse control predicts your eventual wealth and health,” says Blain. The idea that willpower can be depleted is contentious, as some researchers have failed to replicate others’ findings. © Copyright Reed Business Information Ltd.
By Jordana Cepelewicz Colors exist on a seamless spectrum, yet we assign hues to discrete categories such as “red” and “orange.” Past studies have found that a person's native language can influence the way colors are categorized and even perceived. In Russian, for example, light blue and dark blue are named as different colors, and studies find that Russian speakers can more readily distinguish between the shades. Yet scientists have wondered about the extent of such verbal influence. Are color categories purely a construct of language, or is there a physiological basis for the distinction between green and blue? A new study in infants suggests that even before acquiring language, our brain already sorts colors into the familiar groups. A team of researchers in Japan tracked neural activity in 12 prelinguistic infants as they looked at a series of geometric figures. When the shapes' color switched between green and blue, activity increased in the occipitotemporal region of the brain, an area known to process visual stimuli. When the color changed within a category, such as between two shades of green, brain activity remained steady. The team found the same pattern in six adult participants. The infants used both brain hemispheres to process color changes. Language areas are usually in the left hemisphere, so the finding provides further evidence that color categorization is not entirely dependent on language. At some point as a child grows, language must start playing a role—just ask a Russian whether a cloudless sky is the same color as the deep sea. The researchers hope to study that developmental process next. “Our results imply that the categorical color distinctions arise before the development of linguistic abilities,” says Jiale Yang, a psychologist at Chuo University and lead author of the study, published in February in PNAS. “But maybe they are later shaped by language learning.” © 2016 Scientific American
James Gorman Fruit flies are far from human, but not as far as you might think. They do many of the same things people do, like seek food, fight and woo mates. And their brains, although tiny and not set up like those of humans or other mammals, do many of the same things that all brains do — make and use memories, integrate information from the senses, and allow the creature to navigate both the physical and the social world. Consequently, scientists who study how all brains work like to use flies because it’s easier for them to do invasive research that isn’t allowed on humans. The technology of neuroscience is sophisticated enough to genetically engineer fly brains, and to then use fluorescent chemicals to indicate which neurons are active. But there are some remaining problems, like how to watch the brain of a fly that is moving around freely. It is one thing to record what is going on in a fly’s brain if the insect’s movement is restricted, but quite another to try to catch the light flash of brain cells from a fly that is walking around. Takeo Katsuki, an assistant project scientist at the Kavli Institute at the University of California, San Diego, is interested in courtship. And, he said, fruit flies simply won’t engage in courtship when they are tethered. So he and Dhruv Grover, another assistant project scientist, and Ralph J. Greenspan, in whose lab they both work, set out to develop a method for recording the brain activity of a walking fly. One challenge was to track the fly as it moved. They solved that problem with three cameras to follow the fly and a laser to activate the fluorescent chemicals in the brain. © 2016 The New York Times Company
By Julia Shaw A cure for almost every memory ailment seems to be just around the corner. Alzheimer’s affected brains can have their memories restored, we can create hippocampal implants to give us better memory, and we can effectively implant false memories with light. Except that we can’t really do any of these things, at least not in humans. We sometimes forget that developments in memory science need to go through a series of stages in order to come to fruition, each of which requires tremendous knowledge and skill. From coming up with a new idea, to designing an appropriate methodology, obtaining ethical approval, getting research funding, recruiting research assistants and test subjects, conducting the experiment(s), completing complex statistical analysis for which computer code is often required, writing a manuscript, surviving the peer review process, and finally effectively distributing the findings, each part of the process is incredibly complex and takes a long time. On top of it all, this process, which can take decades to complete, typically results in incremental rather than monumental change. Rather than creating massive leaps in technology, in the vast majority of instances, studies add a teeny tiny bit of insight to the greater body of knowledge. These incremental achievements in science are often blown out of proportion by the media. As John Oliver recently said “…[Science] deserves better than to be twisted out of proportion and be turned into morning show gossip.” Moving from science fiction to science fact is harder than the media makes it seem. © 2016 Scientific American,
By Andy Coghlan

People once dependent on wheelchairs after having a stroke are walking again after receiving injections of stem cells into their brains. Participants in the small trial also saw improvements in their speech and arm movements. “One 71-year-old woman could only move her left thumb at the start of the trial,” says Gary Steinberg, a neurosurgeon at Stanford University who performed the procedure on some of the 18 participants. “She can now walk and lift her arm above her head.” Run by SanBio of Mountain View, California, this trial is the second to test whether stem cell injections into patients’ brains can help ease disabilities resulting from stroke. Patients in the first, carried out by UK company ReNeuron, also showed measurable reductions in disability a year after receiving their injections and beyond. All patients in the latest trial showed improvements. Their scores on a 100-point scale for evaluating mobility – with 100 being completely mobile – improved on average by 11.4 points, a margin considered to be clinically meaningful for patients. “The most dramatic improvements were in strength, coordination, ability to walk, the ability to use hands and the ability to communicate, especially in those whose speech had been damaged by the stroke,” says Steinberg. In both trials, patients had had their strokes between six months and three years earlier, and their recovery had plateaued before the injections. © Copyright Reed Business Information Ltd
By NICHOLAS ST. FLEUR

Nine scientists have won this year’s Kavli Prizes for work that detected the echoes of colliding black holes, revealed how adaptable the nervous system is, and created a technique for sculpting structures on the nanoscale. The announcement was made on Thursday by the Norwegian Academy of Science and Letters in Oslo, and was live-streamed to a watching party in New York as a part of the World Science Festival. The three prizes, each worth $1 million and split among the recipients, are awarded in astrophysics, nanoscience and neuroscience every two years. They are named for Fred Kavli, a Norwegian-American inventor, businessman and philanthropist who started the awards in 2008 and died in 2013. Eve Marder of Brandeis University, Michael M. Merzenich of the University of California, San Francisco, and Carla J. Shatz of Stanford won the neuroscience prize. Dr. Marder illuminated the flexibility and stability of the nervous system through her work studying crabs and lobsters and the neurons that control their digestion. Dr. Merzenich was a pioneer in the study of neural plasticity, demonstrating that parts of the adult brain, like those of children, can be reorganized by experience. Dr. Shatz showed that “neurons that fire together wire together,” by investigating how patterns of activity sculpt the synapses in the developing brain. The winners will receive their prizes in September at a ceremony in Oslo. © 2016 The New York Times Company
Keyword: Development of the Brain
Link ID: 22279 - Posted: 06.04.2016
By Simon Makin Other species are capable of displaying dazzling feats of intelligence. Crows can solve multistep problems. Apes display numerical skills and empathy. Yet, neither species has the capacity to conduct scientific investigations into other species' cognitive abilities. This type of behavior provides solid evidence that humans are by far the smartest species on the planet. Besides just elevated IQs, however, humans set themselves apart in another way: Their offspring are among the most helpless of any species. A new study, published recently in Proceedings of the National Academy of Sciences (PNAS), draws a link between human smarts and an infant’s dependency, suggesting one thing led to the other in a spiraling evolutionary feedback loop. The study, from psychologists Celeste Kidd and Steven Piantadosi at the University of Rochester, represents a new theory about how humans came to possess such extraordinary smarts. Like a lot of evolutionary theories, this one can be couched in the form of a story—and like a lot of evolutionary stories, this one is contested by some scientists. Kidd and Piantadosi note that, according to a previous theory, early humans faced selection pressures for both large brains and the capacity to walk upright as they moved from forest to grassland. Larger brains require a wider pelvis to give birth whereas being bipedal limits the size of the pelvis. These opposing pressures—biological anthropologists call them the “obstetric dilemma”—could have led to giving birth earlier when infants’ skulls were still small. Thus, newborns arrive more immature and helpless than those of most other species. Kidd and Piantadosi propose that, as a consequence, the cognitive demands of child care increased and created evolutionary pressure to develop higher intelligence. © 2016 Scientific American
By David Z. Hambrick If you’re a true dog lover, you take it as one of life’s simple truths that all dogs are good, and you have no patience for scientific debate over whether dogs really love people. Of course they do. What else could explain the fact that your dog runs wildly in circles when you get home from work, and, as your neighbors report, howls inconsolably for hours on end when you leave? What else could explain the fact that your dog insists on sleeping in your bed, under the covers—in between you and your partner? At the same time, there’s no denying that some dogs are smarter than others. Not all dogs can, like a border collie mix named Jumpy, do a back flip, ride a skateboard, and weave through pylons on his front legs. A study published in the journal Intelligence by British psychologists Rosalind Arden and Mark Adams confirms as much. Consistent with over a century of research on human intelligence, Arden and Adams found that a dog that excels in one test of cognitive ability will likely excel in other tests of cognitive ability. In more technical terms, the study reveals that there is a general factor of intelligence in dogs—a canine “g” factor. For their study, Arden and Adams devised a battery of canine cognitive ability tests. All of the tests revolved around—you guessed it—getting a treat. In the detour test, the dog’s objective was to navigate around barriers arranged in different configurations to get to a treat. In the point-following test, a researcher pointed to one of two inverted beakers concealing a treat, and recorded whether the dog went to that beaker or the other one. Finally, the quantity discrimination test required the dog to choose between a small treat (a glob of peanut butter) and a larger one (the “correct” answer). Arden and Adams administered the battery to 68 border collies from Wales; all had been bred and trained to do herding work on a farm, and thus had similar backgrounds. © 2016 Scientific American
By Gretchen Reynolds A weekly routine of yoga and meditation may strengthen thinking skills and help to stave off aging-related mental decline, according to a new study of older adults with early signs of memory problems. Most of us past the age of 40 are aware that our minds and, in particular, memories begin to sputter as the years pass. Familiar names and words no longer spring readily to mind, and car keys acquire the power to teleport into jacket pockets where we could not possibly have left them. Some weakening in mental function appears to be inevitable as we age. But emerging science suggests that we might be able to slow and mitigate the decline by how we live and, in particular, whether and how we move our bodies. Past studies have found that people who run, weight train, dance, practice tai chi, or regularly garden have a lower risk of developing dementia than people who are not physically active at all. There also is growing evidence that combining physical activity with meditation might intensify the benefits of both pursuits. In an interesting study that I wrote about recently, for example, people with depression who meditated before they went for a run showed greater improvements in their mood than people who did either of those activities alone. But many people do not have the physical capacity or taste for running or other similarly vigorous activities. So for the new study, which was published in April in the Journal of Alzheimer’s Disease, researchers at the University of California, Los Angeles, and other institutions decided to test whether yoga, a relatively mild, meditative activity, could alter people’s brains and fortify their ability to think. © 2016 The New York Times Company
Keyword: Learning & Memory
Link ID: 22270 - Posted: 06.01.2016
By Gary Stix Scientists will never find a single gene for depression—nor two, nor 20. But among the 20,000 human genes and the hundreds of thousands of proteins and molecules that switch on those genes or regulate their activity in some way, there are clues that point to the roots of depression. Tools to identify biological pathways that are instrumental in either inducing depression or protecting against it have recently debuted—and hold the promise of providing leads for new drug therapies for psychiatric and neurological diseases. A recent paper in the journal Neuron illustrates both the dazzling complexity of this approach and the ability of these techniques to pinpoint key genes that may play a role in governing depression. Scientific American talked with the senior author on the paper—neuroscientist Eric Nestler from the Icahn School of Medicine at Mt. Sinai in New York. Nestler spoke about the potential of this research to break the logjam in pharmaceutical research that has impeded development of drugs to treat brain disorders. Scientific American: The first years in the war on cancer met with a tremendous amount of frustration. Things look like they're improving somewhat now for cancer. Do you anticipate a similar trajectory may occur in neuroscience for psychiatric disorders? Eric Nestler: I do. I just think it will take longer. I was in medical school 35 years ago when the idea that identifying a person's specific pathophysiology was put forward as a means of directing treatment of cancer. We're now three decades later finally seeing the day when that’s happening. I definitely think the same will occur for major brain disorders. The brain is just more complicated and the disorders are more complicated so it will take longer. © 2016 Scientific American
By Anil Ananthaswamy and Alice Klein

Our brain’s defence against invading microbes could cause Alzheimer’s disease – which suggests that vaccination could prevent the condition. Alzheimer’s disease has long been linked to the accumulation of sticky plaques of beta-amyloid proteins in the brain, but the function of plaques has remained unclear. “Does it play a role in the brain, or is it just garbage that accumulates?” asks Rudolph Tanzi of Harvard Medical School. Now he has shown that these plaques could be defences for trapping invading pathogens. Working with Robert Moir at the Massachusetts General Hospital in Boston, Tanzi’s team has shown that beta-amyloid can act as an anti-microbial compound, and may form part of our immune system. To test whether beta-amyloid defends us against microbes that manage to get into the brain, the team injected bacteria into the brains of mice that had been bred to develop plaques like humans do. Plaques formed straight away. “When you look in the plaques, each one had a single bacterium in it,” says Tanzi. “A single bacterium can induce an entire plaque overnight.”

Double-edged sword

This suggests that infections could be triggering the formation of plaques. These sticky plaques may trap and kill bacteria, viruses or other pathogens, but if they aren’t cleared away fast enough, they may lead to inflammation and tangles of another protein, called tau, causing neurons to die and driving the progression towards Alzheimer’s disease. © Copyright Reed Business Information Ltd.
By Robert Plomin

Scientists have investigated this question for more than a century, and the answer is clear: the differences between people on intelligence tests are substantially the result of genetic differences. But let's unpack that sentence. We are talking about average differences among people and not about individuals. Any one person's intelligence might be blown off course from its genetic potential by, for example, an illness in childhood. By genetic, we mean differences passed from one generation to the next via DNA. But we all share 99.5 percent of our three billion DNA base pairs, so only 15 million DNA differences separate us genetically. And we should note that intelligence tests include diverse examinations of cognitive ability and skills learned in school. Intelligence, more appropriately called general cognitive ability, reflects someone's performance across a broad range of varying tests. Genes make a substantial difference, but they are not the whole story. They account for about half of all differences in intelligence among people, so half is not caused by genetic differences, which provides strong support for the importance of environmental factors. This estimate of 50 percent reflects the results of twin, adoption and DNA studies. From them, we know, for example, that later in life, children adopted away from their biological parents at birth are just as similar to their biological parents as are children reared by their biological parents. Similarly, we know that adoptive parents and their adopted children do not typically resemble one another in intelligence. © 2016 Scientific American
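As a back-of-the-envelope check, the 15 million figure quoted above follows directly from the other two numbers in the passage (99.5 percent shared, out of three billion DNA base pairs); a minimal sketch:

```python
# Sanity check of the arithmetic in the passage: if any two people are
# identical at 99.5% of a 3-billion-base-pair genome, the differing
# base pairs are the remaining 0.5%.
GENOME_SIZE = 3_000_000_000   # base pairs, as stated in the passage
SHARED_FRACTION = 0.995       # fraction identical between two people

differing = GENOME_SIZE * (1 - SHARED_FRACTION)
print(f"{differing:,.0f} differing base pairs")  # 15,000,000 differing base pairs
```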
By Roland Pease, BBC Radio Science Unit

Researchers have invented a DNA "tape recorder" that can trace the family history of every cell in an organism. The technique is being hailed as a breakthrough in understanding how the trillions of complex cells in a body are descended from a single egg. "It has the potential to provide profound insights into how normal, diseased or damaged tissues are constructed and maintained," one UK biologist told the BBC. The work appears in the journal Science. The human body has around 40 trillion cells, each with a highly specialised function. Yet each can trace its history back to the same starting point - a fertilised egg. Developmental biology is the business of unravelling how the genetic code unfolds at each cycle of cell division, how the body plan develops, and how tissues become specialised. But much of what it has revealed has depended on inference rather than a complete cell-by-cell history. "I actually started working on this problem as a graduate student in 2000," confessed Jay Shendure, lead researcher on the new scientific paper. "Could we find a way to record these relationships between cells in some compact form we could later read out in adult organisms?" The project failed then because there was no mechanism to record events in a cell's history. That changed with recent developments in so-called CRISPR gene editing, a technique that allows researchers to make much more precise alterations to the DNA in living organisms. The molecular tape recorder developed by Prof Shendure's team at the University of Washington in Seattle, US, is a length of DNA inserted into the genome that contains a series of edit points which can be changed throughout an organism's life. © 2016 BBC.
By BENEDICT CAREY Suzanne Corkin, whose painstaking work with a famous amnesiac known as H.M. helped clarify the biology of memory and its disorders, died on Tuesday in Danvers, Mass. She was 79. Her daughter, Jocelyn Corkin, said the cause was liver cancer. Dr. Corkin met the man who would become a lifelong subject and collaborator in 1964, when she was a graduate student in Montreal at the McGill University laboratory of the neuroscientist Brenda Milner. Henry Molaison — known in published reports as H.M., to protect his privacy — was a modest, middle-aged former motor repairman who had lost the ability to form new memories after having two slivers of his brain removed to treat severe seizures when he was 27. In a series of experiments, Dr. Milner had shown that a part of the brain called the hippocampus was critical to the consolidation of long-term memories. Most scientists had previously thought that memory was not dependent on any one cortical area. Mr. Molaison lived in Hartford, and Dr. Milner had to take the train down to Boston and drive from there to Connecticut to see him. It was a long trip, and transporting him to Montreal proved to be so complicated, largely because of his condition, that Dr. Milner did it just once. Yet rigorous study of H.M., she knew, would require proximity and a devoted facility — with hospital beds — to accommodate extended experiments. The psychology department at the Massachusetts Institute of Technology offered both, and with her mentor’s help, Dr. Corkin landed a position there. Thus began a decades-long collaboration between Dr. Corkin and Mr. Molaison that would extend the work of Dr. Milner, focus intense interest on the hippocampus, and make H.M. the most famous patient in the history of modern brain science. © 2016 The New York Times Company
Keyword: Learning & Memory
Link ID: 22258 - Posted: 05.28.2016
By Jordana Cepelewicz General consensus among Alzheimer’s researchers has it that the disease’s main culprit, a protein called amyloid beta, is an unfortunate waste product that is not known to play any useful role in the body—and one that can have devastating consequences. When not properly cleared from the brain it builds up into plaques that destroy synapses, the junctions between nerve cells, resulting in cognitive decline and memory loss. The protein has thus become a major drug target in the search for a cure to Alzheimer’s. Now a team of researchers at Harvard Medical School and Massachusetts General Hospital are proposing a very different story. In a study published this week in Science Translational Medicine, neurologists Rudolph Tanzi and Robert Moir report evidence that amyloid beta serves a crucial purpose: protecting the brain from invading microbes. “The original idea goes back to 2010 or so when Rob had a few too many Coronas,” Tanzi jokes. Moir had come across surprising similarities between amyloid beta and LL37, a protein that acts as a foot soldier in the brain’s innate immune system, killing potentially harmful bugs and alerting other cells to their presence. “These types of proteins, although small, are very sophisticated in what they do,” Moir says. “And they’re very ancient, going back to the dawn of multicellular life.” © 2016 Scientific American,
By GINA KOLATA

Could it be that Alzheimer’s disease stems from the toxic remnants of the brain’s attempt to fight off infection? Provocative new research by a team of investigators at Harvard leads to this startling hypothesis, which could explain the origins of plaque, the mysterious hard little balls that pockmark the brains of people with Alzheimer’s. It is still early days, but Alzheimer’s experts not associated with the work are captivated by the idea that infections, including ones that are too mild to elicit symptoms, may produce a fierce reaction that leaves debris in the brain, causing Alzheimer’s. The idea is surprising, but it makes sense, and the Harvard group’s data, published Wednesday in the journal Science Translational Medicine, supports it. If it holds up, the hypothesis has major implications for preventing and treating this degenerative brain disease. The Harvard researchers report a scenario seemingly out of science fiction. A virus, fungus or bacterium gets into the brain, passing through a membrane — the blood-brain barrier — that becomes leaky as people age. The brain’s defense system rushes in to stop the invader by making a sticky cage out of proteins, called beta amyloid. The microbe, like a fly in a spider web, becomes trapped in the cage and dies. What is left behind is the cage — a plaque that is the hallmark of Alzheimer’s. So far, the group has confirmed this hypothesis in neurons growing in petri dishes as well as in yeast, roundworms, fruit flies and mice. There is much more work to be done to determine if a similar sequence happens in humans, but plans — and funding — are in place to start those studies, involving a multicenter project that will examine human brains. “It’s interesting and provocative,” said Dr. Michael W. Weiner, a radiology professor at the University of California, San Francisco, and a principal investigator of the Alzheimer’s Disease Neuroimaging Initiative, a large national effort to track the progression of the disease and look for biomarkers like blood proteins and brain imaging to signal the disease’s presence. © 2016 The New York Times Company
Sara Reardon

Children from impoverished families are more prone to mental illness, and alterations in DNA structure could be to blame, according to a study published on 24 May in Molecular Psychiatry. Poverty brings with it a number of different stressors, such as poor nutrition, increased prevalence of smoking and the general struggle of trying to get by. All of these can affect a child’s development, particularly in the brain, where the structure of areas involved in response to stress and decision-making has been linked to low socioeconomic status. Poor children are more prone to mental illnesses such as depression than their peers from wealthier families, but they are also more likely to have cognitive problems. Some of these differences are clearly visible in the brain structure and seem to appear at birth, which suggests that prenatal exposure to these stressors can be involved. But neurodevelopment does not stop at birth. Neuroscientist Ahmad Hariri of Duke University in Durham, North Carolina, suspected that continual exposure to stressors might affect older children as well. He decided to test this idea by studying chemical tags known as methyl groups, which alter DNA structure to regulate how genes are expressed. There is some evidence that methylation patterns can be passed down through generations, but they are also altered by environmental factors, such as smoking. © 2016 Nature Publishing Group
by Bruce Bower

For a landmark 1977 paper, psychologist Andrew Meltzoff stuck his tongue out at 2- to 3-week-old babies. Someone had to do it. After watching Meltzoff razz them for 15 seconds, babies often stuck out their own tongues within the next 2½ minutes. Newborns also tended to respond in kind when the young researcher opened his mouth wide, pushed out his lips like a duck and opened and closed the fingers of one hand. Meltzoff, now at the University of Washington in Seattle, and a colleague were the first to report that babies copy adults’ simple physical deeds within weeks of birth. Until then, most scientists assumed that imitation began at around 9 months of age. Newborns don’t care that imitation is the sincerest form of flattery. For them, it may be a key to interacting with (and figuring out) those large, smiley people who come to be known as mommy and daddy. And that’s job number one for tykes hoping to learn how to talk and hang out with a circle of friends. Meltzoff suspected that babies enter the world able to compare their own movements — even those they can feel but not see, such as a projecting tongue — to corresponding adult actions. Meltzoff’s report has inspired dozens of papers on infant imitation. Some have supported his results, some haven’t. A new report, published May 5 in Current Biology, falls in the latter group. The study of 106 Australian babies tracked from 1 to 9 weeks of age concludes that infants don’t imitate anyone. © Society for Science & the Public 2000 - 2016
Keyword: Development of the Brain
Link ID: 22246 - Posted: 05.25.2016