Most Recent Links
By Carl Zimmer

An unassuming single-celled organism called Toxoplasma gondii is one of the most successful parasites on Earth, infecting an estimated 11 percent of Americans and perhaps half of all people worldwide. It’s just as prevalent in many other species of mammals and birds. In a recent study in Ohio, scientists found the parasite in three-quarters of the white-tailed deer they studied.

One reason for Toxoplasma’s success is its ability to manipulate its hosts. The parasite can influence their behavior, so much so that hosts can put themselves at risk of death. Scientists first discovered this strange mind control in the 1990s, but it has been hard to figure out how the parasite manages it. Now a new study suggests that Toxoplasma can turn its host’s genes on and off — and it’s possible other parasites use this strategy, too.

Toxoplasma manipulates its hosts to complete its life cycle. Although it can infect any mammal or bird, it can reproduce only inside a cat. The parasites produce cysts that get passed out of the cat with its feces; once in the soil, the cysts infect new hosts. Toxoplasma returns to cats via their prey. But a host like a rat has evolved to avoid cats as much as possible, taking evasive action from the very moment it smells feline odor. Experiments on rats and mice have shown that Toxoplasma alters their response to cat smells. Many infected rodents lose their natural fear of the scent. Some even seem to be attracted to it.

Manipulating the behavior of a host is a fairly common strategy among parasites, but it’s hard to fathom how they manage it. A rat’s response to cat odor, for example, emerges from complex networks of neurons that detect an odor, figure out its source and decide on the right response in a given moment.

© 2014 The New York Times Company
By JAMIE EDGIN and FABIAN FERNANDEZ

LAST week the biologist Richard Dawkins sparked controversy when, in response to a woman’s hypothetical question about whether to carry to term a child with Down syndrome, he wrote on Twitter: “Abort it and try again. It would be immoral to bring it into the world if you have the choice.” In further statements, Mr. Dawkins suggested that his view was rooted in the moral principle of reducing overall suffering whenever possible — in this case, that of individuals born with Down syndrome and their families.

But Mr. Dawkins’s argument is flawed. Not because his moral reasoning is wrong, necessarily (that is a question for another day), but because his understanding of the facts is mistaken. Recent research indicates that individuals with Down syndrome can experience more happiness and potential for success than Mr. Dawkins seems to appreciate.

There are, of course, many challenges facing families caring for children with Down syndrome, including a high likelihood that their children will face surgery in infancy and Alzheimer’s disease in adulthood. But at the same time, studies have suggested that families of these children show levels of well-being that are often greater than those of families with children with other developmental disabilities, and sometimes equivalent to those of families with nondisabled children. These effects are prevalent enough to have been dubbed the “Down syndrome advantage.”

In 2010, researchers reported that parents of preschoolers with Down syndrome experienced lower levels of stress than parents of preschoolers with autism. In 2007, researchers found that the divorce rate in families with a child with Down syndrome was lower on average than that in families with a child with other congenital abnormalities and in those with a nondisabled child.

© 2014 The New York Times Company
Memory can be boosted by using a magnetic field to stimulate part of the brain, a study has shown. The effect lasts at least 24 hours after the stimulation is given, improving the ability of volunteers to remember words linked to photos of faces. Scientists believe the discovery could lead to new treatments for loss of memory function caused by ageing, strokes, head injuries and early Alzheimer's disease. Dr Joel Voss, from Northwestern University in Chicago, said: "We show for the first time that you can specifically change memory functions of the brain in adults without surgery or drugs, which have not proven effective. "This non-invasive stimulation improves the ability to learn new things. It has tremendous potential for treating memory disorders." The scientists focused on associative memory, the ability to learn and remember relationships between unrelated items. An example of associative memory would be linking someone to a particular restaurant where you both once dined. It involves a network of different brain regions working in concert with a key memory structure called the hippocampus, which has been compared to an "orchestra conductor" directing brain activity. Stimulating the hippocampus caused the "musicians" – the brain regions – to "play" more in time, thereby tightening up their performance. A total of 16 volunteers aged 21-40 took part in the study, agreeing to undergo 20 minutes of transcranial magnetic stimulation (TMS) every day for five days. © 2014 Guardian News and Media Limited
Keyword: Learning & Memory
Link ID: 20015 - Posted: 08.30.2014
by Michael Slezak

It's odourless, colourless, tasteless and mostly non-reactive – but it may help you forget. Xenon gas has been shown to erase fearful memories in mice, raising the possibility that it could be used to treat post-traumatic stress disorder (PTSD) if the results are replicated in a human trial next year.

The method exploits a neurological process known as "reconsolidation". When memories are recalled, they seem to get re-encoded, almost like a new memory. When this process is taking place, the memories become malleable and can be subtly altered. This new research suggests that at least in mice, the reconsolidation process might be partially blocked by xenon, essentially erasing fearful memories. Among other things, xenon is used as an anaesthetic.

Frozen in fear

Edward Meloni and his colleagues at Harvard Medical School in Boston trained mice to be afraid of a sound by placing them in a cage and giving them an electric shock after the sound was played. Thereafter, if the mice heard the noise, they would become frightened and freeze. Later, the team played the sound and then gave the mice either a low dose of xenon gas for an hour or just exposed them to normal air. Mice that were exposed to xenon froze for less time in response to the sound than the other mice.

© Copyright Reed Business Information Ltd.
Keyword: Learning & Memory
Link ID: 20014 - Posted: 08.30.2014
By GARY GREENBERG

Joel Gold first observed the Truman Show delusion — in which people believe they are the involuntary subjects of a reality television show whose producers are scripting the vicissitudes of their lives — on Halloween night 2003 at Bellevue Hospital, where he was the chief attending psychiatrist. “Suspicious Minds,” which he wrote with his brother, Ian, an associate professor of philosophy and psychology at McGill University, is an attempt to use this delusion, which has been observed by many clinicians, to pose questions that have gone out of fashion in psychiatry over the last half-century: Why does a mentally ill person have the delusions he or she has? And, following the lead of the medical historian Roy Porter, who once wrote that “every age gets the lunatics it deserves,” what can we learn about ourselves and our times from examining the content of madness?

The Golds’ answer is a dual broadside: against a psychiatric profession that has become infatuated with neuroscience as part of its longstanding attempt to establish itself as “real medicine,” and against a culture that has become too networked for its own good. Current psychiatric practice is to treat delusions as the random noise generated by a malfunctioning (and mindless) brain — a strategy that would be more convincing if doctors had a better idea of how the brain produced madness and how to cure it. According to the Golds, ignoring the content of delusions like T.S.D. can only make mentally ill people feel more misunderstood, even as it distracts the rest of us from the true significance of the delusion: that we live in a society that has put us all under surveillance. T.S.D. sufferers may be paranoid, but that does not mean they are wrong to think the whole world is watching.

This is not to say they aren’t crazy. Mental illness may be “just a frayed, weakened version of mental health,” but what is in tatters for T.S.D. patients is something crucial to negotiating social life, and that, according to the Golds, is the primary purpose toward which our big brains have evolved: the ability to read other people’s intentions or, as cognitive scientists put it, to have a theory of mind. This capacity is double-edged. “The better you are at ToM,” they write, “the greater your capacity for friendship.”

© 2014 The New York Times Company
Link ID: 20013 - Posted: 08.30.2014
By Virginia Morell A dog’s bark may sound like nothing but noise, but it encodes important information. In 2005, scientists showed that people can tell whether a dog is lonely, happy, or aggressive just by listening to his bark. Now, the same group has shown that dogs themselves distinguish between the barks of pooches they’re familiar with and the barks of strangers and respond differently to each. The team tested pet dogs’ reactions to barks by playing back recorded barks of a familiar and unfamiliar dog. The recordings were made in two different settings: when the pooch was alone, and when he was barking at a stranger at his home’s fence. When the test dogs heard a strange dog barking, they stayed closer to and for a longer period of time at their home’s gate than when they heard the bark of a familiar dog. But when they heard an unknown and lonely dog barking, they stayed close to their house and away from the gate, the team reports this month in Applied Animal Behaviour Science. They also moved closer toward their house when they heard a familiar dog’s barks, and they barked more often in response to a strange dog barking. Dogs, the scientists conclude from this first study of pet dogs barking in their natural environment (their owners’ homes), do indeed pay attention to and glean detailed information from their fellows’ barks. © 2014 American Association for the Advancement of Science
One of the best things about being a neuroscientist used to be the aura of mystery around it. It was once so mysterious that some people didn’t even know it was a thing. When I first went to university and people asked what I studied, they thought I was saying I was a “Euroscientist”, which is presumably someone who studies the science of Europe. I’d get weird questions such as “what do you think of Belgium?” and I’d have to admit that, in all honesty, I never think of Belgium. That’s how mysterious neuroscience was, once. Of course, you could say this confusion was due to my dense Welsh accent, or the fact that I only had the confidence to talk to strangers after consuming a fair amount of alcohol, but I prefer to go with the mystery.

It’s not like that any more. Neuroscience is “mainstream” now, to the point where the press coverage of it can be studied extensively. When there’s such a thing as Neuromarketing (well, there isn’t actually such a thing, but there’s a whole industry that would claim otherwise), it’s impossible to maintain that neuroscience is “cool” or “edgy”. It’s a bad time for us neurohipsters (which are the same as regular hipsters, except the designer beards are on the frontal lobes rather than the jaw-line).

One way that we professional neuroscientists could maintain our superiority was by correcting misconceptions about the brain, but lately even that avenue looks to be closing to us. The recent film Lucy is based on the most classic brain misconception: that we only use 10% of our brain. But it’s had a considerable amount of flak for this already, suggesting that many people are wise to this myth. We also saw the recent release of Susan Greenfield’s new book Mind Change, all about how technology is changing (damaging?) our brains. This is a worryingly evidence-free but very common claim by Greenfield. Depressingly common, as this blog has pointed out many times. But now even the non-neuroscientist reviewers aren’t buying her claims.
© 2014 Guardian News and Media Limited
Link ID: 20011 - Posted: 08.30.2014
By PAM BELLUCK Memories and the feelings associated with them are not set in stone. You may have happy memories about your family’s annual ski vacation, but if you see a tragic accident on the slopes, those feelings may change. You might even be afraid to ski that mountain again. Now, using a technique in which light is used to switch neurons on and off, neuroscientists at the Massachusetts Institute of Technology appear to have unlocked some secrets about how the brain attaches emotions to memories and how those emotions can be adjusted. Their research, published Wednesday in the journal Nature, was conducted on mice, not humans, so the findings cannot immediately be translated to the treatment of patients. But experts said the experiments may eventually lead to more effective therapies for people with psychological problems such as depression, anxiety or post-traumatic stress disorder. “Imagine you can go in and find a particular traumatic memory and turn it off or change it somehow,” said David Moorman, an assistant professor of psychological and brain sciences at the University of Massachusetts Amherst, who was not involved in the research. “That’s still science fiction, but with this we’re getting a lot closer to it.” The M.I.T. scientists labeled neurons in the brains of mice with a light-sensitive protein and used pulses of light to switch the cells on and off, a technique called optogenetics. Then they identified patterns of neurons activated when mice created a negative memory or a positive one. A negative memory formed when mice received a mild electric shock to their feet; a positive one was formed when the mice, all male, were allowed to spend time with female mice. © 2014 The New York Times Company
by Penny Sarchet

Memory is a fickle beast. A bad experience can turn a once-loved coffee shop or holiday destination into a place to be avoided. Now experiments in mice have shown how such associations can be reversed.

When forming a memory of a place, the details of the location and the associated emotions are encoded in different regions of the brain. Memories of the place are formed in the hippocampus, whereas positive or negative associations are encoded in the amygdala.

In experiments with mice in 2012, a group led by Susumu Tonegawa of the Massachusetts Institute of Technology managed to trigger the fear part of a memory associated with a location when the animals were in a different location. They used a technique known as optogenetics, which involves genetically engineering mice so that their brains produce a light-sensitive protein in response to a certain cue. In this case, the cue was the formation of the location memory. This meant the team could make the mouse recall the location just by flashing pulses of light down an optical fibre embedded in the skull.

The mice were given electric shocks while their memories of the place were being formed, so that the animals learned to associate that location with pain. Once trained, the mice were put in a new place and a pulse of light was flashed into their brains. This activated the neurons associated with the original location memory and the mice froze, terrified of a shock, demonstrating that the emotion associated with the original location could be induced by reactivating the memory of the place.

© Copyright Reed Business Information Ltd.
Learning is easier when it only requires nerve cells to rearrange existing patterns of activity than when the nerve cells have to generate new patterns, a study of monkeys has found. The scientists explored the brain’s capacity to learn through recordings of electrical activity of brain cell networks. The study was partly funded by the National Institutes of Health.

“We looked into the brain and may have seen why it’s so hard to think outside the box,” said Aaron Batista, Ph.D., an assistant professor at the University of Pittsburgh and a senior author of the study published in Nature, with Byron Yu, Ph.D., assistant professor at Carnegie Mellon University, Pittsburgh.

The human brain contains nearly 86 billion neurons, which communicate through intricate networks of connections. Understanding how they work together during learning can be challenging. Dr. Batista and his colleagues combined two innovative technologies, brain-computer interfaces and machine learning, to study patterns of activity among neurons in monkey brains as the animals learned to use their thoughts to move a computer cursor.

“This is a fundamental advance in understanding the neurobiological patterns that underlie the learning process,” said Theresa Cruz, Ph.D., a program official at the National Center for Medical Rehabilitation Research at NIH’s Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD). “The findings may eventually lead to new treatments for stroke as well as other neurological disorders.”
By Erin Allday

It's well established that chronic pain afflicts people with more than just pain. With the pain come fatigue and sleeplessness, depression and frustration, and a noticeable disinterest in so many of the activities that used to fill a day.

It makes sense that chronic pain would leave patients feeling weary and unmotivated - most people wouldn't want to go to work or shop for a week's worth of groceries or even meet friends for dinner when they're exhausted and in pain. But experts in pain and neurology say the connection between chronic pain and a lousy mood may be biochemical, something more complicated than a dour mood brought on from persistent, long-term discomfort alone.

Now, a team of Stanford neurologists has found evidence that chronic pain triggers a series of molecular changes in the brain that may sap patients' motivation. "There is an actual physiologic change that happens," said Dr. Neil Schwartz, a post-doctoral scientist who helped lead the Stanford research. "The behavior changes seem quite primary to the pain itself. They're not just a consequence of living with it."

Schwartz and his colleagues hope their work could someday lead to new treatments for the behavior changes that come with chronic pain. In the short term, the research improves understanding of the biochemical effects of chronic pain and may be a comfort to patients who blame themselves for their lack of motivation, pain experts said.

© 2014 Hearst Communications, Inc.
By ANNA NORTH “You can learn a lot from what you see on a screen,” said Yalda T. Uhls. However, she told Op-Talk, “It’s not going to give you context. It’s not going to give you the big picture.” Ms. Uhls, a researcher at the Children’s Digital Media Center in Los Angeles, was part of a team that looked at what happened when kids were separated from their screens — phones, iPads, laptops and the like — for several days. Their findings may have implications for adults’ relationship to technology, too. For a paper published in the journal Computers in Human Behavior, the researchers studied 51 sixth-graders who attended a five-day camp where no electronic devices were allowed. Before and after the camp, they tested the kids’ emotion-recognition skills using photos of facial expressions and sound-free video clips designed to measure their reading of nonverbal cues. The kids did significantly better on both tests after five screen-free days; a group of sixth-graders from the same school who didn’t go to camp showed less or no improvement. Ms. Uhls, who also works for the nonprofit Common Sense Media, told Op-Talk that a number of factors might have been at play in the campers’ improvement. For instance, their time in nature might have played a role. But to her, the most likely explanation was the sheer increase in face-to-face interaction: “The issue really is not that staring at screens is going to make you bad at recognizing emotions,” she said. “It’s more that if you’re looking at screens you’re not looking at the world, and you’re not looking at people.” Many adults have sought out the same Internet-free experience the kids had, though they usually don’t go to camp to get it. The novelist Neil Gaiman took a “sabbatical from social media” in 2013, “so I can concentrate on my day job: making things up.” © 2014 The New York Times Company
Link ID: 20006 - Posted: 08.28.2014
by Bethany Brookshire

Premenstrual syndrome, or PMS, can be a miserable experience. Women report over 200 symptoms in the days before menstruation occurs. The complaints run the gamut from irritable mood to bloating. PMS can be so slight you don’t even notice, or it can be so severe it has its own category — premenstrual dysphoric disorder. But to some, PMS is just a punchline, a joke featured in pop culture from Buffy the Vampire Slayer to Saturday Night Live.

Michael Gillings, who studies molecular evolution at Macquarie University in Sydney, thinks that PMS could have a purpose. In a perspective piece published August 11 in Evolutionary Applications, Gillings proposes that PMS confers an evolutionary advantage, increasing the likelihood that a woman will leave an infertile mate. He hopes that his idea could lead to more research and less stigma about the condition. But while his hypothesis certainly sparked a lot of discussion, whether it is likely, or even necessary, is in doubt.

Gillings first began to think about PMS when he found out that premenstrual dysphoric disorder was being added to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders. “I started to think that we have a normal distribution of PMS responses, where some people don’t get any symptoms, the majority gets mild symptoms, and some get severe symptoms,” he explains. Including PMDD in DSM-5 made a statement, he says, that “we were going to take one end of this normal curve, the extreme far right end, and we were going to draw a line and say, those people there have a disease we’re going to label in our book. But if 80 percent of women get some kind of premenstrual symptoms, then it’s normal. And I wondered, if it’s so normal, what could be the reason for it?”

© Society for Science & the Public 2000 - 2014.
By Roni Jacobson

Almost immediately after Albert Hofmann discovered the hallucinogenic properties of LSD in the 1940s, research on psychedelic drugs took off. These consciousness-altering drugs showed promise for treating anxiety, depression, post-traumatic stress disorder (PTSD), obsessive-compulsive disorder (OCD) and addiction, but increasing government conservatism caused a research blackout that lasted decades. Lately, however, there has been a resurgence of interest in psychedelics as possible therapeutic agents. This past spring Swiss researchers published results from the first drug trial involving LSD in more than 40 years.

Although the freeze on psychedelic research is thawing, scientists say that restrictive drug policies are continuing to hinder their progress. In the U.S., LSD, psilocybin, MDMA, DMT, peyote, cannabis and ibogaine (a hallucinogen derived from an African shrub) are all classified as Schedule I illegal drugs, which the U.S. Drug Enforcement Administration defines as having a high potential for abuse and no currently accepted medical applications—despite extensive scientific evidence to the contrary.

In a joint report released in June, the Drug Policy Alliance and the Multidisciplinary Association for Psychedelic Studies catalogue several ways in which they say that the DEA has unfairly obstructed research on psychedelics, including by overruling an internal recommendation in 1986 that MDMA be placed on a less restrictive schedule. The DEA and the U.S. Food and Drug Administration maintain that there is insufficient research to justify recategorization. This stance creates a catch-22 by basing the decision on the need for more research while limiting the ability of scientists to conduct that research.

© 2014 Scientific American
By Michael Leon

I had been working quite happily on the basic biology of the brain when a good friend of mine called for advice about his daughter, who had just been diagnosed with autism. I could hear the anguish and fear in his voice when he asked me whether there was anything that could be done to make her better.

I told him about the standard-care therapies, including Intensive Behavioral Intervention, Early Intensive Behavioral Intervention, Applied Behavior Analysis, and the Early Start Denver Model (ESDM). These therapies are expensive and time-consuming and have variable outcomes, with the best outcomes seen for ESDM. There are, however, few ESDM therapists, and the cost of such intensive therapy can be quite high. Moreover, my friend’s daughter was already past the age of the oldest children in the study that demonstrated the efficacy of ESDM. My feeling was that there was a good chance that there was an effective therapy for her using a simple, inexpensive at-home approach involving daily exposure to a wide variety of sensory stimulation.

This is a partial list of the disorders whose symptoms can be greatly reduced, or even completely reversed, with what is known as “environmental enrichment”: autism, stroke, seizures, brain damage, neuronal death during aging, ADHD, prenatal alcohol syndrome, lead exposure, multiple sclerosis, addiction, schizophrenia, memory loss, Huntington’s disease, Parkinson’s disease, Alzheimer’s disease, Down syndrome and depression.

But why haven’t you heard about this? The reason is that all of these disorders have been successfully treated only in animal models of these neurological problems. However, the effects seen in lab animals can be dramatic.

© 2014 Scientific American
by Tom Siegfried René Descartes was a very clever thinker. He proved his own existence, declaring that because he thought, he must exist: “I think, therefore I am.” But the 17th century philosopher-mathematician-scientist committed a serious mental blunder when he decided that the mind doing the thinking was somehow separate from the brain it lived in. Descartes believed that thought was insubstantial, transmitted from the ether to the pineal gland, which played the role of something like a Wi-Fi receiver embedded deep in the brain. Thereafter mind-brain dualism became the prevailing prejudice. Nowadays, though, everybody with a properly working brain realizes that the mind and brain are coexistent. Thought processes and associated cognitive mental activity all reflect the physics and chemistry of cells and molecules inhabiting the brain’s biological tissue. Many people today do not realize, though, that there’s a modern version of Descartes’ mistaken dichotomy. Just as he erroneously believed the mind was distinct from the brain, some scientists have mistakenly conceived of the brain as distinct from the body. Much of the early research in artificial intelligence, for instance, modeled the brain as a computer, seeking to replicate mental life as information processing, converting inputs to outputs by logical rules. But even if such a machine could duplicate the circuitry of the brain, it would be missing essential peripheral input from an attached body. Actual intelligence requires both body and brain, as the neurologist Antonio Damasio pointed out in his 1994 book, Descartes’ Error. “Mental activity, from its simplest aspects to its most sublime, requires both brain and body proper,” Damasio wrote. © Society for Science & the Public 2000 - 2013.
Link ID: 20002 - Posted: 08.27.2014
By Michael Balter Humans are generally highly cooperative and often impressively altruistic, quicker than any other animal species to help out strangers in need. A new study suggests that our lineage got that way by adopting so-called cooperative breeding: the caring for infants not just by the mother, but also by other members of the family and sometimes even unrelated adults. In addition to helping us get along with others, the advance led to the development of language and complex civilizations, the authors say. Cooperative breeding is not unique to humans. Up to 10% of birds are cooperative breeders, as are meerkats and New World monkeys such as tamarins and marmosets. But our closest primate relatives, great apes such as chimpanzees, are not cooperative breeders. Because the human and chimpanzee lineages split between 5 million and 7 million years ago, and humans are the only apes that engage in cooperative breeding, researchers have puzzled over how this helping behavior might have evolved all over again on the human line. In the late 1990s, Sarah Blaffer Hrdy, now an anthropologist emeritus at the University of California, Davis, proposed the cooperative breeding hypothesis. According to her model, early in their evolution humans added cooperative breeding behaviors to their already existing advanced ape cognition, leading to a powerful combination of smarts and sociality that fueled even bigger brains, the evolution of language, and unprecedented levels of cooperation. Soon after Hrdy’s proposal, anthropologists Carel van Schaik and Judith Burkart of the University of Zurich in Switzerland began to test some of these ideas, demonstrating that cooperatively breeding primates like marmosets engaged in seemingly altruistic behavior by helping other marmosets get food with no immediate reward to themselves. © 2014 American Association for the Advancement of Science.
Daniel Cressey In many respects, the modern electronic cigarette is not so different from its leaf-and-paper predecessor. Take a drag from the mouthpiece and you get a genuine nicotine fix — albeit from a fluid wicked into the chamber of a battery-powered atomizer and vaporized by a heating element. Users exhale a half-convincing cloud of ‘smoke’, and many e-cigarettes even sport an LED at the tip that glows blue, green or classic red to better simulate the experience romanticized by countless writers and film-makers. The only things missing are the dozens of cancer-causing chemicals found in this digital wonder’s analogue forebears. E-cigarettes — also known as personal vaporizers or electronic nicotine-delivery systems among other names — are perhaps the most disruptive devices that public-health researchers working on tobacco control have ever faced. To some, they promise to snuff out a behaviour responsible for around 100 million deaths in the twentieth century. Others fear that they could perpetuate the habit, and undo decades of work. Now, a group once united against a common enemy is divided. “These devices have really polarized the tobacco-control community,” says Michael Siegel, a physician and tobacco researcher at Boston University School of Public Health in Massachusetts. “You now have two completely opposite extremes with almost no common ground between them.” Evidence is in short supply on both sides. Even when studies do appear, they are often furiously debated. And it is not just researchers who are attempting to catch up with the products now pouring out of Chinese factories: conventional tobacco companies are pushing into the nascent industry, and regulators are scrambling to work out what to do. © 2014 Nature Publishing Group
Keyword: Drug Abuse
Link ID: 20000 - Posted: 08.27.2014
By Roni Jacobson

Children are notoriously unreliable witnesses. Conventional wisdom holds that they frequently “remember” things that never happened. Yet a large body of research indicates that adults actually generate more false memories than children. Now a new study finds that children are just as susceptible to false memories as adults, if not more so. Scientists may simply have been using the wrong test.

Traditionally, researchers have explored false memories by presenting test subjects with a list of associated words (for instance, “weep,” “sorrow” and “wet”) thematically related to a word not on the list (in this case, “cry”) and then asking them what words they remember. Adults typically mention the missing related word more often than children do—possibly because their life experiences enable them to draw associations between concepts more readily, says Henry Otgaar, a forensic psychologist at Maastricht University in the Netherlands and co-author of the new paper, published in May in the Journal of Experimental Child Psychology.

Instead of using word lists to investigate false memories, Otgaar and his colleagues showed participants pictures of scenes, including a classroom, a funeral and a beach. After a short break, they asked those participants whether they remembered seeing certain objects in each picture. Across three experiments, seven- and eight-year-old children consistently reported seeing more objects that were not in the pictures than adults did.

© 2014 Scientific American
By Priyanka Pulla Humans are late bloomers when compared with other primates—they spend almost twice as long in childhood and adolescence as chimps, gibbons, or macaques do. But why? One widely accepted but hard-to-test theory is that children’s brains consume so much energy that they divert glucose from the rest of the body, slowing growth. Now, a clever study of glucose uptake and body growth in children confirms this “expensive tissue” hypothesis. Previous studies have shown that our brains guzzle between 44% and 87% of the total energy consumed by our resting bodies during infancy and childhood. Could that be why we take so long to grow up? One way to find out is with more precise studies of brain metabolism throughout childhood, but those studies don’t exist yet. However, a new study published online today in the Proceedings of the National Academy of Sciences (PNAS) spliced together three older data sets to provide a test of this hypothesis. First, the researchers used a 1987 study of PET scans of 36 people between infancy and 30 years of age to estimate age trends in glucose uptake by three major sections of the brain. Then, to calculate how uptake varied for the entire brain, they combined that data with the brain volumes and ages of 400 individuals between 4.5 years of age and adulthood, gathered from a National Institutes of Health study and others. Finally, to link age and brain glucose uptake to body size, they used an age series of brain and body weights of 1000 individuals from birth to adulthood, gathered in 1978. © 2014 American Association for the Advancement of Science.