Chapter 11. Emotions, Aggression, and Stress
By JAMES GORMAN

If an exercise wheel sits in a forest, will mice run on it? Every once in a while, science asks a simple question and gets a straightforward answer. In this case, yes, they will. And not only mice, but also rats, shrews, frogs and slugs. True, the frogs did not exactly run, and the slugs probably ended up on the wheel by accident, but the mice clearly enjoyed it. That, scientists said, means that wheel-running is not a neurotic behavior found only in caged mice. They like the wheel.

Two researchers in the Netherlands did an experiment that it seems nobody had tried before. They placed exercise wheels outdoors in a yard garden and in an area of dunes, and monitored the wheels with motion detectors and automatic cameras. They were inspired by questions from animal welfare committees at universities about whether mice were really enjoying wheel-running, an activity used in all sorts of studies, or were instead like bears pacing in a cage, stressed and neurotic. Would they run on a wheel if they were free?

Now there is no doubt. Mice came to the wheels like human beings to a health club holding a spring membership sale. They made the wheels spin. They hopped on, hopped off and hopped back on. “When I saw the first mice, I was extremely happy,” said Johanna H. Meijer at Leiden University Medical Center in the Netherlands. “I had to laugh about the results, but at the same time, I take it very seriously. It’s funny, and it’s important at the same time.”

Dr. Meijer’s day job is as a “brain electrophysiologist” studying biological rhythms in mice. She relished the chance to get out of the laboratory and study wild animals, and in a way that no one else had.

© 2014 The New York Times Company
Dr. Mark Saleh

Bell's palsy is a neurological condition frequently seen in emergency rooms and medical offices. Symptoms consist of weakness involving all muscles on one side of the face. About 40,000 cases occur annually in the United States. Men and women are equally affected, and though it can occur at any age, people in their 40s are especially vulnerable.

The facial weakness that occurs in Bell's palsy prevents the eye of the affected side from blinking properly and causes the mouth to droop. Because the eyelid doesn't close sufficiently, the eye can dry and become irritated. Bell's palsy symptoms progress fairly rapidly, with weakness usually occurring within three days. If the progression of weakness is more gradual and extends beyond a week, Bell's palsy may not be the problem, and other potential causes should be investigated. Those with certain medical conditions, such as diabetes or pregnancy, are at greater risk of developing Bell's palsy, and those who have had one episode have an 8 percent chance of recurrence.

Bell's palsy is thought to occur when the seventh cranial (facial) nerve becomes inflamed. The nerve controls the muscles involved in facial expression and is responsible for other functions, including taste perception, eye tearing and salivation. The cause of the inflammation is unknown, although the herpes simplex virus and autoimmune inflammation are possible causes.

© 2014 Hearst Communications, Inc.
After a string of scandals involving accusations of misconduct and retracted papers, social psychology is engaged in intense self-examination—and the process is turning out to be painful. This week, a global network of nearly 100 researchers unveiled the results of an effort to replicate 27 well-known studies in the field. In more than half of the cases, the result was a partial or complete failure.

As the replicators see it, the failed do-overs are a healthy corrective. “Replication helps us make sure what we think is true really is true,” says Brent Donnellan, a psychologist at Michigan State University in East Lansing who has undertaken three recent replications of studies from other groups—all of which came out negative. “We are moving forward as a science,” he says.

But rather than a renaissance, some researchers on the receiving end of this organized replication effort see an inquisition. “I feel like a criminal suspect who has no right to a defense and there is no way to win,” says psychologist Simone Schnall of the University of Cambridge in the United Kingdom, who studies embodied cognition, the idea that the mind is unconsciously shaped by bodily movement and the surrounding environment. Schnall’s 2008 study finding that hand-washing reduced the severity of moral judgment was one of those Donnellan could not replicate.

About half of the replications are the work of Many Labs, a network of about 50 psychologists around the world. The results of their first 13 replications, released online in November, were greeted with a collective sigh of relief: Only two failed. Meanwhile, Many Labs participant Brian Nosek, a psychologist at the University of Virginia in Charlottesville, put out a call for proposals for more replication studies. After 40 rolled in, he and Daniël Lakens, a psychologist at Eindhoven University of Technology in the Netherlands, chose another 14 to repeat.

© 2014 American Association for the Advancement of Science.
The Presidential Commission for the Study of Bioethical Issues today released its first set of recommendations for integrating ethics into neuroscience research in the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. Last July, President Barack Obama charged the commission with identifying key ethical questions that may arise through the BRAIN Initiative and wider neuroscience research. The report is “a dream come true,” says Judy Illes, a neuroethicist at the University of British Columbia in Vancouver, Canada, who was a guest presenter to the commission.

Brain research raises unique ethical issues because it “strikes at the very core of who we are,” said political scientist and philosopher Amy Gutmann of the University of Pennsylvania, who chairs the commission, in a call with reporters yesterday. Specific areas of concern identified in the report include questions of brain privacy raised by advances in neuroimaging research; whether research participants and patients with dementia can give informed consent to participate in experimental trials; and research into cognitive enhancement, which raises “issues of distributive justice and fairness,” Gutmann says.

Parsing hope from hype is key to ethical neuroscience research and its application, Gutmann notes. Citing the troubled ethical history of psychosurgery in the United States, in which more than 40,000 people were lobotomized based on shaky evidence that the procedure could treat psychiatric illnesses such as schizophrenia and depression, Gutmann cautions that a similar ethical derailment is possible in contemporary neuroscience research. A misstep with invasive experimental treatments such as deep brain stimulation surgery would not only be tragic for patients, but have “devastating consequences” for scientific progress, she says.

© 2014 American Association for the Advancement of Science
Bullying casts a long shadow. Children who are bullied are more prone to depression and suicidal tendencies even when they grow up; they're also more likely to get sick and have headaches and stomach troubles, researchers have discovered. A new study may have found the underlying cause: A specific indicator of illness, called C-reactive protein (CRP), is higher than normal in bullying victims, even when they get older. In contrast, the bullies, by the same gauge, seem to be healthier.

The researchers focused on CRP because it's a common, easily tested marker of inflammation, the runaway immune system activity that's a feature of many chronic illnesses including cardiovascular disease, diabetes, chronic pain, and depression, explains lead author William Copeland, a psychologist and epidemiologist at Duke University Medical Center in Durham, North Carolina.

To link inflammation to bullying, the researchers asked 1420 youngsters between the ages of 9 and 16 whether, and how often, they had been bullied or had bullied others. Interviewers asked participants whether they felt more teased, bullied, or treated meanly by siblings, friends, and peers than other children—and whether they had upset or hurt other people on purpose, tried to get others in trouble, or forced people to do something by threatening or hurting them. The researchers took finger stick blood tests at each assessment. Interviews took place once a year until the participants turned 16, and again when they were 19 and 21. The children interviewed were participants in the larger Great Smoky Mountains Study, in which some 12,000 children in North Carolina were assessed to track the development of psychiatric conditions.

In the short term, the effect of bullying on the victims was immediate. CRP levels increased along with the number of reported bullying instances, and more than doubled in those who said they'd been bullied three times or more in the previous year, compared with kids who had never been bullied. No change was seen in bullies, or in kids who hadn't been involved with bullying one way or the other, the researchers report online today in the Proceedings of the National Academy of Sciences.

© 2014 American Association for the Advancement of Science.
By DOLLY CHUGH, KATHERINE L. MILKMAN and MODUPE AKINOLA

In the world of higher education, we professors like to believe that we are free from the racial and gender biases that afflict so many other people in society. But is this self-conception accurate? To find out, we conducted an experiment.

A few years ago, we sent emails to more than 6,500 randomly selected professors from 259 American universities. Each email was from a (fictional) prospective out-of-town student whom the professor did not know, expressing interest in the professor’s Ph.D. program and seeking guidance. These emails were identical and written in impeccable English, varying only in the name of the student sender. The messages came from students with names like Meredith Roberts, Lamar Washington, Juanita Martinez, Raj Singh and Chang Huang, names that earlier research participants consistently perceived as belonging to either a white, black, Hispanic, Indian or Chinese student. In total, we used 20 different names in 10 different race-gender categories (e.g. white male, Hispanic female).

On a Monday morning, the emails went out — one email per professor — and then we waited to see which professors would write back to which students. We understood, of course, that some professors would naturally be unavailable or uninterested in mentoring. But we also knew that the average treatment of any particular type of student should not differ from that of any other — unless professors were deciding (consciously or not) which students to help on the basis of their race and gender. (This “audit” methodology has long been used to study intentional and unintentional bias in real-world decision-making, as it allows researchers to standardize much about the decision environment.)

What did we discover? First comes the fairly good news, which we reported in a paper in Psychological Science. Despite not knowing the students, 67 percent of the faculty members responded to the emails, and remarkably, 59 percent of the responders even agreed to meet on the proposed date with a student about whom they knew little and who did not even attend their university. (We immediately wrote back to cancel those meetings.)

© 2014 The New York Times Company
By NATALIE ANGIER

Of the world’s 43,000 known varieties of spiders, an overwhelming majority are peevish loners: spinning webs, slinging lassos, liquefying prey and attacking trespassers, each spider unto its own. But about 25 arachnid species have swapped the hermit’s hair shirt for a more sociable and cooperative strategy, in which dozens or hundreds of spiders pool their powers to exploit resources that would elude a solo player.

And believe it or not, O ye of rolled-up newspaper about to dispatch the poor little Charlotte dangling from your curtain rod for no better reason than your purported “primal fear,” these oddball spider socialites may offer fresh insight into an array of human mysteries: where our personalities come from, why some people can’t open their mouths at a party while others can’t keep theirs shut and why, no matter our age, we can’t seem to leave high school behind.

“It’s very satisfying to me that the most maligned of organisms may have something to tell us about who we are,” said Jonathan N. Pruitt, a biologist at the University of Pittsburgh who studies social spiders.

The new work on social spiders is part of the expanding field of animal personality research, which seeks to delineate, quantify and understand the many stylistic differences that have been identified in a vast array of species, including monkeys, minks, bighorn sheep, dumpling squid, zebra finches and spotted hyenas. Animals have been shown to differ, sometimes hugely, on traits like shyness, boldness, aggressiveness and neophobia, or fear of the new. Among the big questions in the field are where those differences come from, and why they exist.

Reporting recently in The Proceedings of the Royal Society B, Dr. Pruitt and Kate L. Laskowski, of the Leibniz Institute of Freshwater Ecology and Inland Fisheries in Berlin, have determined that character-building in social spiders is a communal affair. While the spiders quickly display the first glimmerings of a basic predisposition — a relative tendency toward shyness or boldness, tetchiness or docility — that personality is then powerfully influenced by the other spiders in the group.

© 2014 The New York Times Company
By Indre Viskontas and Chris Mooney

When the audio of Los Angeles Clippers owner Donald Sterling telling a female friend not to "bring black people" to his team's games hit the internet, the condemnations were immediate. It was clear to all that Sterling was a racist, and the punishment was swift: The NBA banned him for life. It was, you might say, a pretty straightforward case.

When you take a look at the emerging science of what motivates people to behave in a racist or prejudiced way, though, matters quickly grow complicated. In fact, if there's one cornerstone finding when it comes to the psychological underpinnings of prejudice, it's that out-and-out or "explicit" racists—like Sterling—are just one part of the story. Perhaps far more common are cases of so-called "implicit" prejudice, where people harbor subconscious biases, of which they may not even be aware, but that come out in controlled psychology experiments.

Much of the time, these are not the sort of people whom we would normally think of as racists. "They might say they think it's wrong to be prejudiced," explains New York University neuroscientist David Amodio, an expert on the psychology of intergroup bias. Amodio says that white participants in his studies "might write down on a questionnaire that they are positive in their attitudes towards black people…but when you give them a behavioral measure, of how they respond to pictures of black people, compared with white people, that's when we start to see the effects come out."

Welcome to the world of implicit racial biases, which research suggests are all around us, and which can be very difficult for even the most well-intentioned person to control.

©2014 Mother Jones
by Colin Barras

Picture the scene: a weak leader is struggling to hold onto power as ambitious upstarts plot to take over. As tensions rise, the community splits and the killing begins. The war will last for years. No, this isn't the storyline of an HBO fantasy drama, but real events involving chimps in Tanzania's Gombe Stream National Park. A look at the social fragmentation that led to a four-year war in the 1970s now reveals similarities between the ways chimpanzee and human societies break down.

Jane Goodall has been studying the chimpanzees of Gombe for over 50 years. During the early 1970s the group appeared to split in two, and friendliness was replaced by fighting. So extreme and sustained was the aggression that Goodall dubbed it a war.

Joseph Feldblum at Duke University in Durham, North Carolina, and colleagues have re-examined Goodall's field notes from the chimp feeding station she established at Gombe to work out what led to the conflict. In the past, researchers have estimated the strength of social ties based on the amount of time two chimps spent together at the station. But the notes are so detailed that Feldblum could get a better idea of each chimp's social ties, for instance, by considering if the chimps arrived at the same time and from the same direction. His team then plugged this data into software that can describe the chimps' social network. They did this for several periods between 1968 and 1972, revealing when the nature of the network changed.

© Copyright Reed Business Information Ltd.
By Maggie Fox

Treating psychiatric illnesses with antipsychotic drugs can greatly reduce the risk that a patient will commit a violent crime, researchers reported on Thursday. Their study, published in the Lancet medical journal, adds weight to the argument that severely mentally ill people need to get diagnosed and treated.

Mental health experts agree that people with psychiatric illnesses such as schizophrenia are far more likely to become victims of violence than they are to hurt someone else. But Dr. Thomas Insel, director of the National Institute of Mental Health, also notes that people with severe mental illness are up to three times more likely than the general population to be violent. The question has been whether treatment lowers these risks. One high-profile case is that of Jared Loughner, a schizophrenia patient who shot and killed six people in Arizona and wounded several more, including then-congresswoman Gabrielle Giffords.

Dr. Seena Fazel of Britain’s Oxford University used a Swedish national database to find out. Sweden keeps careful medical records, and has rates of both mental illness and violence similar to those in the United States. The only exception is homicide, where the U.S. has much higher rates than just about every other country. Fazel’s team looked at the medical records of everyone born in Sweden between 1961 and 1990. “We identified 40,937 men and 41,710 women who were prescribed any antipsychotic or mood stabilizer between Jan 1, 2006, and Dec 31, 2009,” they wrote. It worked out to about 2 percent of the population.
By Scott Barry Kaufman

The latest neuroscience of aesthetics suggests that the experience of visual, musical, and moral beauty all recruit the same part of the “emotional brain”: field A1 of the medial orbitofrontal cortex (mOFC). But what about mathematics? Plato believed that mathematical beauty was the highest form of beauty since it is derived from the intellect alone and is concerned with universal truths.

Similarly, the art critic Clive Bell noted: “Art transports us from the world of man’s activity to a world of aesthetic exaltation. For a moment we are shut off from human interests; our anticipations and memories are arrested; we are lifted above the stream of life. The pure mathematician rapt in his studies knows a state of mind which I take to be similar, if not identical. He feels an emotion for his speculations which arises from no perceived relation between them and the lives of men, but springs, inhuman or super-human, from the heart of an abstract science. I wonder, sometimes, whether the appreciators of art and of mathematical solutions are not even more closely allied.”

A new study suggests that Bell might be right. Semir Zeki and colleagues recruited 16 mathematicians at the postgraduate or postdoctoral level as well as 12 non-mathematicians. All participants viewed a series of mathematical equations in the fMRI scanner and were asked to rate the beauty of the equations as well as their understanding of each equation. After they were out of the scanner, they filled out a questionnaire in which they reported their level of understanding of each equation as well as their emotional experience viewing the equations.

© 2014 Scientific American
Heidi Ledford

Dutch celebrity daredevil Wim Hof has endured lengthy ice-water baths, hiked to the top of Mount Kilimanjaro in shorts and made his mark in Guinness World Records with his ability to withstand cold. Now he has made a mark on science as well. Researchers have used Hof’s methods of mental and physical conditioning to train 12 volunteers to fend off inflammation. The results, published today in the Proceedings of the National Academy of Sciences, suggest that people can learn to modulate their immune responses — a finding that has raised hopes for patients who have chronic inflammatory disorders such as rheumatoid arthritis and inflammatory bowel disease.

The results are only preliminary, warns study first author Matthijs Kox, who investigates immune responses at Radboud University Medical Center in Nijmegen, the Netherlands. Kox says that people with inflammatory disorders sometimes hear about his experiments and call to ask whether the training would enable them to reduce their medication. “We simply do not yet know that,” he says.

Still, the work stands out as an illustration of the interactions between the nervous system and the immune system, says Giuseppe Matarese, an immunologist at the University of Salerno in Italy, who was not involved with the study. “This study is a nice way to show that link,” he says. “Orthodox neurobiologists and orthodox immunologists have been sceptical.” They think the study of the interactions between the nervous and immune systems is a “field in the shadows,” he says.

© 2014 Nature Publishing Group
By Christian Jarrett

I must have been about seven years old, a junior in my prep school. I was standing in the dining hall surrounded by over a hundred senior boys and schoolmasters, all looking at me, some with pity, others with disdain. It was unheard of for a junior boy to be present in the dining room by the time the seniors had filed in. “What on earth do you think you’re doing Jarrett?” asked the headmaster with mock outrage.

I was there because, by refusing to finish my rhubarb crumble, I’d broken a cardinal school rule. All pupils were to eat all they were given. But after vomiting up some of my rhubarb – a flesh-like fruit that still disgusts me to this day – I simply refused to eat on. Keeping me behind in the dining room as the seniors arrived was my punishment. I wanted to explain this to the assembled crowd. Yet speech completely failed me and I began to sob openly and uncontrollably, my humiliation sealed.

This was an intense emotional experience for me, and as you can probably tell, the memory remains sore to this day. But is humiliation any more intense than the other negative emotions, such as anger or shame? If it were, how would psychologists and neuroscientists demonstrate that this was the case? You might imagine that the most effective method would be to ask people to rate and describe different emotional experiences – after all, to say that an emotion is intense is really to say something about how it feels, and how it affects you.

Yet in a paper published earlier this year, a pair of psychologists – Marte Otten and Kai Jonas – have taken a different approach. Inspired by claims that humiliation is an unusually intense emotion, responsible even for war and strife in the world, the researchers have turned to brain-based evidence. They claim to have provided the “first empirical, neurocognitive evidence for long-standing claims in the humiliation literature that humiliation is a particularly intense emotion.”

WIRED.com © 2014 Condé Nast.
By SAM KEAN

Until the past few decades, neuroscientists really had only one way to study the human brain: Wait for strokes or some other disaster to strike people, and if the victims pulled through, determine how their minds worked differently afterward. Depending on what part of the brain suffered, strange things might happen. Parents couldn’t recognize their children. Normal people became pathological liars. Some people lost the ability to speak — but could sing just fine.

These incidents have become classic case studies, fodder for innumerable textbooks and bull sessions around the lab. The names of these patients — H. M., Tan, Phineas Gage — are deeply woven into the lore of neuroscience. When recounting these cases today, neuroscientists naturally focus on these patients’ deficits, emphasizing the changes that took place in their thinking and behavior. After all, there’s no better way to learn what some structure in the brain does than to see what happens when it shorts out or otherwise gets destroyed.

But these case snippets overlook something crucial about people with brain damage. However glaring their deficits are, their brains still work like ours to a large extent. Most can still read and reason. They can still talk, walk and emote. And they still have the same joys and fears — facts that the psychological caricatures passed down from generation to generation generally omit.

The famous amnesiac H. M., for instance, underwent radical brain surgery in 1953 and had most of the hippocampus removed on both sides of his brain; afterward, he seemed to lose the ability to form new long-term memories. Names, dates, directions to the bathroom all escaped him now. He’d eat two breakfasts if no one stopped him. Careful testing, however, revealed that H. M. could form new motor memories — memories of things like how to ride a bicycle — because they rely on different structures in the brain. This work established that memory isn’t a single, monolithic thing, but a collection of different faculties.

© 2014 The New York Times Company
by Bethany Brookshire

When I was a lab scientist working with mice, I spent hours controlling variables. I stood on precarious chairs to tape tarps over lights to get the light level perfectly right. I made one undergraduate who wore perfume to the lab for animal training wear the same perfume for a whole semester. I was so worried about the mice “recognizing” me over long, overlapping experiments that I did not change the scents of any of my personal care products for nine years.

Many of these variables got reported in the methods sections of my papers. “All experiments conducted between 5:00 and 7:00 a.m. Maze dimensions: 4 inches wide, with walls 6 inches tall. Lighting held constant at 10 lux.” All of these variables are reported to allow other people to repeat my experiments, and hopefully get the same result. Now, a new study suggests that maybe I should have included another element in my methods section: “All mice exposed to the scent of a woman.”

Jeffrey Mogil’s lab at McGill University in Montreal, Canada, reports April 28 in Nature Methods that mice respond differently to men and women, and that men in fact are a stressful influence. The results show that there’s yet another variable to control when doing sensitive mouse behavioral studies, a variable that could impact fields from pain to depression and beyond.

Every department that does animal research has stories about particular experimenters. I recall hearing a story of a lab technician who could get results no one else could, because mice just loved her strawberry-scented hair conditioner. Another colleague told of one experimenter who was so good at handling rats that no one believed her anxiety results. Her rats were just so relaxed. And Mogil’s lab had its own story. In their lab, the presence of human experimenters seemed to stop mice from showing pain.

© Society for Science & the Public 2000 - 2013
Jeffrey Mogil’s students suspected there was something fishy going on with their experiments. They were injecting an irritant into the feet of mice to test their pain response, but the rodents didn’t seem to feel anything. “We thought there was something wrong with the injection,” says Mogil, a neuroscientist at McGill University in Montreal, Canada. The real culprit was far more surprising: The mice that didn’t feel pain had been handled by male students. Mogil’s group discovered that this gender distinction alone was enough to throw off their whole experiment—and likely influences the work of other researchers as well.

“This is very important work with wide-ranging implications,” says M. Catherine Bushnell, a neuroscientist and the scientific director of the Division of Intramural Research at the National Center for Complementary and Alternative Medicine (NCCAM) in Bethesda, Maryland, who was not involved in the study. “Many people doing research have never thought of this.”

Mogil has studied pain for 25 years. He’s long suspected that lab animals respond differently to the sensation when researchers are present. In 2007, his lab observed that mice spend less time licking a painful injection—a sign that they’re hurting—when a person is nearby, even if that “person” is a cardboard cutout of Paris Hilton. Other scientists began to wonder if their own data were biased by the same effect. “There were whisperings at meetings that this was confounding research results,” Mogil says. So he decided to take a closer look.

In the new study, Mogil told the researchers in his lab to inject an inflammatory agent into the foot of a rat or mouse and then take a seat nearby and read a book. A video camera trained on the rodent’s face assessed the animal’s pain level, based on a 0- to 2-point “grimace scale” developed by the team. The results were mixed. Sometimes the animals showed pain when an experimenter was present, and sometimes they seemed just fine. So, on a hunch, Mogil and colleagues recrunched the data, this time controlling for whether a male or a female experimenter was present. “We were stunned by the results,” he says.

© 2014 American Association for the Advancement of Science.
By JAN HOFFMAN

How well can computers interact with humans? Certainly computers play a mean game of chess, which requires strategy and logic, and “Jeopardy!,” in which they must process language to understand the clues read by Alex Trebek (and buzz in with the correct question). But in recent years, scientists have striven for an even more complex goal: programming computers to read human facial expressions.

We all know what it’s like to experience pain that makes our faces twist into a grimace. But can you tell if someone else’s face of pain is real or feigned? The practical applications could be profound. Computers could supplement or even replace lie detectors. They could be installed at border crossings and airport security checks. They could serve as diagnostic aids for doctors.

Researchers at the University of California, San Diego, have written software that not only detected whether a person’s face revealed genuine or faked pain, but did so far more accurately than human observers. While other scientists have already refined a computer’s ability to identify nuances of smiles and grimaces, this may be the first time a computer has triumphed over humans at reading their own species.

“A particular success like this has been elusive,” said Matthew A. Turk, a professor of computer science at the University of California, Santa Barbara. “It’s one of several recent examples of how the field is now producing useful technologies rather than research that only stays in the lab. We’re affecting the real world.”

People generally excel at using nonverbal cues, including facial expressions, to deceive others (hence the poker face). They are good at mimicking pain, instinctively knowing how to contort their features to convey physical discomfort.

© 2014 The New York Times Company
By LAURENCE STEINBERG I’M not sure whether it’s a badge of honor or a mark of shame, but a paper I published a few years ago is now ranked No. 8 on a list of studies that other psychologists would most like to see replicated. Good news: People find the research interesting. Bad news: They don’t believe it.

The paper in question, written with my former student Margo Gardner, appeared in the journal Developmental Psychology in July 2005. It described a study in which we randomly assigned subjects to play a video driving game, either alone or with two same-age friends watching them. The mere presence of peers made teenagers take more risks and crash more often, but no such effect was observed among adults.

I find my colleagues’ skepticism surprising. Most people recall that as teenagers, they did far more reckless things when with their friends than when alone. Data from the Federal Bureau of Investigation indicate that many more juvenile crimes than adult crimes are committed in groups. And driving statistics conclusively show that having same-age passengers in the car substantially increases the risk of a teen driver’s crashing but has no similar impact when an adult is behind the wheel.

Then again, I’m aware that our study challenged many psychologists’ beliefs about the nature of peer pressure, for it showed that the influence of peers on adolescent risk taking doesn’t rely solely on explicit encouragement to behave recklessly. Our findings also undercut the popular idea that the higher rate of real-world risk taking in adolescent peer groups is a result of reckless teenagers’ being more likely to surround themselves with like-minded others.

My colleagues and I have replicated our original study of peer influences on adolescent risk taking several times since 2005. We have also shown that the reason teenagers take more chances when their peers are around is partly because of the impact of peers on the adolescent brain’s sensitivity to rewards. 
In a study of people playing our driving game, my colleague Jason Chein and I found that when teens were with people their own age, their brains’ reward centers became hyperactivated, which made them more easily aroused by the prospect of a potentially pleasurable experience. This, in turn, inclined teenagers to pay more attention to the possible benefits of a risky choice than to the likely costs, and to make risky decisions rather than play it safe. Peers had no such effect on adults’ reward centers, though. © 2014 The New York Times Company
The negative social, physical and mental health effects of childhood bullying are still evident nearly 40 years later, according to research by British psychiatrists. In the first study of its kind to look at the effects of childhood bullying beyond early adulthood, the researchers said its impact is "persistent and pervasive," with people who were bullied when young more likely to have poorer physical and psychological health and poorer cognitive functioning at age 50.

"The effects of bullying are still visible nearly four decades later ... with health, social and economic consequences lasting well into adulthood," said Ryu Takizawa, who led the study at the Institute of Psychiatry at King's College London.

The findings, published in the American Journal of Psychiatry on Friday, come from the British National Child Development Study, which includes data on all children born in England, Scotland and Wales during one week in 1958. It included 7,771 children whose parents gave information on their child's exposure to bullying when they were aged 7 and 11. The children were then followed up until they reached 50.

Bullying is characterized by repeated hurtful actions by children of a similar age, where the victim finds it difficult to defend themselves. More than a quarter of children in the study — 28 per cent — had been bullied occasionally, and 15 per cent were bullied frequently — rates that the researchers said were similar to those in Britain today.

The study, which adjusted for other factors such as childhood IQ, emotional and behavioural problems and low parental involvement, found that people who were frequently bullied in childhood were at increased risk of mental disorders such as depression and anxiety, and of experiencing suicidal thoughts. © CBC 2014
By Melissa Healy The nature of psychological resilience has, in recent years, been a subject of enormous interest to researchers, who have wondered why some people endure and even thrive under a certain amount of stress while others crumble and fall prey to depression. The resulting research has underscored the importance of feeling socially connected and the value of psychotherapy in identifying and exercising patterns of thought that protect against hopelessness and defeat.

But what does psychological resilience look like inside our brains, at the cellular level? Such knowledge might help bolster people's immunity to depression and even treat people under chronic stress. A new study published Thursday in the journal Science has made some progress in the effort to see the brain struggling with — and ultimately triumphing over — stress.

A group of neuroscientists at Mount Sinai's Icahn School of Medicine in New York focused on the dopaminergic cells in the brain's ventral tegmentum, a key node in the brain's reward circuitry and therefore an important place to look at how social triumph and defeat play out in the brain. In mice under stress because they were either chronically isolated or rebuffed or attacked by fellow littermates, the group had observed that these neurons become overactive.

It would logically follow, then, that if you don't want stressed mice (or people) to become depressed, you would want to avoid hyperactivity in that key group of neurons, right? Actually, wrong, the researchers found. In a series of experiments, they saw that the mice that were least prone to behave in socially defeated ways when under stress were actually the ones whose dopaminergic cells in the ventral tegmental area displayed the greatest levels of hyperactivity in response to stress. And that hyperactivity was most pronounced in the neurons that extended from the tegmentum into the nearby nucleus accumbens, also a key node in the brain's reward system.