Chapter 15. Emotions, Aggression, and Stress
Angus Chen A common pain medication might make you go from "so cute!" to "so what?" when you look at a photo of an adorable kitten. And it might make you less sensitive to horrifying things too. It's acetaminophen, the active ingredient in Tylenol. Researchers say the drug might be taking the edge off emotions – not just pain. "It seems to take the highs off your daily highs and the lows off your daily lows," says Baldwin Way, a psychologist at Ohio State University and the principal investigator on the study. "It kind of flattens out the vicissitudes of your life." The idea that over-the-counter pain pills might affect emotions has been circulating since 2010, when two psychologists, Naomi Eisenberger and Nathan DeWall, led a study showing that acetaminophen seemed to be having both a psychological and a neurological effect on people. They asked volunteers to play a rigged game that simulated social rejection. Not only did the acetaminophen appear to be deflecting social anxieties, it also seemed to be dimming activity in the insula, a region of the brain involved in processing emotional pain. A brain that can let other thoughts bubble up despite being in pain might help its owner benefit from meditation or other cognitive therapies. "But [the insula] is a portion of the brain that seems to be involved in a lot of things," Way says. In older studies, scientists saw that people with damage to their insula didn't react as strongly to either negative or positive images. So Way and one of his students, Geoffrey Durso, figured that if acetaminophen is doing something to the insula, then it might be having a wider effect, too. © 2015 NPR
By VIRGINIA HEFFERNAN Most newly stylish coinages carry with them some evidence of grammatical trauma. Consider “affluencer,” “selfie,” “impactful.” Notes of cynicism and cutesiness come through. But every now and then a bright exception to this dispiriting routine appears. A rookie word makes its big-league debut, a stadium of pedants prepares to peg it with tomatoes and — nothing. A halfhearted heckle. The new word looks only passably pathetic. Maddeningly, it has heft. “Mindfulness” may be that hefty word now, one that can’t readily be dismissed as trivia or propaganda. Yes, it’s current among jaw-grinding Fortune 500 executives who take sleeping pills and have “leadership coaches,” as well as with the moneyed earnest, who shop at Whole Foods, where Mindful magazine is on the newsstand alongside glossies about woodworking and the environment. It looks like nothing more than the noun form of “mindful” — the proper attitude toward the London subway’s gaps — but “mindfulness” has more exotic origins. In the late 19th century, the heyday of both the British Empire and Victorian Orientalism, a British magistrate in Galle, Ceylon (now Sri Lanka), with the formidable name of Thomas William Rhys Davids, found himself charged with adjudicating Buddhist ecclesiastical disputes. He set out to learn Pali, a Middle Indo-Aryan tongue and the liturgical language of Theravada, an early branch of Buddhism. In 1881, he thus pulled out “mindfulness” — a synonym for “attention” from 1530 — as an approximate translation of the Buddhist concept of sati. The translation was indeed rough. Sati, which Buddhists consider the first of seven factors of enlightenment, means, more nearly, “memory of the present,” which didn’t track in tense-preoccupied English. “Mindfulness” stuck — but may have saddled the subtle sati with false-note connotations of Victorian caution, or even obedience. (“Mind your manners!”) © 2015 The New York Times Company
Link ID: 20797 - Posted: 04.14.2015
By STEVEN QUARTZ and ANETTE ASP THE gaping inequality of America’s first Gilded Age generated strong emotions. It produced social reformers like Jane Addams, anarchist agitators like Emma Goldman, labor leaders like Eugene V. Debs and Progressive politicians like Theodore Roosevelt. By the 1920s, sweeping legislation regulating food and drugs and breaking up corrupt trusts had been passed. The road to the New Deal was paved. But our current Gilded Age has been greeted with relative complacency. Despite soaring inequality, worsened by the Great Recession, and recent grumbling about the 1 percent, Americans remain fairly happy. All of the wage gains since the downturn ended in 2009 have essentially gone to the top 1 percent, yet the proportion of Americans who say they are “thriving” has actually increased. So-called happiness inequality — the proportion of Americans who are either especially miserable or especially joyful — hit a 40-year low in 2010 by some measures. Men have historically been less happy than women, but that gap has disappeared. Whites have historically been happier than nonwhites, but that gap has narrowed, too. In fact, American happiness has not only stayed steady, but converged, since wages began stagnating in the mid-1970s. This is puzzling. It does not conform to economic theories that link happiness to envy and emphasize the impact of relative income on happiness — how we compare with the Joneses. A new neuroscience of consumer behavior reinforces our argument. In one experiment, we used functional magnetic resonance imaging (fMRI) to understand our brains’ reaction to perceived coolness. We selected students from the Art Center College of Design in Pasadena, Calif., and asked them to rate, from uncool to cool, hundreds of images from the following categories: bottled water, shoes, perfumes, handbags, watches, cars, chairs, personal electronics and sunglasses. We also included images of celebrities (actors and musicians). 
The cooler objects typically weren’t the more expensive ones: our subjects rated a Kia hatchback above a Buick sedan, for example. © 2015 The New York Times Company
By MALIA WOLLAN “A polygraph is nothing more than a psychological billy club used to coerce and intimidate people,” says Doug Williams, a former Oklahoma City police detective and polygraph examiner who for 36 years has trained people to pass the lie-detector test. The first step is not to be intimidated. Most tests include two types of questions: relevant ones about a specific incident (“Did you leak classified information to The New York Times?”) and broader so-called control questions (“Have you ever lied to anyone who trusted you?”). The test assumes that an innocent person telling the truth will have a stronger reaction to the control questions than to the relevant ones. Before your test, practice distinguishing between the two question types. “Go to the beach” when you hear a relevant question, Williams says. Calm yourself before answering by imagining gentle waves and warm sand. When you get a control question, which is more general, envision the scariest thing you can in order to trigger physiological distress; the polygraph’s tubes around your chest measure breathing, the arm cuff monitors heart rate and electrodes attached to your fingertips detect perspiration. What is your greatest fear? Falling? Drowning? Being buried alive? “Picture that,” Williams says. He used to advise trainees to clench their anus but has since concluded that terrifying mental imagery works better. Williams, who is 69, may be among the more vitriolic critics of polygraphs, which he refers to as “insidious Orwellian instruments of torture,” but their reliability has long been questioned elsewhere, too. Federal legislation prohibits most private employers from using polygraphs. The U.S. Supreme Court has ruled that lower courts can ban them as evidence, and the scientific community has repeatedly raised concerns about their ability to accurately detect lies. © 2015 The New York Times Company
Robin McKie, science editor A smile is the universal welcome, the writer Max Eastman once remarked. But how sure can we be that a person’s smile is genuine? The answer is the empathy test, created by psychologist Richard Wiseman, which probes our ability to appreciate the feelings of others – from their appearance. A photographer asks a subject to imagine meeting an individual they don’t like and to put on a fake smile. Later the subject sits with a real friend and as they converse, the photographer records their genuine smile. Thus two versions of their smile are recorded. The question is: how easy is it to tell the difference? “If you lack empathy, you are very bad at differentiating between the two photographs,” says Wiseman, who teaches at the University of Hertfordshire. But how do professions differ in their ability to spot a fake? And in particular, how do scientists and journalists score? Neither are particularly renowned for their empathy, after all. Last month’s Scientists Meet the Media party, for which the Observer is the media sponsor, gave Wiseman a perfect opportunity to compare the two professions. At the party, hosted by the Science Museum in London, some of Britain’s top researchers mingled with UK science journalists. About 150 guests were shown photographs of subjects with fake and genuine smiles. Guests were then asked to spot the false and the true. The results were intriguing. © 2015 Guardian News and Media Limited
By Emily Underwood A splashy headline appeared on the websites of many U.K. newspapers this morning, claiming that men whose brothers or fathers have been convicted of a sex offense are “five times more likely to commit sex crimes than the average male” and that this increased risk of committing rape or molesting a child “may run in a family’s male genes.” The study, published online today in the International Journal of Epidemiology, analyzed data from 21,566 male sex offenders convicted in Sweden between 1973 and 2009 and concluded that genetics may account for at least 40% of the likelihood of committing a sex crime. (Women, who commit less than 1% of Sweden’s sexual offenses, were omitted from the analysis.) The scientists have suggested that the new research could be used to help identify potential offenders and target high-risk families for early intervention efforts. But independent experts—and even the researchers who led the work, to a certain degree—warn that the study has some serious limitations. Here are a few reasons to take its conclusions, and the headlines, with a generous dash of salt. Alternate explanations: Most studies point to early life experiences, such as childhood abuse, as the most important risk factor for becoming a perpetrator of abuse in adulthood. The new study, however, did not include any detail about the convicted sex criminals’ early life exposure to abuse. Instead, by comparing fathers with sons, and full brothers and half-brothers reared together or apart, the scientists attempted to tease out the relative contributions of shared environment and shared genes to the risk of sexual offending. Based on their analyses, the researchers concluded that shared environment accounted for just 2% of the risk of sexual offense, while genetics accounted for roughly 40%. 
Although there is likely some genetic contribution to sexual offending—perhaps related to impulsivity or sex drive—the group “may be overestimating the role of genes” because their assumptions were inaccurate, says Fred Berlin, a psychiatrist and sexologist at Johns Hopkins University in Baltimore, Maryland. © 2015 American Association for the Advancement of Science.
By ERICA GOODE He was described, in the immediate aftermath of the Germanwings crash, as a cheerful and careful pilot, a young man who had dreamed of flying since boyhood. But in the days since, it has seemed increasingly clear that Andreas Lubitz, 27, the plane’s co-pilot, was something far more sinister: the perpetrator of one of the worst mass murder-suicides in history. If what researchers have learned about such crimes is any indication, this notoriety may have been just what Mr. Lubitz wanted. The actions now attributed to Mr. Lubitz — taking 149 unsuspecting people with him to a horrifying death — seem in some ways unfathomable, and his motives may never be fully understood. But studies over the last decades have begun to piece together characteristics that many who carry out such violence seem to share, among them a towering narcissism, a strong sense of grievance and a desire for infamy. Adam Lankford, an associate professor of criminal justice at the University of Alabama, said that in his research on mass killers who also took their own lives, he has found “a significant number of cases where they mention a desire for fame, glory or attention as a motive.” Before Adam Lanza, 20, the Sandy Hook Elementary School shooter, killed 20 children, six adults and himself in 2012, he wrote in an online forum, “Just look at how many fans you can find for all different types of mass murderers.” Robert Hawkins, 19, who committed suicide after killing eight people at a shopping mall in Omaha in 2007, left a note saying “I’m gonna be famous,” punctuating the sentence with an expletive. And Dylan Klebold, 17, of Columbine High School fame, bragged that the goal was to cause “the most deaths in U.S. history…we’re hoping. We’re hoping.” “Directors will be fighting over this story,” Mr. Klebold said in a video made before the massacre. © 2015 The New York Times Company
By Virginia Morell Rats and mice in pain make facial expressions similar to those in humans—so similar, in fact, that a few years ago researchers developed rodent “grimace scales,” which help them assess an animal’s level of pain simply by looking at its face. But scientists have questioned whether these expressions convey anything to other rodents, or if they are simply physiological reactions devoid of meaning. Now, researchers report that other rats do pay attention to the emotional expressions of their fellows, leaving an area when they see a rat that’s suffering. “It’s a finding we thought might be true, and are glad that someone figured out how to do an experiment that shows it,” says Jeffrey Mogil, a neuroscientist at McGill University in Montreal, Canada. Mogil’s lab developed the pain grimace scales for mice and rats, and in 2006 it discovered that mice experience pain when they see a familiar mouse suffering—a psychological phenomenon known as emotional contagion. According to Mogil, a rodent in pain expresses its anguish through narrowed eyes, flattened ears, and a swollen nose and cheeks. Because people can read these visual cues and gauge the intensity of the animal’s pain, Mogil has long thought that other rats could do so as well. In Japan, Satoshi Nakashima, a social cognition psychologist at NTT Communication Science Laboratories in Kanagawa, thought the same thing. And, knowing that other scientists had recently shown that mice can tell the difference between paintings by Picasso and Renoir, he decided to see if rodents could also discriminate between photographs of their fellows’ expressions. He designed the current experiments as part of his doctoral research. © 2015 American Association for the Advancement of Science
Mo Costandi During the 1960s, neuroscientists Ronald Melzack and Patrick Wall proposed an influential new theory of pain. At the time, researchers were struggling to explain the phenomenon. Some believed that specific nerve fibres carry pain signals up into the brain, while others argued that the pain signals are transmitted by intense firing of non-specific fibres. Neither idea was entirely satisfactory, because they could not explain why spinal surgery often fails to abolish pain, why gentle touch and other innocuous stimuli can sometimes cause excruciating pain, or why intensely painful stimuli are not always experienced as such. Melzack and Wall’s Gate Control Theory stated that inhibitory neurons in the spinal cord control the relay of pain signals into the brain. Despite having some holes in it, the theory provided a revolutionary new framework for understanding the neural basis of pain, and ushered in the modern era of pain research. Now, almost exactly 50 years after the publication of Melzack and Wall’s theory, European researchers provide direct evidence of gatekeeper cells that control the flow of pain and itch signals from the spinal cord to the brain. The experience that we call “pain” is an extremely complex one that often involves emotional aspects. Researchers therefore distinguish it from nociception, the process by which the nervous system detects noxious stimuli. Nociception is mediated by primary sensory neurons, whose cell bodies are clumped together in the dorsal root ganglia that run alongside the spinal cord. Each has a single fibre that splits in two not far from the cell body, sending one branch out to the skin surface and the other into the spinal cord. © 2015 Guardian News and Media Limited
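The gating mechanism described above lends itself to a toy numerical sketch: an inhibitory interneuron in the dorsal horn, excited by innocuous touch fibres and suppressed by nociceptive fibres, damps the projection neuron that relays pain signals to the brain. The function and weights below are an illustration invented for this page, not anything from Melzack and Wall's original paper.

```python
# Toy illustration of gate control: touch input closes the "gate",
# nociceptive input opens it. All weights are arbitrary.

def projection_output(c_fiber, a_beta):
    """Firing of the spinal projection neuron on a 0-1 scale.

    c_fiber: nociceptive (noxious) input strength, 0-1.
    a_beta:  innocuous touch input strength, 0-1.
    """
    # Inhibitory interneuron: driven up by touch, down by nociceptive input.
    gate = max(0.0, 0.8 * a_beta - 0.4 * c_fiber)
    # Projection neuron: nociceptive drive minus the gating inhibition.
    return max(0.0, c_fiber - gate)

# Strong noxious input alone: gate stays open, signal relayed in full.
print(round(projection_output(c_fiber=1.0, a_beta=0.0), 2))  # → 1.0
# Same noxious input plus rubbing the sore spot: gate partly closes.
print(round(projection_output(c_fiber=1.0, a_beta=0.8), 2))  # → 0.76
```

This is one reason rubbing a stubbed toe dulls the pain: in the sketch, adding touch input lowers the relayed signal without changing the noxious input itself.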
By JENEEN INTERLANDI Nyiregyhaza (pronounced NEAR-re-cha-za) is a medium-size city tucked into the northeastern corner of Hungary, about 60 miles from the Ukrainian border. It has a world-class zoo, several museums and universities and a new Lego Factory. It also has two Roma settlements, or “Gypsy ghettos.” The larger of these settlements is Gusev, a crumbling 19th-century military barracks separated from the city proper by a railway station and a partly defunct industrial zone. Gusev is home to more than 1,000 Roma. Its chief amenities include a small grocery store and a playground equipped with a lone seesaw and a swingless swing set. There’s also a freshly painted elementary school, where approximately 60 students are currently enrolled. Almost all those students are Roma and almost all of them live in Gusev. Officially, most of the schools in Nyiregyhaza are integrated. Roma students have access to the same facilities as non-Roma students, and the ethnic balance of any given facility largely reflects the ethnic balance of the neighborhoods it serves. In practice, things are muddier. While many families in Gusev have been assigned to perfectly reputable schools, there is no busing program, and most schools are not within walking distance. For families living on just 60,000 forints ($205) a month, the schools are also too expensive to reach by public transit. “Everything is fine on paper,” Adel Kegye, an attorney with the Chance for Children Foundation (C.F.C.F.), told me when I visited Hungary this past fall. “But in reality, they make it very hard for the Roma to go anywhere but the settlement school.” … In the past two decades, with the advent of fMRI technology, neuroscientists also began to tackle such questions. Emile Bruneau, a cognitive neuroscientist at the Massachusetts Institute of Technology, has spent the past seven years studying intractable conflicts around the world. © 2015 The New York Times Company
Christian Jarrett In November 2013, I proudly launched the Brain Watch blog here at WIRED. This will be my final post. For seventeen months I’ve used the blog to report on new neuroscience findings, to reflect on how neuroscience is influencing the public and media, to investigate the claims of brain products, to explore neurological abnormality and death, and to debunk misconceptions about the brain. I loved reading your comments and I was thrilled when I found my ideas from here quoted in other publications. It’s been a lot of fun. Here’s some of what I learned: Brain myths die hard When the movie Lucy came out last year, it provided me an opportunity to challenge the 10% brain myth and explore its origins (the idea we only use 10% of our brains is a premise of the film). With such tired myths, it’s easy to wonder if anybody believes them anymore. Writing this blog, I learned not to underestimate their staying power! Consider the vitriol my 10% post attracted from a neuroscience grad student at Yale. In an email dripping with disdain she told me “You … should feel ashamed for releasing such a misinformed article. … There are misinformed and uneducated people all over the internet trying to disprove this 10% notion, but that is expected. This is certainly NOT something I expected from someone allegedly as well educated as yourself.” Brain science is confusing and complicated Hardly a revelation, you might say. But writing this blog brought home to me the messy reality of neuroscience. Consider how tabloid papers like dividing the world into those activities and technologies that cause brain shrinkage and those that cause brain growth – the implicit assumption always being that growth is good and shrinkage bad.
Jon Hamilton Since his birth 33 years ago, Jonathan Keleher has been living without a cerebellum, a structure that usually contains about half the brain's neurons. This exceedingly rare condition has left Jonathan with a distinctive way of speaking and a walk that is slightly awkward. He also lacks the balance to ride a bicycle. But all that hasn't kept him from living on his own, holding down an office job and charming pretty much every person he meets. "I've always been more into people than anything else," Jonathan tells me when I meet him at his parents' house in Concord, Mass., a suburb of Boston. "Why read a book or why do anything when you can be social and talk to people?" Jonathan is also making an important contribution to neuroscience. By allowing scientists to study him and his brain, he is helping to change some long-held misconceptions about what the cerebellum does. And that, in turn, could help the hundreds of thousands of people whose cerebellums have been damaged by a stroke, infection or disease. For decades, the cerebellum has been the "Rodney Dangerfield of the brain," says Dr. Jeremy Schmahmann, a professor of neurology at Harvard and Massachusetts General Hospital. It gets no respect because most scientists only know about its role in balance and fine motor control. © 2015 NPR
Brian Owens Our choice between two moral options might be swayed by tracking our gaze, and asking for a decision at the right moment. People asked to choose between two written moral statements tend to glance more often towards the option they favour, experimental psychologists say. More surprisingly, the scientists also claim it’s possible to influence a moral choice: asking for an immediate decision as soon as someone happens to gaze at one statement primes them to choose that option. It’s well known that people tend to look more towards the option they are going to choose when they are choosing food from a menu, says Philip Pärnamets, a cognitive scientist from Lund University in Sweden. He wanted to see if that applied to moral reasoning as well. “Moral decisions have long been considered separately from general decision-making,” he says. “I wanted to integrate them.” In a paper published today in the Proceedings of the National Academy of Sciences1, Pärnamets and his colleagues explain how they presented volunteers with a series of moral statements, such as 'murder is sometimes justified,' 'masturbating with the aid of a willing animal is acceptable' and 'paying taxes is a good thing.' Then the psychologists tracked the volunteers’ gaze as two options appeared on a screen. Once the tracker had determined that a person had spent at least 750 milliseconds looking at one answer and 250 milliseconds at the other, the screen changed to prompt them to make a decision. Almost 60% of the time, they chose the most viewed option — indicating, says Pärnamets, that eye gaze tracks an unfolding moral decision. © 2015 Nature Publishing Group,
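The gaze-contingent rule described above (the decision prompt appears once one option has drawn at least 750 milliseconds of gaze and the other at least 250) can be sketched in a few lines. This is an illustrative toy version, not the researchers' code; the function name, the 50 ms sampling interval, and the example gaze stream are all invented here.

```python
# Sketch of the gaze-contingent prompt used in the moral-choice study:
# accumulate dwell time on each option and fire the prompt once the
# 750 ms / 250 ms thresholds are both met.

def prompt_time(gaze_samples, sample_ms=50, majority_ms=750, minority_ms=250):
    """Return the sample index at which the decision prompt fires, or None.

    gaze_samples: sequence of "A", "B", or None (looking elsewhere),
    one entry per eye-tracker sample of duration sample_ms.
    """
    dwell = {"A": 0, "B": 0}
    for i, target in enumerate(gaze_samples):
        if target in dwell:
            dwell[target] += sample_ms
        lo, hi = sorted(dwell.values())
        if hi >= majority_ms and lo >= minority_ms:
            return i  # prompt now; the most-viewed option is the likely choice
    return None  # thresholds never reached

# Example: 16 samples (800 ms) on option A, then 6 samples (300 ms) on B.
stream = ["A"] * 16 + ["B"] * 6
print(prompt_time(stream))  # → 20 (fires as soon as B reaches 250 ms)
```

The asymmetric thresholds are the point of the manipulation: by construction, the prompt tends to arrive while the participant is fixating the option the experimenters want to nudge them toward.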
When it comes to fight or flight for brawling crickets, a chemical in the brain is in charge. Being roughed up in a skirmish can trigger nerve cells in Mediterranean field crickets (Gryllus bimaculatus) to release nitric oxide, making the losing cricket run away, scientists report online March 13 in Science Advances. When two crickets face off and the loser hits its limit, it flees the fight. In a second bout, the loser then tries to avoid the winner. Nitric oxide prompts this continued submissive behavior, which lasts several hours before a cricket’s will to fight returns. “If you block nitric oxide they recover quickly, and if you give them nitric oxide they don’t,” says Paul Stevenson, a coauthor of the new research and a behavioral neurobiologist at Leipzig University in Germany. “It’s a very simple algorithm for controlling a very complicated social situation.” P. Stevenson and J. Rillich. Adding up the odds—Nitric oxide signaling underlies the decision to flee and post-conflict depression of aggression. Science Advances. Published online March 13, 2015. doi: 10.1126/sciadv.1500060. © Society for Science & the Public 2000 - 2015.
By Nicholas Bakalar People sometimes take Valium or Ativan to relieve anxiety before surgery, but a new study suggests that these benzodiazepine drugs have little beneficial effect and may even delay recovery. Researchers studied 1,062 patients admitted to French hospitals for surgery requiring general anesthesia. A third took 2.5 milligrams of lorazepam (brand name Ativan), a third received a placebo, and a third were given no premedication. Patients completed questionnaires assessing anxiety, pain levels and quality of sleep before and a day after their operations, while researchers recorded the time until their ventilation tubes were removed and until they regained full wakefulness. The study was published in JAMA. Lorazepam was associated with more postsurgery amnesia and a longer time to recover cognitive abilities. Quality of sleep was impaired in the lorazepam group, but not in the others. And ventilation tubes were kept in significantly longer in the lorazepam group. Pain scores did not differ between the lorazepam and the no-medication groups, but there was more pain in the group given the placebo. The lead author, Dr. Axel Maurice-Szamburski, an anesthesiologist at Timone Hospital in Marseille, cited recent surveys showing that benzodiazepines are widely prescribed before surgery. “But until now,” he added, “sedatives have not been evaluated from the patient’s point of view. It’s the patient who should be happy, not the doctor.” © 2015 The New York Times Company
By TIMOTHY WILLIAMS In January 1972, Cecil Clayton was cutting wood at his family’s sawmill in southeastern Missouri when a piece of lumber flew off the circular saw blade and struck him in the forehead. The impact caved in part of Mr. Clayton’s skull, driving bone fragments into his brain. Doctors saved his life, but in doing so had to remove 20 percent of his frontal lobe, which psychiatrists say led Mr. Clayton to be tormented for years by violent impulses, schizophrenia and extreme paranoia. In 1996, his lawyers say, those impulses drove Mr. Clayton to kill a law enforcement officer. Today, as Mr. Clayton, 74, sits on death row, his lawyers have returned to that 1972 sawmill accident in a last-ditch effort to save his life, arguing that Missouri’s death penalty law prohibits the execution of severely brain-damaged people. Lawyers for Mr. Clayton, who has an I.Q. of 71, say he should be spared because his injury has made it impossible for him to grasp the significance of his death sentence, scheduled for March 17. “There was a profound change in him that he doesn’t understand, and neither did his family,” said Elizabeth Unger Carlyle, one of Mr. Clayton’s lawyers. While several rulings by the United States Supreme Court in recent years have narrowed the criteria for executing people who have a mental illness, states continue to hold wide sway in establishing who is mentally ill. The debate surrounding Mr. Clayton involves just how profoundly his impairment has affected his ability to understand what is happening to him. Mr. Clayton is missing about 7.7 percent of his brain. © 2015 The New York Times Company
By RICHARD A. FRIEDMAN CHANCES are that everyone on this planet has experienced anxiety, that distinct sense of unease and foreboding. Most of us probably assume that anxiety always has a psychological trigger. Yet clinicians have long known that there are plenty of people who experience anxiety in the absence of any danger or stress and haven’t a clue why they feel distressed. Despite years of psychotherapy, many experience little or no relief. It’s as if they suffer from a mental state that has no psychological origin or meaning, a notion that would seem heretical to many therapists, particularly psychoanalysts. Recent neuroscience research explains why, in part, this may be the case. For the first time, scientists have demonstrated that a genetic variation in the brain makes some people inherently less anxious, and more able to forget fearful and unpleasant experiences. This lucky genetic mutation produces higher levels of anandamide — the so-called bliss molecule and our own natural marijuana — in our brains. In short, some people are prone to be less anxious simply because they won the genetic sweepstakes and randomly got a genetic mutation that has nothing at all to do with strength of character. About 20 percent of adult Americans have this mutation. Those who do may also be less likely to become addicted to marijuana and, possibly, other drugs — presumably because they don’t need the calming effects that marijuana provides. One patient of mine, a man in his late 40s, came to see me because he was depressed and lethargic. He told me at our first meeting that he had been using cannabis almost daily for at least the past 15 years. “It became a way of life,” he explained. “Things are more interesting, and I can tolerate disappointments without getting too upset.” © 2015 The New York Times Company
By Will Boggs MD NEW YORK (Reuters Health) - Adolescents with a history of childhood trauma show different neural responses to subjective anxiety and craving, researchers report. "I think the finding of increased activation of insula, anterior cingulate, and prefrontal cortex in response to stress cues in the high- relative to low-trauma group, while arguably not necessarily unexpected, is important as it suggests that youth exposed to higher levels of trauma may experience different brain responses to similar stressors," Dr. Marc N. Potenza from Yale University, New Haven, Connecticut told Reuters Health by email. Childhood trauma has been associated with anxiety and depression, as well as obesity, risky sexual behavior, and substance use. Previous imaging studies have not investigated neural responses to personalized stimuli, Dr. Potenza and his colleagues write in Neuropsychopharmacology, online January 8. The team used functional MRI to assess regional brain activations to personalized appetitive (favorite food), aversive (stress), and neutral/relaxing cues in 64 adolescents, including 33 in the low-trauma group and 31 in the high-trauma group. Two-thirds of the adolescents had been exposed to cocaine prenatally, with prenatal cocaine exposure being significantly over-represented in the high-trauma group. Compared with the low-trauma group, the high-trauma group showed increased responsivity in several cortical regions in response to stress, as well as decreased activation in the cerebellar vermis and right cerebellum in response to neutral/relaxing cues. But the two groups did not differ significantly in their responses to favorite-food cues, the researchers found. © 2015 Scientific American
By Nicholas Bakalar Gout, a form of arthritis, is extremely painful and associated with an increased risk for cardiovascular problems. But there is a bright side: It may be linked to a reduced risk for Alzheimer’s disease. Researchers compared 59,204 British men and women with gout to 238,805 without the ailment, with an average age of 65. Patients were matched for sex, B.M.I., smoking, alcohol consumption and other characteristics. The study, in The Annals of the Rheumatic Diseases, followed the patients for five years. They found 309 cases of Alzheimer’s among those with gout and 1,942 among those without. Those with gout, whether they were being treated for the condition or not, had a 24 percent lower risk of Alzheimer’s disease. The reason for the connection is unclear. But gout is caused by excessive levels of uric acid in the blood, and previous studies have suggested that uric acid protects against oxidative stress. This may play a role in limiting neuron degeneration. “This is a dilemma, because uric acid is thought to be bad, associated with heart disease and stroke,” said the senior author, Dr. Hyon K. Choi, a professor of medicine at Harvard. “This is the first piece of data suggesting that uric acid isn’t all bad. Maybe there is some benefit. It has to be confirmed in randomized trials, but that’s the interesting twist in this story.” © 2015 The New York Times Company
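The raw counts quoted above are enough for a back-of-the-envelope check of the headline figure. The calculation below is my own, not the study's analysis: a crude risk ratio that ignores follow-up time and covariate adjustment, which is why it comes out somewhat larger than the adjusted 24 percent reduction the paper reports.

```python
# Crude five-year Alzheimer's risk in each group, from the counts in
# the article, and the unadjusted risk ratio between them.

gout_cases, gout_n = 309, 59204      # Alzheimer's cases among gout patients
ctrl_cases, ctrl_n = 1942, 238805    # cases among matched non-gout patients

risk_gout = gout_cases / gout_n      # ≈ 0.52% over five years
risk_ctrl = ctrl_cases / ctrl_n      # ≈ 0.81% over five years
risk_ratio = risk_gout / risk_ctrl

print(f"crude risk ratio: {risk_ratio:.2f}")  # → crude risk ratio: 0.64
```

So even before adjustment, gout patients developed Alzheimer's at roughly two-thirds the rate of the comparison group; the published 24 percent figure is the more conservative, covariate-adjusted estimate.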
|By Charles Schmidt The notion that the state of our gut governs our state of mind dates back more than 100 years. Many 19th- and early 20th-century scientists believed that accumulating wastes in the colon triggered a state of “auto-intoxication,” whereby poisons emanating from the gut produced infections that were in turn linked with depression, anxiety and psychosis. Patients were treated with colonic purges and even bowel surgeries until these practices were dismissed as quackery. The ongoing exploration of the human microbiome promises to bring the link between the gut and the brain into clearer focus. Scientists are increasingly convinced that the vast assemblage of microfauna in our intestines may have a major impact on our state of mind. The gut-brain axis seems to be bidirectional—the brain acts on gastrointestinal and immune functions that help to shape the gut's microbial makeup, and gut microbes make neuroactive compounds, including neurotransmitters and metabolites that also act on the brain. These interactions could occur in various ways: microbial compounds communicate via the vagus nerve, which connects the brain and the digestive tract, and microbially derived metabolites interact with the immune system, which maintains its own communication with the brain. Sven Pettersson, a microbiologist at the Karolinska Institute in Stockholm, has recently shown that gut microbes help to control leakage through both the intestinal lining and the blood-brain barrier, which ordinarily protects the brain from potentially harmful agents. Microbes may have their own evolutionary reasons for communicating with the brain. They need us to be social, says John Cryan, a neuroscientist at University College Cork in Ireland, so that they can spread through the human population. © 2015 Scientific American