Most Recent Links
By GARY GREENBERG Joel Gold first observed the Truman Show delusion — in which people believe they are the involuntary subjects of a reality television show whose producers are scripting the vicissitudes of their lives — on Halloween night 2003 at Bellevue Hospital, where he was the chief attending psychiatrist. “Suspicious Minds,” which he wrote with his brother, Ian, an associate professor of philosophy and psychology at McGill University, is an attempt to use this delusion, which has been observed by many clinicians, to pose questions that have gone out of fashion in psychiatry over the last half-century: Why does a mentally ill person have the delusions he or she has? And, following the lead of the medical historian Roy Porter, who once wrote that “every age gets the lunatics it deserves,” what can we learn about ourselves and our times from examining the content of madness? The Golds’ answer is a dual broadside: against a psychiatric profession that has become infatuated with neuroscience as part of its longstanding attempt to establish itself as “real medicine,” and against a culture that has become too networked for its own good. Current psychiatric practice is to treat delusions as the random noise generated by a malfunctioning (and mindless) brain — a strategy that would be more convincing if doctors had a better idea of how the brain produced madness and how to cure it. According to the Golds, ignoring the content of delusions like T.S.D. can only make mentally ill people feel more misunderstood, even as it distracts the rest of us from the true significance of the delusion: that we live in a society that has put us all under surveillance. T.S.D. sufferers may be paranoid, but that does not mean they are wrong to think the whole world is watching. This is not to say they aren’t crazy. Mental illness may be “just a frayed, weakened version of mental health,” but what is in tatters for T.S.D. 
patients is something crucial to negotiating social life, and that, according to the Golds, is the primary purpose toward which our big brains have evolved: the ability to read other people’s intentions or, as cognitive scientists put it, to have a theory of mind. This capacity is double-edged. “The better you are at ToM,” they write, “the greater your capacity for friendship.” © 2014 The New York Times Company
Link ID: 20013 - Posted: 08.30.2014
By Virginia Morell A dog’s bark may sound like nothing but noise, but it encodes important information. In 2005, scientists showed that people can tell whether a dog is lonely, happy, or aggressive just by listening to his bark. Now, the same group has shown that dogs themselves distinguish between the barks of pooches they’re familiar with and the barks of strangers and respond differently to each. The team tested pet dogs’ reactions to barks by playing back recorded barks of a familiar and an unfamiliar dog. The recordings were made in two different settings: when the pooch was alone, and when he was barking at a stranger at his home’s fence. When the test dogs heard a strange dog barking, they stayed closer to their home’s gate, and stayed there longer, than when they heard the bark of a familiar dog. But when they heard an unknown and lonely dog barking, they stayed close to their house and away from the gate, the team reports this month in Applied Animal Behaviour Science. They also moved closer to their house when they heard a familiar dog’s barks, and they barked more often in response to a strange dog barking. Dogs, the scientists conclude from this first study of pet dogs barking in their natural environment (their owners’ homes), do indeed pay attention to and glean detailed information from their fellows’ barks. © 2014 American Association for the Advancement of Science
One of the best things about being a neuroscientist used to be the aura of mystery around it. It was once so mysterious that some people didn’t even know it was a thing. When I first went to university and people asked what I studied, they thought I was saying I was a “Euroscientist”, which is presumably someone who studies the science of Europe. I’d get weird questions such as “what do you think of Belgium?” and I’d have to admit that, in all honesty, I never think of Belgium. That’s how mysterious neuroscience was, once. Of course, you could say this confusion was due to my dense Welsh accent, or the fact that I only had the confidence to talk to strangers after consuming a fair amount of alcohol, but I prefer to go with the mystery. It’s not like that any more. Neuroscience is “mainstream” now, to the point where the press coverage of it can be studied extensively. When there’s such a thing as Neuromarketing (well, there isn’t actually such a thing, but there’s a whole industry that would claim otherwise), it’s impossible to maintain that neuroscience is “cool” or “edgy”. It’s a bad time for us neurohipsters (which are the same as regular hipsters, except the designer beards are on the frontal lobes rather than the jaw-line). One way that we professional neuroscientists could maintain our superiority was by correcting misconceptions about the brain, but lately even that avenue looks to be closing to us. The recent film Lucy is based on the most classic brain misconception: that we only use 10% of our brain. But it’s taken a considerable amount of flak for this already, suggesting that many people are wise to this myth. We also saw the recent release of Susan Greenfield’s new book Mind Change, all about how technology is changing (damaging?) our brains. This is a worryingly evidence-free but very common claim by Greenfield. Depressingly common, as this blog has pointed out many times. But now even the non-neuroscientist reviewers aren’t buying her claims. 
© 2014 Guardian News and Media Limited
Link ID: 20011 - Posted: 08.30.2014
By PAM BELLUCK Memories and the feelings associated with them are not set in stone. You may have happy memories about your family’s annual ski vacation, but if you see a tragic accident on the slopes, those feelings may change. You might even be afraid to ski that mountain again. Now, using a technique in which light is used to switch neurons on and off, neuroscientists at the Massachusetts Institute of Technology appear to have unlocked some secrets about how the brain attaches emotions to memories and how those emotions can be adjusted. Their research, published Wednesday in the journal Nature, was conducted on mice, not humans, so the findings cannot immediately be translated to the treatment of patients. But experts said the experiments may eventually lead to more effective therapies for people with psychological problems such as depression, anxiety or post-traumatic stress disorder. “Imagine you can go in and find a particular traumatic memory and turn it off or change it somehow,” said David Moorman, an assistant professor of psychological and brain sciences at the University of Massachusetts Amherst, who was not involved in the research. “That’s still science fiction, but with this we’re getting a lot closer to it.” The M.I.T. scientists labeled neurons in the brains of mice with a light-sensitive protein and used pulses of light to switch the cells on and off, a technique called optogenetics. Then they identified patterns of neurons activated when mice created a negative memory or a positive one. A negative memory formed when mice received a mild electric shock to their feet; a positive one was formed when the mice, all male, were allowed to spend time with female mice. © 2014 The New York Times Company
by Penny Sarchet Memory is a fickle beast. A bad experience can turn a once-loved coffee shop or holiday destination into a place to be avoided. Now experiments in mice have shown how such associations can be reversed. When forming a memory of a place, the details of the location and the associated emotions are encoded in different regions of the brain. Memories of the place are formed in the hippocampus, whereas positive or negative associations are encoded in the amygdala. In experiments with mice in 2012, a group led by Susumu Tonegawa of the Massachusetts Institute of Technology managed to trigger the fear part of a memory associated with a location when the animals were in a different location. They used a technique known as optogenetics, which involves genetically engineering mice so that their brains produce a light-sensitive protein in response to a certain cue. In this case, the cue was the formation of the location memory. This meant the team could make the mouse recall the location just by flashing pulses of light down an optical fibre embedded in the skull. The mice were given electric shocks while their memories of the place were being formed, so that the animals learned to associate that location with pain. Once trained, the mice were put in a new place and a pulse of light was flashed into their brains. This activated the neurons associated with the original location memory and the mice froze, terrified of a shock, demonstrating that the emotion associated with the original location could be induced by reactivating the memory of the place. © Copyright Reed Business Information Ltd.
Learning is easier when it requires nerve cells only to rearrange existing patterns of activity than when the cells have to generate entirely new patterns, a study of monkeys has found. The scientists explored the brain’s capacity to learn through recordings of electrical activity of brain cell networks. The study was partly funded by the National Institutes of Health. “We looked into the brain and may have seen why it’s so hard to think outside the box,” said Aaron Batista, Ph.D., an assistant professor at the University of Pittsburgh and a senior author of the study published in Nature, with Byron Yu, Ph.D., assistant professor at Carnegie Mellon University, Pittsburgh. The human brain contains nearly 86 billion neurons, which communicate through intricate networks of connections. Understanding how they work together during learning can be challenging. Dr. Batista and his colleagues combined two innovative technologies, brain-computer interfaces and machine learning, to study patterns of activity among neurons in monkey brains as the animals learned to use their thoughts to move a computer cursor. “This is a fundamental advance in understanding the neurobiological patterns that underlie the learning process,” said Theresa Cruz, Ph.D., a program official at the National Center for Medical Rehabilitation Research at NIH’s Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD). “The findings may eventually lead to new treatments for stroke as well as other neurological disorders.”
Erin Allday It's well established that chronic pain afflicts people with more than just pain. With the pain come fatigue and sleeplessness, depression and frustration, and a noticeable disinterest in so many of the activities that used to fill a day. It makes sense that chronic pain would leave patients feeling weary and unmotivated - most people wouldn't want to go to work or shop for a week's worth of groceries or even meet friends for dinner when they're exhausted and in pain. But experts in pain and neurology say the connection between chronic pain and a lousy mood may be biochemical, something more complicated than a dour mood brought on from persistent, long-term discomfort alone. Now, a team of Stanford neurologists has found evidence that chronic pain triggers a series of molecular changes in the brain that may sap patients' motivation. "There is an actual physiologic change that happens," said Dr. Neil Schwartz, a post-doctoral scientist who helped lead the Stanford research. "The behavior changes seem quite primary to the pain itself. They're not just a consequence of living with it." Schwartz and his colleagues hope their work could someday lead to new treatments for the behavior changes that come with chronic pain. In the short term, the research improves understanding of the biochemical effects of chronic pain and may be a comfort to patients who blame themselves for their lack of motivation, pain experts said. © 2014 Hearst Communications, Inc.
By ANNA NORTH “You can learn a lot from what you see on a screen,” said Yalda T. Uhls. However, she told Op-Talk, “It’s not going to give you context. It’s not going to give you the big picture.” Ms. Uhls, a researcher at the Children’s Digital Media Center in Los Angeles, was part of a team that looked at what happened when kids were separated from their screens — phones, iPads, laptops and the like — for several days. Their findings may have implications for adults’ relationship to technology, too. For a paper published in the journal Computers in Human Behavior, the researchers studied 51 sixth-graders who attended a five-day camp where no electronic devices were allowed. Before and after the camp, they tested the kids’ emotion-recognition skills using photos of facial expressions and sound-free video clips designed to measure their reading of nonverbal cues. The kids did significantly better on both tests after five screen-free days; a group of sixth-graders from the same school who didn’t go to camp showed less or no improvement. Ms. Uhls, who also works for the nonprofit Common Sense Media, told Op-Talk that a number of factors might have been at play in the campers’ improvement. For instance, their time in nature might have played a role. But to her, the most likely explanation was the sheer increase in face-to-face interaction: “The issue really is not that staring at screens is going to make you bad at recognizing emotions,” she said. “It’s more that if you’re looking at screens you’re not looking at the world, and you’re not looking at people.” Many adults have sought out the same Internet-free experience the kids had, though they usually don’t go to camp to get it. The novelist Neil Gaiman took a “sabbatical from social media” in 2013, “so I can concentrate on my day job: making things up.” © 2014 The New York Times Company
Link ID: 20006 - Posted: 08.28.2014
by Bethany Brookshire Premenstrual syndrome, or PMS, can be a miserable experience. Women report over 200 symptoms in the days before menstruation occurs. The complaints run the gamut from irritable mood to bloating. PMS can be so slight you don’t even notice, or it can be so severe it has its own category — premenstrual dysphoric disorder. But to some, PMS is just a punchline, a joke featured in pop culture from Buffy the Vampire Slayer to Saturday Night Live. Michael Gillings, who studies molecular evolution at Macquarie University in Sydney, thinks that PMS could have a purpose. In a perspective piece published August 11 in Evolutionary Applications, Gillings proposes that PMS confers an evolutionary advantage, increasing the likelihood that a woman will leave an infertile mate. He hopes that his idea could lead to more research and less stigma about the condition. But while his hypothesis certainly sparked a lot of discussion, whether it is likely, or even necessary, is in doubt. Gillings first began to think about PMS when he found out that premenstrual dysphoric disorder was being added to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders. “I started to think that we have a normal distribution of PMS responses, where some people don’t get any symptoms, the majority gets mild symptoms, and some get severe symptoms,” he explains. Including PMDD in DSM-5 made a statement, he says, that “we were going to take one end of this normal curve, the extreme far right end, and we were going to draw a line and say, those people there have a disease we’re going to label in our book. But if 80 percent of women get some kind of premenstrual symptoms, then it’s normal. And I wondered, if it’s so normal, what could be the reason for it?” © Society for Science & the Public 2000 - 2014.
|By Roni Jacobson Almost immediately after Albert Hofmann discovered the hallucinogenic properties of LSD in the 1940s, research on psychedelic drugs took off. These consciousness-altering drugs showed promise for treating anxiety, depression, post-traumatic stress disorder (PTSD), obsessive-compulsive disorder (OCD) and addiction, but increasing government conservatism caused a research blackout that lasted decades. Lately, however, there has been a resurgence of interest in psychedelics as possible therapeutic agents. This past spring Swiss researchers published results from the first drug trial involving LSD in more than 40 years. Although the freeze on psychedelic research is thawing, scientists say that restrictive drug policies are continuing to hinder their progress. In the U.S., LSD, psilocybin, MDMA, DMT, peyote, cannabis and ibogaine (a hallucinogen derived from an African shrub) are all classified as Schedule I illegal drugs, which the U.S. Drug Enforcement Administration defines as having a high potential for abuse and no currently accepted medical applications—despite extensive scientific evidence to the contrary. In a joint report released in June, the Drug Policy Alliance and the Multidisciplinary Association for Psychedelic Studies catalogue several ways in which they say that the DEA has unfairly obstructed research on psychedelics, including by overruling an internal recommendation in 1986 that MDMA be placed on a less restrictive schedule. The DEA and the U.S. Food and Drug Administration maintain that there is insufficient research to justify recategorization. This stance creates a catch-22 by basing the decision on the need for more research while limiting the ability of scientists to conduct that research. © 2014 Scientific American
|By Michael Leon I had been working quite happily on the basic biology of the brain when a good friend of mine called for advice about his daughter, who had just been diagnosed with autism. I could hear the anguish and fear in his voice when he asked me whether there was anything that could be done to make her better. I told him about the standard-care therapies, including Intensive Behavioral Intervention, Early Intensive Behavioral Intervention, Applied Behavior Analysis, and the Early Start Denver Model (ESDM). These therapies are expensive and time-consuming and have variable outcomes, with the best outcomes seen for ESDM. There are, however, few ESDM therapists, and the cost of such intensive therapy can be quite high. Moreover, my friend’s daughter was already past the age of the oldest children in the study that demonstrated the efficacy of ESDM. My feeling was that there was a good chance that there was an effective therapy for her using a simple, inexpensive at-home approach involving daily exposure to a wide variety of sensory stimulation. This is a partial list of the disorders whose symptoms can be greatly reduced, or even completely reversed, with what is known as “environmental enrichment”: autism, stroke, seizures, brain damage, neuronal death during aging, ADHD, prenatal alcohol syndrome, lead exposure, multiple sclerosis, addiction, schizophrenia, memory loss, Huntington’s disease, Parkinson’s disease, Alzheimer’s disease, Down syndrome and depression. But why haven’t you heard about this? The reason is that all of these disorders have been successfully treated only in animal models of these neurological problems. However, the effects seen in lab animals can be dramatic. © 2014 Scientific American
by Tom Siegfried René Descartes was a very clever thinker. He proved his own existence, declaring that because he thought, he must exist: “I think, therefore I am.” But the 17th century philosopher-mathematician-scientist committed a serious mental blunder when he decided that the mind doing the thinking was somehow separate from the brain it lived in. Descartes believed that thought was insubstantial, transmitted from the ether to the pineal gland, which played the role of something like a Wi-Fi receiver embedded deep in the brain. Thereafter mind-brain dualism became the prevailing prejudice. Nowadays, though, everybody with a properly working brain realizes that the mind and brain are coexistent. Thought processes and associated cognitive mental activity all reflect the physics and chemistry of cells and molecules inhabiting the brain’s biological tissue. Many people today do not realize, though, that there’s a modern version of Descartes’ mistaken dichotomy. Just as he erroneously believed the mind was distinct from the brain, some scientists have mistakenly conceived of the brain as distinct from the body. Much of the early research in artificial intelligence, for instance, modeled the brain as a computer, seeking to replicate mental life as information processing, converting inputs to outputs by logical rules. But even if such a machine could duplicate the circuitry of the brain, it would be missing essential peripheral input from an attached body. Actual intelligence requires both body and brain, as the neurologist Antonio Damasio pointed out in his 1994 book, Descartes’ Error. “Mental activity, from its simplest aspects to its most sublime, requires both brain and body proper,” Damasio wrote. © Society for Science & the Public 2000 - 2013.
Link ID: 20002 - Posted: 08.27.2014
By Michael Balter Humans are generally highly cooperative and often impressively altruistic, quicker than any other animal species to help out strangers in need. A new study suggests that our lineage got that way by adopting so-called cooperative breeding: the caring for infants not just by the mother, but also by other members of the family and sometimes even unrelated adults. In addition to helping us get along with others, the advance led to the development of language and complex civilizations, the authors say. Cooperative breeding is not unique to humans. Up to 10% of birds are cooperative breeders, as are meerkats and New World monkeys such as tamarins and marmosets. But our closest primate relatives, great apes such as chimpanzees, are not cooperative breeders. Because the human and chimpanzee lineages split between 5 million and 7 million years ago, and humans are the only apes that engage in cooperative breeding, researchers have puzzled over how this helping behavior might have evolved all over again on the human line. In the late 1990s, Sarah Blaffer Hrdy, now an anthropologist emeritus at the University of California, Davis, proposed the cooperative breeding hypothesis. According to her model, early in their evolution humans added cooperative breeding behaviors to their already existing advanced ape cognition, leading to a powerful combination of smarts and sociality that fueled even bigger brains, the evolution of language, and unprecedented levels of cooperation. Soon after Hrdy’s proposal, anthropologists Carel van Schaik and Judith Burkart of the University of Zurich in Switzerland began to test some of these ideas, demonstrating that cooperatively breeding primates like marmosets engaged in seemingly altruistic behavior by helping other marmosets get food with no immediate reward to themselves. © 2014 American Association for the Advancement of Science.
Daniel Cressey In many respects, the modern electronic cigarette is not so different from its leaf-and-paper predecessor. Take a drag from the mouthpiece and you get a genuine nicotine fix — albeit from a fluid wicked into the chamber of a battery-powered atomizer and vaporized by a heating element. Users exhale a half-convincing cloud of ‘smoke’, and many e-cigarettes even sport an LED at the tip that glows blue, green or classic red to better simulate the experience romanticized by countless writers and film-makers. The only things missing are the dozens of cancer-causing chemicals found in this digital wonder’s analogue forebears. E-cigarettes — also known as personal vaporizers or electronic nicotine-delivery systems among other names — are perhaps the most disruptive devices that public-health researchers working on tobacco control have ever faced. To some, they promise to snuff out a behaviour responsible for around 100 million deaths in the twentieth century. Others fear that they could perpetuate the habit, and undo decades of work. Now, a group once united against a common enemy is divided. “These devices have really polarized the tobacco-control community,” says Michael Siegel, a physician and tobacco researcher at Boston University School of Public Health in Massachusetts. “You now have two completely opposite extremes with almost no common ground between them.” Evidence is in short supply on both sides. Even when studies do appear, they are often furiously debated. And it is not just researchers who are attempting to catch up with the products now pouring out of Chinese factories: conventional tobacco companies are pushing into the nascent industry, and regulators are scrambling to work out what to do. © 2014 Nature Publishing Group
Keyword: Drug Abuse
Link ID: 20000 - Posted: 08.27.2014
|By Roni Jacobson Children are notoriously unreliable witnesses. Conventional wisdom holds that they frequently “remember” things that never happened. Yet a large body of research indicates that adults actually generate more false memories than children. Now a new study finds that children are just as susceptible to false memories as adults, if not more so. Scientists may simply have been using the wrong test. Traditionally, researchers have explored false memories by presenting test subjects with a list of associated words (for instance, “weep,” “sorrow” and “wet”) thematically related to a word not on the list (in this case, “cry”) and then asking them what words they remember. Adults typically mention the missing related word more often than children do—possibly because their life experiences enable them to draw associations between concepts more readily, says Henry Otgaar, a forensic psychologist at Maastricht University in the Netherlands and co-author of the new paper, published in May in the Journal of Experimental Child Psychology. Instead of using word lists to investigate false memories, Otgaar and his colleagues showed participants pictures of scenes, including a classroom, a funeral and a beach. After a short break, they asked those participants whether they remembered seeing certain objects in each picture. Across three experiments, seven- and eight-year-old children consistently reported seeing more objects that were not in the pictures than adults did. © 2014 Scientific American
By Priyanka Pulla Humans are late bloomers when compared with other primates—they spend almost twice as long in childhood and adolescence as chimps, gibbons, or macaques do. But why? One widely accepted but hard-to-test theory is that children’s brains consume so much energy that they divert glucose from the rest of the body, slowing growth. Now, a clever study of glucose uptake and body growth in children confirms this “expensive tissue” hypothesis. Previous studies have shown that our brains guzzle between 44% and 87% of the total energy consumed by our resting bodies during infancy and childhood. Could that be why we take so long to grow up? One way to find out is with more precise studies of brain metabolism throughout childhood, but those studies don’t exist yet. However, a new study published online today in the Proceedings of the National Academy of Sciences (PNAS) spliced together three older data sets to provide a test of this hypothesis. First, the researchers used a 1987 study of PET scans of 36 people between infancy and 30 years of age to estimate age trends in glucose uptake by three major sections of the brain. Then, to calculate how uptake varied for the entire brain, they combined that data with the brain volumes and ages of 400 individuals between 4.5 years of age and adulthood, gathered from a National Institutes of Health study and others. Finally, to link age and brain glucose uptake to body size, they used an age series of brain and body weights of 1000 individuals from birth to adulthood, gathered in 1978. © 2014 American Association for the Advancement of Science.
By DAVID LEVINE MONTREAL — When twins have similar personalities, is it mainly because they share so much genetic material or because their physical resemblance makes other people treat them alike? Most researchers believe the former, but the proposition has been hard to prove. So Nancy L. Segal, a psychologist who directs the Twin Studies Center at California State University, Fullerton, decided to test it — and enlisted an unlikely ally. He is François Brunelle, a photographer in Montreal who takes pictures of pairs of people who look alike but are not twins. Dr. Segal was sent to Mr. Brunelle’s website by a graduate student who knew of her research with twins. When she saw the photographs, she realized that the unrelated look-alikes would be ideal study subjects: She could compare their similarities and differences to those of actual twins. “I reasoned that if personality resides in the face,” she said, “then unrelated look-alikes should be as similar in behavior as identical twins reared apart. Alternatively, if personality traits are influenced by genetic factors, then unrelated look-alikes should show negligible personality similarity.” For 14 years, Mr. Brunelle, 64, has been working on a project he calls “I’m Not a Look-Alike!”: more than 200 black-and-white portraits of pairs who do, in fact, look startlingly alike. “I originally named the project ‘Look-Alikes,’ but I felt it was boring and some of the subjects did not feel they looked alike,” he said. “The new name gives ownership to the people I photographed and allows viewers of my website to decide for themselves if the people look alike or not.” Most come to him through social media links to his website. “It has taken on a life of its own,” he said. “I have heard from people in China — and even a man who has an uncle in Uzbekistan who is a dead ringer for former President George W. Bush.” © 2014 The New York Times Company
Keyword: Genes & Behavior
Link ID: 19997 - Posted: 08.26.2014
Erin Allday When a person suddenly loses the ability to speak or to understand what others are saying, the hardships that cascade from that loss can be overwhelming - from the seemingly trite to the devastatingly depressing. What hit Derrick Wong, 49, hardest was losing the ability to tell a joke. Ralph Soriano, 56, hates taking his car to the mechanic, knowing he will barely understand what's being said. "Girls," said Luke Waterman, 30, with a sigh. Flirting used to come easy. All three men - actually a pretty happy, hopeful gang for the most part - are longtime members of a group therapy program at the Aphasia Center of California, an Oakland nonprofit that offers treatment and ongoing education to people who have suffered communication disorders as a result of stroke or other brain injury. The nonprofit specializes in long-term therapy, an area of aphasia treatment that has taken off in the past few years. For many decades, doctors and speech pathologists assumed that patients had a window of six months to a year to recover language skills lost to a brain injury. Now, anecdotal reports and clinical research suggest that the window is much wider, and may even stay open a lifetime. "There is evidence that people can improve and regain skills, even years after a stroke," said Blair Menn, a speech language pathologist at Kaiser Permanente Medical Center in Redwood City. © 2014 Hearst Communications, Inc.
By NICHOLAS BAKALAR Childhood treatment with human growth hormone is strongly associated with an increased risk for stroke in early adulthood, a new study has found. The study adds evidence to previous reports suggesting an increased cardiac and cerebrovascular risk in children treated with growth hormone. Researchers studied 6,874 children, average age 11, who were small for their age but otherwise generally healthy and were treated with growth hormone from 1985 to 1996. They followed them to an average age of 28. There were 11 strokes in the group, four of them fatal. The analysis found that this was more than twice as many strokes as would be expected in a population this size, a statistically significant difference. The results, published online in the journal Neurology, were particularly striking for hemorrhagic stroke, the type caused by a ruptured blood vessel — there were more than seven times as many as would be expected. The authors acknowledged that they were unable to take into account some risk factors for stroke, such as family history and smoking. “Subjects on growth hormones should not panic on reading these results,” said the senior author, Dr. Joël Coste, a professor of biostatistics and epidemiology at the Hôtel Dieu hospital in Paris. “The doctor prescribing the hormone or the family doctor should be consulted and will be able to inform and advise patients.” © 2014 The New York Times Company
by Jennifer Viegas Spritzing dogs with a “pig perfume” helps prevent them from barking incessantly, jumping frantically on house guests and from engaging in other unwanted behaviors, according to new research. The eau de oink, aka “Boar Mate” or “Stop That,” was formulated by Texas Tech scientist John McGlone, who was looking for a way to curb his Cairn terrier Toto’s non-stop barking. One spritz of the pig perfume seemed to do the trick in an instant without harming his dog. “It was completely serendipitous,” McGlone, who works in the university’s Animal and Food Sciences department of the College of Agriculture and Natural Sciences, said in a press release. “One of the most difficult problems is that dogs bark a lot, and it’s one of the top reasons they are given back to shelters or pounds.” The key ingredient is androstenone, a steroid and pheromone produced by male pigs and released in their saliva and fat. When female pigs in heat detect it, they seem to find the male more attractive. (The females assume a mating stance.) One can imagine that dogs spritzed with the scent should not hang around amorous female pigs, but other than that, the product seems to work, according to McGlone. Androstenone smells pungent and is not very appealing to humans, but it can have an effect on mammal behavior, he said. © 2014 Discovery Communications, LLC.
Keyword: Chemical Senses (Smell & Taste)
Link ID: 19994 - Posted: 08.26.2014