Most Recent Links
Regina Nuzzo The gut may know better than the head whether a marriage will be smooth sailing or will hit the rocks after the honeymoon fades, according to research published today in Science. Researchers have long known that new love can be blind, and that those in the midst of it can harbour positive illusions about their sweetheart and their future. Studies show that new couples rate their partner particularly generously, forgetting his or her bad qualities, and generally view their relationship as more likely to succeed than average. But newlyweds are also under a lot of conscious pressure to be happy — or, at least, to think they are. Now a four-year study of 135 young couples has found that split-second, 'visceral' reactions about their partner are important, too. The results show that these automatic attitudes, which aren’t nearly as rosy as the more deliberate ones, can predict eventual changes in people’s marital happiness, perhaps even more so than the details that people consciously admit. The researchers, led by psychologist James McNulty of Florida State University in Tallahassee, tapped into these implicit attitudes by seeing how fast newlyweds could correctly classify positively and negatively themed words after being primed by a photo of their spouse for a fraction of a second. If seeing a blink-of-the-eye flash of a partner’s face conjures up immediate, positive gut-level associations, for example, the participant will be quicker to report that 'awesome' is a positive word and slower to report that 'awful' is a negative one. Researchers used the difference between these two reaction times as a measurement of a participant’s automatic reaction. © 2013 Nature Publishing Group
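The scoring the article describes reduces to simple arithmetic on reaction times, and a toy sketch may make it concrete (hypothetical code, not the study's materials; the function name and trial values are invented):

```python
# Illustrative sketch of an implicit-attitude score from a priming task.
# After a spouse-photo prime, faster classification of positive words and
# slower classification of negative words indicates a more positive
# gut-level association. All names and numbers here are hypothetical.

def implicit_attitude_score(positive_rts_ms, negative_rts_ms):
    """Mean reaction time to negative words minus mean reaction time to
    positive words, following spouse-photo primes.
    Higher score = more positive automatic attitude."""
    mean_pos = sum(positive_rts_ms) / len(positive_rts_ms)
    mean_neg = sum(negative_rts_ms) / len(negative_rts_ms)
    return mean_neg - mean_pos

# Example participant: positive words classified ~60 ms faster on average.
score = implicit_attitude_score([540, 560, 550], [600, 620, 610])
```

In practice such scores are computed per participant over many trials and then correlated with later outcome measures.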
by Laura Sanders If you own a television, a computer or a smartphone, you may have seen ads for Lumosity, the brain-training regimen that promises to sharpen your wits and improve your life. Take the bait, and you’ll first create a profile that includes your age, how much sleep you get, the time of day you’re most productive and other minutiae about your life and habits. After this digital debriefing, you can settle in and start playing games designed to train simple cognitive skills like arithmetic, concentration and short-term recall. The 50 million people signed up for Lumosity presumably have done so because they want to improve their brains, and these games promise an easy, fun way to do that. The program also offers metrics, allowing users to chart their progress over weeks, months and years. Written in these personal digital ledgers are clues that might help people optimize their performance. With careful recordkeeping, for example, you might discover that you hit peak brainpower after precisely one-and-a-half cups of medium roast coffee at 10:34 a.m. on Tuesdays. But you’re not the only one who has access to this information. With each click, your performance data will fly over the Internet into the eager hands of scientists at Lumos Labs, the San Francisco company that created Lumosity. Giant datasets like this one, created as a by-product of people paying money to learn about and improve themselves, will revolutionize research in human health and behavior, some scientists believe. Lumos Labs researchers hope that their brain-training data in particular could reveal deep truths about how the human mind works. They believe that they have a nimble, customizable and cheap way to discover things about the brain that would otherwise take huge amounts of money and many years to unearth with standard lab-based studies. Other researchers have also taken note, and some have gotten permission to use Lumosity data in their own research.
Some of these researchers are hunting for subtle signatures of Alzheimer’s in the data. Others are investigating more fundamental mysteries with cross-cultural studies of how the brain builds emotions and how memory works. © Society for Science & the Public 2000 - 2013.
By David Nutt Imagine being an astronomer in a world where the telescope was banned. This effectively happened in the 1600s when, for over 100 years, the Catholic Church prohibited access to knowledge of the heavens in a vain attempt to stop scientists proving that the earth was not the center of the universe. ‘Surely similar censorship could never happen today,’ I hear you say—but it does in relation to the use of drugs to study the brain. Scientists and doctors are banned from studying many hundreds of drugs because of outdated United Nations charters dating back to the 1960s and 1970s. Some of the banned drugs include cannabis, psychedelics and MDMA (now widely known as ecstasy). The most remarkable example is that of the psychedelic LSD, a drug accidentally discovered by the Swiss chemist Albert Hofmann while he was working for the pharmaceutical company Sandoz to find new treatments for migraine. Once the ability of LSD to alter brain function became apparent, Hofmann and others realized it had enormous potential as a tool to explore and treat the brain. LSD’s immediate effects on brain states offered unique insight into states such as consciousness and psychosis; the long-lasting changes in self-awareness it brought on were seen as potentially useful for conditions such as addiction. Sandoz saw LSD as so important that it chose to make the drug widely available to researchers in the 1950s. Researchers conducted over 1,000 studies at that time, most of which yielded significant results. However, once young Americans started using the drug recreationally—partly in protest against the Vietnam War—it was banned, both there and all over the world. Since then, research into the science behind the drug and its effects on the brain has come to a halt. Yet, we have begun to rectify the situation using the shorter-acting psychedelic psilocybin (the active compound in magic mushrooms).
In just a couple of experiments, scientists have discovered remarkable and unexpected effects on the brain, leading them to start a clinical trial in depression. Other therapeutic targets for psychedelics are cluster headaches, OCD and addiction. © 2013 Scientific American
At the Society for Neuroscience meeting earlier this month in San Diego, California, Science sat down with Geoffrey Ling, deputy director of the Defense Sciences Office at the Defense Advanced Research Projects Agency (DARPA), to discuss the agency’s plans for the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a neuroscience research effort put forth by President Barack Obama earlier this year. So far, DARPA has released two calls for grant applications, with at least one more likely: The first, called SUBNETS (Systems-Based Neurotechnology for Emerging Therapies), asks researchers to develop novel, wireless devices, such as deep brain stimulators, that can cure neurological disorders such as posttraumatic stress (PTS), major depression, and chronic pain. The second, RAM (Restoring Active Memory), calls for a separate wireless device that repairs brain damage and restores memory loss. Below is an extended version of a Q&A that appears in the 29 November issue of Science. Q: Why did DARPA get involved in the BRAIN project? G.L.: It’s really focused on our injured warfighters, but it has a use for civilians who have stress disorders and civilians who also have memory disorders from dementia and the like. But at the end of the day, it is still meeting [President Obama’s] directive. Of all the things he could have chosen—global warming, alternative fuels—he chose this, so in my mind the neuroscience community should be as excited as all get-out. Q: Why does SUBNETS focus on deep brain stimulation (DBS)? G.L.: We’ve opened the possibility of using DBS but we haven’t exclusively said that. We’re challenging people to go after neuropsychiatric disorders like PTS [and] depression. We’re challenging the community to come up with something in 5 years that’s clinically feasible.
DBS is an area that has really been traditionally underfunded, so we thought what the heck, let’s give it a go—in this new BRAIN Initiative the whole idea is to go after the things that there aren’t 400 R01 grants for—and let’s be bold, and boy, if it works, fabulous. © 2013 American Association for the Advancement of Science
By Tanya Lewis To understand the human brain, scientists must start small, and what better place than the mind of a worm? The roundworm Caenorhabditis elegans is one of biology's most widely studied organisms, and it's the first to have the complete wiring diagram, or connectome, of its nervous system mapped out. Knowing the structure of the animal's connectome will help explain its behavior, and could lead to insights about the brains of other organisms, scientists say. "You can't understand the brain without understanding the connectome," Scott Emmons, a molecular geneticist at Albert Einstein College of Medicine of Yeshiva University in New York, said in a talk earlier this month at the annual meeting of the Society for Neuroscience in San Diego. In 1963, South African biologist Sydney Brenner of the University of Cambridge decided to use C. elegans as a model organism for developmental biology. He chose the roundworm because it has a simple nervous system, it's easy to grow in a lab and its genetics are relatively straightforward. C. elegans was the first multicellular organism to have its genome sequenced, in 1998. Brenner knew that to understand how genes affect behavior, "you would have to know the structure of the nervous system," Emmons told LiveScience.
By Dwayne Godwin and Jorge Cham Dwayne Godwin is a neuroscientist at the Wake Forest University School of Medicine. His Twitter handle is @brainyacts. Jorge Cham draws the comic strip Piled Higher and Deeper at www.phdcomics.com. © 2013 Scientific American
Keyword: Brain imaging
Link ID: 18980 - Posted: 11.30.2013
by Bethany Brookshire Most people take it as a given that distraction is bad for — oh, hey, a squirrel! Where was I? … Right. Most people take it as a given that distraction is bad for memory. And most of the time, it is. But under certain conditions, the right kind of distraction might actually help you remember. Nathan Cashdollar of University College London and colleagues were looking at the effects of distraction on memory in memory-impaired patients. They were specifically looking at distractions that were totally off-topic from a particular task, and how those distractions affected memory performance. Their results were published November 27 in the Journal of Neuroscience. The researchers worked with a small group of people with severe epilepsy who had lesions in the hippocampus, and therefore had memory problems. They compared them to groups of people with epilepsy without lesions, young healthy people, and older healthy people that were matched to the epilepsy group. Each of the participants went through a memory task called “delayed match-to-sample.” For this task, participants are given a set of samples or pictures, usually things like nature scenes. Then there’s a delay, from one second at the beginning of the test on up to nearly a minute. Then participants are shown another nature scene. Is it one they have seen before? Yes or no? The task starts out simply, with only one nature scene to match, but soon becomes harder, with up to five pictures to remember, and a five-second delay. People with memory impairments did a lot worse when they had more items to remember (called high cognitive load), falling off very steeply in their performance. Normal controls did better, still remaining fairly accurate, but making mistakes once in a while. © Society for Science & the Public 2000 - 2013.
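The trial structure described above can be sketched in a few lines (an illustrative toy, not the study's actual software; the function name, image pool, and parameters are all invented):

```python
# Toy sketch of a "delayed match-to-sample" trial: show a study set of
# `load` pictures, wait out a delay, then probe with a picture that either
# was (answer True) or was not (answer False) in the set.
import random

def run_trial(image_pool, load, seen_probe_prob=0.5):
    """Return (probe, correct_answer) for one trial with `load` samples."""
    samples = random.sample(image_pool, load)
    if random.random() < seen_probe_prob:
        probe, answer = random.choice(samples), True      # previously seen
    else:
        unseen = [im for im in image_pool if im not in samples]
        probe, answer = random.choice(unseen), False      # novel picture
    return probe, answer

# Difficulty rises with cognitive load: one sample early on, up to five later.
pool = [f"scene_{i}" for i in range(20)]
probe, answer = run_trial(pool, load=5)
```

The delay between study set and probe (one second up to nearly a minute in the article's description) would simply be a pause inserted before the probe is shown.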
By Emilie Reas Did you make it to work on time this morning? Go ahead and thank the traffic gods, but also take a moment to thank your brain. The brain’s impressively accurate internal clock allows us to detect the passage of time, a skill essential for many critical daily functions. Without the ability to track elapsed time, our morning shower could continue indefinitely. Without that nagging feeling to remind us we’ve been driving too long, we might easily miss our exit. But how does the brain generate this finely tuned mental clock? Neuroscientists believe that we have distinct neural systems for processing different types of time, for example, to maintain a circadian rhythm, to control the timing of fine body movements, and for conscious awareness of time passage. Until recently, most neuroscientists believed that this latter type of temporal processing – the kind that alerts you when you’ve lingered over breakfast for too long – is supported by a single brain system. However, emerging research indicates that the model of a single neural clock might be too simplistic. A new study, recently published in the Journal of Neuroscience by neuroscientists at the University of California, Irvine, reveals that the brain may in fact have a second method for sensing elapsed time. What’s more, the authors propose that this second internal clock not only works in parallel with our primary neural clock, but may even compete with it. Past research suggested that a brain region called the striatum lies at the heart of our central inner clock, working with the brain’s surrounding cortex to integrate temporal information. For example, the striatum becomes active when people pay attention to how much time has passed, and individuals with Parkinson’s Disease, a neurodegenerative disorder that disrupts input to the striatum, have trouble telling time. © 2013 Scientific American
By Neuroskeptic Claims that children with autism have abnormal brain white matter connections may just reflect the fact that they move about more during their MRI scans. So say a team of Harvard and MIT neuroscientists, including Nancy “Voodoo Correlations” Kanwisher, in a new paper: Spurious group differences due to head motion in a diffusion MRI study. Essentially, the authors show how head movement during a diffusion tensor imaging (DTI) scan causes apparent differences in the integrity of white matter tracts, like these ones: In comparisons of two randomized groups of healthy children – in whom no white matter differences ought to appear – spurious effects were seen whenever one group moved more than the other: As for autism, the authors found that kids with autism moved more, on average, than controls, and that matching the two groups by motion reduced the magnitude of the group differences in white matter (though many remained significant). Technically, the motion-related differences manifested as increases in RD and reductions in FA; these were localized: The pathways that exhibited the most substantial motion-induced group differences in our data were the corpus callosum and the cingulum bundle. Perhaps this is related to their proximity to non-brain voxels (such as the ventricles) … deeper brain areas appear to be more affected than more superficial ones, thus distance from the head coils may also be a factor. The good news is that there’s a simple fix: entering the motion parameters, extracted from the DTI data itself, as a covariate in the analysis. The authors show that this is extremely effective. The bad news is that most researchers don’t do this.
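The fix described above, entering motion as a covariate, amounts to adding a column to the regression design matrix so the group effect is estimated with motion held constant. A minimal synthetic sketch (not the paper's code; every number here is invented, and in a real analysis the motion estimates would come from the DTI registration itself):

```python
# Synthetic demonstration: one group moves more, motion lowers FA, and
# there is NO true group effect. Regressing FA on group alone would show
# a spurious difference; adding motion as a covariate adjusts for it.
import numpy as np

rng = np.random.default_rng(0)
n = 40
group = np.repeat([0, 1], n // 2)                     # 0 = control, 1 = patient
motion = 0.5 * group + rng.normal(0, 0.1, n)          # patients move more
fa = 0.45 - 0.05 * motion + rng.normal(0, 0.01, n)    # motion lowers FA only

# Design matrix: intercept, group membership, motion covariate
X = np.column_stack([np.ones(n), group, motion])
beta, *_ = np.linalg.lstsq(X, fa, rcond=None)
# beta[1] is the group effect adjusted for motion; without the motion
# column, the group term would absorb the motion-driven FA difference.
```

With the covariate included, the estimated group coefficient stays near zero, which is the behavior the authors report makes the correction "extremely effective".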
Peter Hildebrand Neuroscience is a rapidly growing field, but one that is usually thought to be too complex and expensive for average Americans to participate in directly. Now, an explosion of cheap scientific devices and online tutorials is on the verge of changing that. This change could have exciting implications for our future understanding of the brain. From 1995 to 2005, the amount of money spent on neuroscience research doubled. A lot of that research used medical devices, like MRI and CT Scan machines, and drugs that everyday citizens don’t have access to. Even in colleges, experience with powerful research equipment is reserved for upperclassmen and graduate students. The lowlier castes can work with models or dissect animal brains, but as scientist and engineer Greg Gage points out in this TED video, the brain isn’t like the heart or the lungs. You can’t tell how it works just by looking at it. Gage is calling for “neuro-revolution,” in which scientists and inventors come together to put the tools for learning neuroscience into the hands of the public. He may be onto something too, because those tools are looking more accessible than ever before. One of the best-publicized examples of this punk rock revolution has been Gage’s own “SpikerBox,” which he co-developed with Tim Marzullo. Roughly the size of your fist, the SpikerBox is a small collection of electronic components bolted between two squares of orange plastic. Coming out of one end are two pins that you can use to record the electrical activity of nerve cells in, say, a recently severed cockroach leg. There’s also a port that allows you to attach the box to a smartphone or tablet, and watch the spikes of activity as the neurons are stimulated. © 2013 Salon Media Group, Inc.
Keyword: Brain imaging
Link ID: 18976 - Posted: 11.26.2013
Scientists at the National Institutes of Health have used RNA interference (RNAi) technology to reveal dozens of genes that may represent new therapeutic targets for treating Parkinson’s disease. The findings also may be relevant to several diseases caused by damage to mitochondria, the biological power plants found in cells throughout the body. “We discovered a network of genes that may regulate the disposal of dysfunctional mitochondria, opening the door to new drug targets for Parkinson’s disease and other disorders,” said Richard Youle, Ph.D., an investigator at the National Institute of Neurological Disorders and Stroke (NINDS) and a leader of the study. The findings were published online in Nature. Dr. Youle collaborated with researchers from the National Center for Advancing Translational Sciences (NCATS). Mitochondria are tubular structures with rounded ends that use oxygen to convert many chemical fuels into adenosine triphosphate, the main energy source that powers cells. Multiple neurological disorders are linked to genes that help regulate the health of mitochondria, including Parkinson’s, and movement diseases such as Charcot-Marie-Tooth syndrome and the ataxias. Some cases of Parkinson’s disease have been linked to mutations in the gene that codes for parkin, a protein that normally roams inside cells, and tags damaged mitochondria as waste. The damaged mitochondria are then degraded by cells’ lysosomes, which serve as a biological trash disposal system. Known mutations in parkin prevent tagging, resulting in accumulation of unhealthy mitochondria in the body.
Ed Yong A large international group set up to test the reliability of psychology experiments has successfully reproduced the results of 10 out of 13 past experiments. The consortium also found that two effects could not be reproduced. Psychology has been buffeted in recent years by mounting concern over the reliability of its results, after repeated failures to replicate classic studies. A failure to replicate could mean that the original study was flawed, the new experiment was poorly done or the effect under scrutiny varies between settings or groups of people. To tackle this 'replicability crisis', 36 research groups formed the Many Labs Replication Project to repeat 13 psychological studies. The consortium combined tests from earlier experiments into a single questionnaire — meant to take 15 minutes to complete — and delivered it to 6,344 volunteers from 12 countries. The team chose a mix of effects that represent the diversity of psychological science, from classic experiments that have been repeatedly replicated to contemporary ones that have not. Ten of the effects were consistently replicated across different samples. These included classic results from economics Nobel laureate and psychologist Daniel Kahneman at Princeton University in New Jersey, such as gain-versus-loss framing, in which people are more prepared to take risks to avoid losses, rather than make gains; and anchoring, an effect in which the first piece of information a person receives can introduce bias to later decisions. The team even showed that anchoring is substantially more powerful than Kahneman’s original study suggested. © 2013 Nature Publishing Group
Link ID: 18974 - Posted: 11.26.2013
By Jill U. Adams, Every morning I am greeted by Facebook friends complaining of sleepless nights or awakenings. I know the feeling — as do many other Americans. In a 2005 survey of 1,506 Americans by the National Sleep Foundation, 54 percent reported at least one symptom of insomnia — difficulty falling asleep, waking a lot during the night, waking up too early or waking up feeling unrefreshed — at least a few nights a week over the previous year. Thirty-three percent said they had experienced symptoms almost every night. If insomnia visited me that often, I’d be tempted to pick up something at the pharmacy — something easy, something safe, something that didn’t involve making a doctor’s appointment. Indeed, 10 to 20 percent of Americans take over-the-counter sleep aids each year, according to the American Academy of Sleep Medicine. The way they’re marketed, over-the-counter sleep aids sound very appealing: The new product ZzzQuil (yes, from the maker of NyQuil) promises “a beautiful night’s sleep;” an ad says you’ll “fall asleep faster and stay asleep longer” after using Unisom. Companies marketing the herb valerian root and the hormone melatonin as over-the-counter sleep aids make similar claims. But what’s the evidence that supports these claims? “It’s quite lean,” says Andrew Krystal, who directs the sleep research program at Duke University. Over-the-counter sleep aids work differently from prescription drugs for insomnia. Most are simply antihistamines in sheep’s clothing. (Yes, that’s a joke.) The majority of them — ZzzQuil, TylenolPM and Unisom SleepGels — contain diphenhydramine as the active ingredient, the same compound in Benadryl. (Unisom SleepTabs use doxylamine, another antihistamine.) © 1996-2013 The Washington Post
Link ID: 18973 - Posted: 11.26.2013
By RONI CARYN RABIN Women are more likely than men to die after a heart attack, and some researchers have suggested a reason: Doctors may be misdiagnosing women more often because their symptoms differ from those experienced by men. But a study published Monday indicates that too much has been made of gender differences in chest pain, the hallmark symptom of heart disease. Although the researchers found some distinctions, no pattern was clearly more characteristic of women or could be used to improve heart attack diagnosis in women, the authors concluded. “We should stop treating women differently at the emergency room when they present with chest pain and discomfort,” said Dr. Maria Rubini Gimenez, a cardiologist at University Hospital Basel and lead author of the new study, published in JAMA Internal Medicine. Instead, she said, all patients with acute chest pain must be evaluated for heart attack with appropriate diagnostics, including an electrocardiogram and blood tests. Roughly 80 percent of people who have chest pain and discomfort are suffering from indigestion, acid reflux or another relatively benign condition, said Dr. John G. Canto, director of the chest pain center at Lakeland Regional Medical Center in Lakeland, Fla., who has researched heart attack diagnosis. “The trick is, how do you figure out the 15 to 20 percent actually having a heart attack?” he said. The new research confirms “that there is a lot of overlap in symptoms between patients who are having a heart attack and those who aren’t, and there is a lot of overlap in symptoms between men and women.” The new study examined 2,475 patients, including 796 women, who reported to emergency rooms at nine hospitals in Switzerland, Spain and Italy complaining of acute chest pain between April 21, 2006, and Aug. 12, 2012. Copyright 2013 The New York Times Company
A whiff of oxytocin may help love not fade away. Researchers asked 20 unmarried men in multiyear relationships to rank the attractiveness of pictures of their partner, acquaintances, and strangers. When the men received a nasal spray of oxytocin—which is released by the body during sexual arousal—they rated their partners more highly but not the other women. MRI scans show that after an oxytocin dose, areas of the brain associated with rewards, which also drive drug addiction, were more active when the men saw pictures of their partner, the researchers report online today in the Proceedings of the National Academy of Sciences. The finding could help explain the biological roots of monogamy in humans: Being in a long-term relationship raises a person's oxytocin levels, which in turn increase the psychological reward of spending more time with that person. The cycle, the team concluded, could literally lead to an addiction to one’s lover. © 2013 American Association for the Advancement of Science
By James Gallagher Health and science reporter, BBC News Steroids given to help premature babies develop may also slightly increase the risk of mental health disorders, say researchers. The drugs are often given to pregnant mothers at risk of a premature birth to help the baby's lungs prepare for life outside the womb. The study, in the journal PLoS One, showed there was a higher risk of attention disorders at age eight. The charity Bliss said it reinforced the need for regular health checks. Being born too soon can lead to long-term health problems and the earlier the birth the greater the problems. One immediate issue is the baby's lungs being unprepared to breathe air. Steroids can help accelerate lung development. However, the study by researchers at Imperial College London and the University of Oulu in Finland showed the drugs may also be affecting the developing brain. They compared what happened to 37 premature children whose mothers were injected with steroids with 185 premature children, of the same weight and gestational age, who were not exposed to the extra dose of steroid. When the children were followed to the age of eight, there was a higher incidence of attention deficit hyperactivity disorder. No difference could be detected at age 16, but this may have been due to the small size of the study. BBC © 2013
By MARY LOU JEPSEN IN my early 30s, for a few months, I altered my body chemistry and hormones so that I was closer to a man in his early 20s. I was blown away by how dramatically my thoughts changed. I was angry almost all the time, thought about sex constantly, and assumed I was the smartest person in the entire world. Over the years I had met guys rather like this. I was not experimenting with hormone levels out of idle curiosity or in some kind of quirky science experiment. I was on hormone treatments because I’d had a tumor removed along with part of my pituitary gland, which makes key hormones the body needs to function. This long journey may have started as early as 1978, when I was 13. I spent a summer in intensive care with an unknown disease. After that summer, I never thought I would live a long life. So I wanted to live, to do interesting, fascinating work in the limited time I thought I had left. I took on the math-intensive art form of holography, and in my early 20s traveled the world, living on university fellowships to pursue this esoteric craft. I didn’t date much, really — perhaps because I didn’t have many hormones, though I didn’t know that at the time. I worked as an artist, played in a band, met Andy Warhol, Christo, Lou Reed and David Byrne. I had fun. But the gravity of my illness grew in the 1990s. The growth that shut down my pituitary gland’s ability to produce hormones did so insidiously over many years. By my early 20s it was, I suspect in retrospect, causing misdiagnosis of symptoms that were most likely caused by lack of hormones like cortisol. No diagnosis was found, despite the efforts of many doctors. I was a doctoral student in electrical engineering at an Ivy League school, but was growing progressively worse. I routinely slept about 20 hours a day, lived with a constant blistering headache and frequent vomiting, and was periodically wheelchair-bound. 
Large sections of my skin cycled through a rainbow of colors and sores; half of my face wouldn’t move, as if Novocain had been applied. I drooled. Worse: I felt stupid. I couldn’t subtract anymore. I couldn’t make a to-do list, let alone accomplish items on one. I recognized that I wasn’t capable of continuing in graduate school. Utterly defeated, I filled out the paperwork to drop out. © 2013 The New York Times Company
By NATASHA SINGER One afternoon a few months ago, a 45-year-old sales representative named Mike called “The Dr. Harry Fisch Show,” a weekly men’s health program on the Howard Stern channel on Sirius XM Radio, where no male medical or sexual issue goes unexplored. “I feel like a 70-year-old man in a 45-year-old body,” Mike, from Vancouver, British Columbia, told Dr. Fisch on the live broadcast. “I want to feel good. I don’t want to feel tired all day.” A regular listener, Mike had heard Dr. Fisch, a Park Avenue urologist and fertility specialist, talk about a phenomenon called “low testosterone” or “low T.” Dr. Fisch likes to say that a man’s testosterone level is “the dipstick” of his health; he regularly appears on programs like “CBS This Morning” to talk about the malaise that may coincide with low testosterone. He is also the medical expert featured on IsItLowT.com, an informational website sponsored by AbbVie, the drug maker behind AndroGel, the best-selling prescription testosterone gel. Like many men who have seen that site or commercials or online quizzes about “low T,” Mike suspected that diminished testosterone was the cause of his lethargy. And he hoped, as the marketing campaigns seem to suggest, that taking a prescription testosterone drug would make him feel more energetic. “I took your advice and I went and got my testosterone checked,” Mike told Dr. Fisch. Mike’s own physician, he related, told him that his testosterone “was a little low” and prescribed a testosterone medication. Mike also said he had diabetes and high blood pressure and was 40 pounds overweight. Dr. Fisch explained that conditions like obesity might be accompanied by decreased testosterone and energy, and he urged Mike to exercise more and to lose weight. But if Mike had trouble overhauling his diet and exercise habits, Dr. Fisch said, taking testosterone might give him the boost he needed to do so. “If it gives you more energy to exercise,” Dr. Fisch said of the testosterone drug, “I’m all for it.” © 2013 The New York Times Company
By Janet Davison, CBC News If headlines in the past few weeks are to be believed, a "Flesh-eating 'zombie' drug" that could devour users "from the inside out" is finding its way onto American streets. Then came reports suggesting that "krokodil," a cheap and highly addictive homemade substitute for heroin that surfaced first in Russia about 10 years ago, had appeared in Ontario's Niagara region. But so far, neither the U.S. Drug Enforcement Administration nor Health Canada has identified krokodil, also known as desomorphine, in any samples they've analyzed since the DEA found two instances of it in 2004. And police in Niagara are now saying the reported cases of the drug — an ugly concoction of codeine mixed with common products such as gasoline, lighter fluid, paint thinner or industrial cleaning oil — haven't been medically confirmed. Krokodil is named for the Russian word for crocodile and its tendency to turn users' skin rough and scaly. The injectable opioid can cause brain damage and severe tissue damage, sometimes leading to gangrene, amputations and even death. It has also been linked to pneumonia, blood poisoning, meningitis, liver and kidney problems, rotting gums and bone infections. The horrific health problems the drug has caused among the well over 100,000 users in Russia and Ukraine have been well documented by researchers in publications such as the International Journal of Drug Policy. But so far there is no solid, official proof that krokodil has reached Canada. The recent news reports about the drug coupled with the lack of hard evidence to back them up underline how difficult it is for health and law enforcement officials to keep up with the evolving mix of street drugs. © CBC 2013
Keyword: Drug Abuse
Link ID: 18967 - Posted: 11.25.2013
by Erika Engelhaupt If you had to have a prosthetic hand, would you want it to look like a real hand? Or would you prefer a gleaming metallic number, something that doesn’t even try to look human? A new study looks at one of the issues that prosthetic designers and wearers face in making this decision: the creepy factor. People tend to get creeped out by robots or prosthetic devices that look almost, but not quite, human. So Ellen Poliakoff and colleagues at the University of Manchester in England had people rate the eeriness of various prosthetic hands. Forty-three volunteers looked at photographs of prosthetic and real hands. They rated both how humanlike (realistic) the hands were and how eerie they were, defined as “mysterious, strange, or unexpected as to send a chill up the spine.” Real human hands were rated both the most humanlike and the least eerie (a good thing for humans). Metal hands that were clearly mechanical were rated the least humanlike, but less eerie overall than prosthetic hands made to look like real hands, the team reports in the latest issue of Perception. The realistic prosthetics, like the rubber hand shown above, fell into what's known as the uncanny valley. That term, invented by roboticist Masahiro Mori in 1970, describes how robots become unnerving as they come to look more humanlike. The superrealistic Geminoid DK robot and the animated characters in the movie The Polar Express suffer from this problem. They look almost human, but not quite, and this mismatch between expectation and reality is one of the proposed explanations for the uncanny valley. In particular, if something looks like a human but doesn’t quite move like one, it’s often considered eerie. © Society for Science & the Public 2000 - 2013