Most Recent Links
By JONATHAN MAHLER

The mother of the bride wore white and gold. Or was it blue and black? From a photograph of the dress the bride posted online, there was broad disagreement.

A few days after the wedding last weekend on the Scottish island of Colonsay, a member of the wedding band was so frustrated by the lack of consensus that she posted a picture of the dress on Tumblr and asked her followers for feedback. “I was just looking for an answer because it was messing with my head,” said Caitlin McNeill, a 21-year-old singer and guitarist.

Within a half-hour, her post attracted some 500 likes and shares. The photo soon migrated to BuzzFeed and Facebook and Twitter, setting off a social media conflagration that few were able to resist. As the debate caught fire across the Internet — even scientists could not agree on what was causing the discrepancy — media companies rushed to get articles online. Less than a half-hour after Ms. McNeill’s original Tumblr post, BuzzFeed posted a poll: “What Colors Are This Dress?” As of Friday afternoon, it had been viewed more than 28 million times. (White and gold was winning handily.) At its peak, more than 670,000 people were simultaneously viewing BuzzFeed’s post. Between that and the rest of BuzzFeed’s blanket coverage of the dress Thursday night, the site easily smashed its previous records for traffic. So did Tumblr.

Everyone, it seems, had an opinion. And everyone was convinced that he, or she, was right. “I don’t understand this odd dress debate and I feel like it’s a trick somehow,” Taylor Swift wrote on Twitter. “PS it’s OBVIOUSLY BLUE AND BLACK.” “IT’S A BLUE AND BLACK DRESS!” wrote Mindy Kaling. “ARE YOU KIDDING ME,” she continued, including an unprintable modifier for emphasis.

© 2015 The New York Times Company
Link ID: 20635 - Posted: 02.28.2015
By Pascal Wallisch

If you are encountering The Dress for the first time, you might first want to click here to see what all the fuss was about.

The brain lives in a bony shell. The light-tight nature of the skull renders this home a place of complete darkness. So the brain relies on the eyes to supply an image of the outside world, but there are many processing steps between the translation of light energy into electrical impulses that happens in the eye and the neural activity that corresponds to a conscious perception of the outside world. In other words, the brain is playing a game of telephone and—contrary to popular belief—our perception corresponds to the brain’s best guess of what is going on in the outside world, not necessarily to the way things actually are. This has been recognized for at least 150 years, since the time of Hermann von Helmholtz. This week, it was recognized by masses of people on the Internet, who have been debating furiously over what should be a simple question: What color is this dress?

Many parts of the brain contribute to any given perception, and it should not be surprising that different people can reconstruct the outside world in different ways. This is true for many perceptual qualities, including form and motion. While this guessing game is going on all the time, it is possible to demonstrate it clearly by generating impoverished stimulus displays that are consistent with different, mutually exclusive interpretations. That means the brain will not necessarily commit to one interpretation, but will switch back and forth. These are known as ambiguous or bi-stable stimuli, and they illustrate the point that the brain is ultimately only guessing when perceiving the world. It usually just has more information to disambiguate the interpretation.

© 2014 The Slate Group LLC. All rights reserved.
Link ID: 20634 - Posted: 02.28.2015
Carmen Fishwick

It’s not every day that fashion and science come together to polarise the world. Tumblr blogger Caitlin posted a photograph of what is now known as #TheDress – a layered lace dress and jacket that was causing much distress among her friends. The distress spread rapidly across social media, with Taylor Swift admitting she was “confused and scared”. The internet is now made up of people firmly in two camps: the white and gold, and the blue and black – with each thinking the other is completely wrong.

But Ron Chrisley, director of the Centre for Research in Cognitive Science at the University of Sussex, believes that the problem mainly lies in the fact that everyone has forgotten we are dealing with a colour illusion. Chrisley said: “The first step in reaching a truce in the dress war is to construct a demonstration that can show to the white-and-gold crowd how the very same dress can also look blue and black under different conditions.” The image below, tweeted by @namin3485, demonstrates that even though the right-hand side of each image is the same, in the context of the two different left halves, the right is interpreted as being either white and gold, or blue and black.

So does this mean people who are less self-confident are more likely to be able to see both, at least eventually? Chrisley said: “My guess is it’s not to do with self-confidence. It’s a perceptual issue. I could imagine someone that’s open minded could still see it only one way. This is below the level of us trying to understand other people’s views. It’s more physiological than that.”

© 2015 Guardian News and Media Limited
Link ID: 20633 - Posted: 02.28.2015
By Adam Rogers The fact that a single image could polarize the entire Internet into two aggressive camps is, let’s face it, just another Thursday. But for the past half-day, people across social media have been arguing about whether a picture depicts a perfectly nice bodycon dress as blue with black lace fringe or white with gold lace fringe. And neither side will budge. This fight is about more than just social media—it’s about primal biology and the way human eyes and brains have evolved to see color in a sunlit world. Light enters the eye through the lens—different wavelengths corresponding to different colors. The light hits the retina in the back of the eye where pigments fire up neural connections to the visual cortex, the part of the brain that processes those signals into an image. Critically, though, that first burst of light is made of whatever wavelengths are illuminating the world, reflecting off whatever you’re looking at. Without you having to worry about it, your brain figures out what color light is bouncing off the thing your eyes are looking at, and essentially subtracts that color from the “real” color of the object. “Our visual system is supposed to throw away information about the illuminant and extract information about the actual reflectance,” says Jay Neitz, a neuroscientist at the University of Washington. “But I’ve studied individual differences in color vision for 30 years, and this is one of the biggest individual differences I’ve ever seen.” (Neitz sees white-and-gold.) Usually that system works just fine. This image, though, hits some kind of perceptual boundary. That might be because of how people are wired. Human beings evolved to see in daylight, but daylight changes color. WIRED.com © 2015 Condé Nast
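The "subtract the illuminant" operation Rogers describes can be sketched as a toy white-balance computation. The sketch below uses a gray-world estimate of the illuminant, which is an illustrative assumption of ours, not a model of the visual system or of the specific dress photo:

```python
# A minimal sketch of "discounting the illuminant" (von Kries-style white
# balance). We estimate the illuminant as the scene's average color (the
# "gray world" assumption) and divide it out, approximating the surface's
# reflectance. This is an illustration of the idea, not the brain's algorithm.

def discount_illuminant(pixels):
    """pixels: list of (r, g, b) tuples in 0-255. Returns corrected pixels."""
    n = len(pixels)
    # Gray-world estimate of the illuminant: the mean color of the scene.
    illum = [sum(p[c] for p in pixels) / n for c in range(3)]
    mean = sum(illum) / 3
    # Scale each channel so the estimated illuminant becomes neutral gray.
    gains = [mean / max(ch, 1e-9) for ch in illum]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A scene with a bluish cast: after correction, pixels move toward neutral.
scene = [(100, 110, 160), (50, 55, 80), (200, 220, 255)]
print(discount_illuminant(scene))
```

When the estimate of the illuminant is ambiguous, as in the dress photo, two viewers can effectively divide out different illuminants and arrive at different surface colors from the same pixels.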
Distinct changes in the immune systems of patients with ME or chronic fatigue syndrome have been found, say scientists. Increased levels of immune molecules called cytokines were found in people during the early stages of the disease, a Columbia University study reported. It said the findings could help improve diagnosis and treatments. UK experts said further refined research was now needed to confirm the results.

People with ME (myalgic encephalopathy) or CFS (chronic fatigue syndrome) suffer from exhaustion that affects everyday life and does not go away with sleep or rest. They can also have muscle pain and difficulty concentrating. ME can also cause long-term illness and disability, although many people improve over time. It is estimated that around 250,000 people in the UK have the disease.

Disease pattern

The US research team, who published their findings in the journal Science Advances, tested blood samples from nearly 300 ME patients and around 350 healthy people. They found specific patterns of immune molecules in patients who had the disease for up to three years. These patients had higher levels of cytokines, particularly one called interferon gamma, which has been linked to the fatigue that follows many viral infections. Healthy patients and those who had the disease for longer than three years did not show the same pattern. Lead author Dr Mady Hornig said this was down to the way viral infections could disrupt the immune system. "It appears that ME/CFS patients are flush with cytokines until around the three-year mark, at which point the immune system shows evidence of exhaustion and cytokine levels drop."
Fatty liver disease, or the buildup of fat in the liver, and sleep apnea, marked by snoring and interruptions of breathing at night, share some things in common. The two conditions frequently strike people who are overweight or obese. Each afflicts tens of millions of Americans, and often the diseases go undiagnosed. Researchers used to believe that sleep apnea and fatty liver were essentially unrelated, even though they occur together in many patients. But now studies suggest that the two may be strongly linked, with sleep apnea directly exacerbating fatty liver. In a study published last year in the journal Chest, researchers looked at 226 obese middle-aged men and women who were referred to a clinic because they were suspected of having sleep apnea. They found that two-thirds had fatty liver disease, and that the severity of the disease increased with the severity of their sleep apnea. A study last year in The Journal of Pediatrics found a similar relationship in children. The researchers identified sleep apnea in 60 percent of young subjects with fatty liver disease. The worse their apnea episodes, the more likely they were to have fibrosis, or scarring of the liver. Though it is still somewhat unclear, some doctors suspect that the loss of oxygen from sleep apnea may increase chronic inflammation, which worsens fatty liver. Although fat in the liver can be innocuous at first, as inflammation sets in, the fat turns to scar tissue, and that can lead to liver failure. © 2015 The New York Times Company
Link ID: 20630 - Posted: 02.28.2015
By Nicholas Weiler The grizzled wolf stalks from her rival’s den, her mouth caked with blood of the pups she has just killed. It’s a brutal form of birth control, but only the pack leader is allowed to keep her young. For her, this is a selfish strategy—only her pups will carry on the future of the pack. But it may also help the group keep its own numbers in check and avoid outstripping its resources. A new survey of mammalian carnivores worldwide proposes that many large predators have the ability to limit their own numbers. The results, though preliminary, could help explain how top predators keep the food chains beneath them in balance. Researchers often assume that predator numbers grow and shrink based on their food supply, says evolutionary biologist Blaire Van Valkenburgh of the University of California, Los Angeles, who was not involved in the new study. But several recent examples, including an analysis of the resurgent wolves of Yellowstone National Park, revealed that some large predators keep their own numbers in check. The new paper is the first to bring all the evidence together, Van Valkenburgh says, and presents a “convincing correlation.” Hunting and habitat loss are killing off big carnivores around the world, just as ecologists are discovering how important they are for keeping ecosystems in balance. Mountain lions sustain woodlands by hunting deer that would otherwise graze the landscape bare. Coyotes protect scrub-dwelling birds by keeping raccoons and foxes in line. Where top carnivores disappear, these smaller predators often explode in numbers, with potentially disastrous consequences for small birds and mammals. © 2015 American Association for the Advancement of Science
By Elizabeth Pennisi Last week, researchers expanded the size of the mouse brain by giving rodents a piece of human DNA. Now another team has topped that feat, pinpointing a human gene that not only grows the mouse brain but also gives it the distinctive folds found in primate brains. The work suggests that scientists are finally beginning to unravel some of the evolutionary steps that boosted the cognitive powers of our species. “This study represents a major milestone in our understanding of the developmental emergence of human uniqueness,” says Victor Borrell Franco, a neurobiologist at the Institute of Neurosciences in Alicante, Spain, who was not involved with the work. The new study began when Wieland Huttner, a developmental neurobiologist at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, and his colleagues started closely examining aborted human fetal tissue and embryonic mice. “We specifically wanted to figure out which genes are active during the development of the cortex, the part of the brain that is greatly expanded in humans and other primates compared to rodents,” says Marta Florio, the Huttner graduate student who carried out the main part of the work. That was harder than it sounded. Building a cortex requires several kinds of starting cells, or stem cells. The stem cells divide and sometimes specialize into other types of “intermediate” stem cells that in turn divide and form the neurons that make up brain tissue. To learn what genes are active in the two species, the team first had to develop a way to separate out the various types of cortical stem cells. © 2015 American Association for the Advancement of Science
by Helen Thomson We meet in a pub, we have a few drinks, some dinner and then you lean in for a kiss. You predict, based on our previous interactions, that the kiss will be reciprocated – rather than landing you with a slap in the face. All our social interactions require us to anticipate another person's undecided intentions and actions. Now, researchers have discovered specific brain cells that allow monkeys to do this. It is likely that the cells do the same job in humans. Keren Haroush and Ziv Williams at Harvard Medical School trained monkeys to play a version of the prisoner's dilemma, a game used to study cooperation. The monkeys sat next to each other and decided whether or not they wanted to cooperate with their companion, by moving a joystick to pick either option. Moving the joystick towards an orange circle meant cooperate, a blue triangle meant "not this time". Neither monkey could see the other's face, or receive any clues about their planned action. If the monkeys cooperated, both received four drops of juice. If one cooperated and the other decided not to, the one who cooperated received one drop, and the other received six drops of juice. If both declined to work together they both received two drops of juice. Once both had made their selections, they could see what the other monkey had chosen and hear the amount of juice their companion was enjoying. © Copyright Reed Business Information Ltd.
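The juice payoffs described above have the structure of a standard prisoner's dilemma, which is easy to lay out explicitly. In this sketch, 'C' (cooperate) and 'D' (decline, "not this time") are our shorthand labels, not the researchers' notation:

```python
# The juice payoffs from the monkey version of the prisoner's dilemma,
# as described in the article (drops of juice per trial).
# 'C' = cooperate (orange circle); 'D' = decline (blue triangle).

PAYOFFS = {
    ('C', 'C'): (4, 4),  # both cooperate: four drops each
    ('C', 'D'): (1, 6),  # lone cooperator gets one drop, the decliner six
    ('D', 'C'): (6, 1),
    ('D', 'D'): (2, 2),  # both decline: two drops each
}

def play(a, b):
    """Return (juice for monkey A, juice for monkey B)."""
    return PAYOFFS[(a, b)]

print(play('C', 'D'))  # -> (1, 6): the classic temptation to defect
```

The dilemma lies in the ordering 6 > 4 > 2 > 1: each monkey individually does better by declining, yet mutual cooperation beats mutual defection.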
Link ID: 20627 - Posted: 02.27.2015
Elizabeth Gibney

DeepMind, the Google-owned artificial-intelligence company, has revealed how it created a single computer algorithm that can learn how to play 49 different arcade games, including the 1970s classics Pong and Space Invaders. In more than half of those games, the computer became skilled enough to beat a professional human player.

The algorithm — which has generated a buzz since publication of a preliminary version in 2013 (V. Mnih et al. Preprint at http://arxiv.org/abs/1312.5602; 2013) — is the first artificial-intelligence (AI) system that can learn a variety of tasks from scratch given only the same, minimal starting information. “The fact that you have one system that can learn several games, without any tweaking from game to game, is surprising and pretty impressive,” says Nathan Sprague, a machine-learning scientist at James Madison University in Harrisonburg, Virginia.

DeepMind, which is based in London, says that the brain-inspired system could also provide insights into human intelligence. “Neuroscientists are studying intelligence and decision-making, and here’s a very clean test bed for those ideas,” says Demis Hassabis, co-founder of DeepMind. He and his colleagues describe the gaming algorithm in a paper published this week (V. Mnih et al. Nature 518, 529–533; 2015).

Games are to AI researchers what fruit flies are to biology — a stripped-back system in which to test theories, says Richard Sutton, a computer scientist who studies reinforcement learning at the University of Alberta in Edmonton, Canada. “Understanding the mind is an incredibly difficult problem, but games allow you to break it down into parts that you can study,” he says. But so far, most human-beating computers — such as IBM’s Deep Blue, which beat chess world champion Garry Kasparov in 1997, and the recently unveiled algorithm that plays Texas Hold ’Em poker essentially perfectly (see Nature http://doi.org/2dw; 2015) — excel at only one game.

© 2015 Nature Publishing Group
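The reinforcement-learning idea at the core of DeepMind's system can be illustrated with a tabular Q-learning toy. To be clear about what is assumed here: the corridor task, the learning-rate and exploration parameters, and the table itself are our illustrative choices; the actual DQN replaces the table with a deep convolutional network trained from raw pixels, plus additions such as experience replay.

```python
import random

# Tabular Q-learning on a toy 5-state corridor: reach state 4 for reward +1.
# This sketches only the update rule underlying systems like DQN.

N_STATES = 5
ACTIONS = (1, -1)                  # move right or left
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(N_STATES - 1, max(0, s + a))     # clamp to the corridor
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        target = r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# After training, the greedy policy should head right (+1) from every state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

The appeal of games as a test bed is visible even in this toy: the environment supplies states, actions, and a score, and the same generic update rule does all the learning.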
By Michael Erard Freckle, a male rhesus monkey, was greeted warmly by his fellow monkeys at his new home in Amherst, Massachusetts, when he arrived in 2000. But he didn’t return the favor: He terrorized his cagemate by stealing his fleece blanket and nabbed each new blanket the researchers added, until he had 10 and his cagemate none. After a few months, Freckle had also acquired a new name: Ivan, short for Ivan the Terrible. Freckle/Ivan, now at Melinda Novak’s primate research lab at the University of Massachusetts, may be unusual in having two names, but all of his neighbors have at least one moniker, Novak says. “You can say, ‘Kayla and Zoe are acting out today,’ and everybody knows who Kayla and Zoe are,” Novak says. “If you say ‘ZA-56 and ZA-65 are acting up today,’ people pause.” Scientists once shied away from naming research animals, and many of the millions of mice and rats used in U.S. research today go nameless, except for special individuals. But a look at many facilities suggests that most of the other 891,161 U.S. research animals have proper names, including nonhuman primates, dogs, pigs, rabbits, cats, and sheep. Rats are Pia, Splinter, Oprah, Persimmon. Monkeys are Nyah, Nadira, Tas, Doyle. One octopus is called Nixon. Breeder pairs of mice are “Tom and Katie,” or “Brad and Angelina.” If you’re a mouse with a penchant for escape, you’ll be Mighty Mouse or Houdini. If you’re a nasty mouse, you’ll be Lucifer or Lucifina. Animals in research are named after shampoos, candy bars, whiskeys, family members, movie stars, and superheroes. They’re named after Russians (Boris, Vladimir, Sergei), colors, the Simpsons, historical figures, and even rival scientists. These unofficial names rarely appear in publications, except sometimes in field studies of primates. But they’re used daily. © 2015 American Association for the Advancement of Science.
Keyword: Animal Rights
Link ID: 20625 - Posted: 02.27.2015
By DENISE GRADY Faced with mounting evidence that general anesthesia may impair brain development in babies and young children, experts said Wednesday that more research is greatly needed and that when planning surgery for a child, parents and doctors should consider how urgently it is required, particularly in children younger than 3 years. In the United States each year, about a million children younger than 4 have surgery with general anesthesia, according to the Food and Drug Administration. So far, the threat is only a potential one; there is no proof that children have been harmed. The concern is based on two types of research. Experiments in young monkeys and other animals have shown that commonly used anesthetics and sedatives can kill brain cells, diminish learning and memory and cause behavior problems. And studies in children have found an association between learning problems and multiple exposures to anesthesia early in life — though not single exposures. But monkeys are not humans, and association does not prove cause and effect. Research now underway is expected to be more definitive, but results will not be available for several years. Anesthesiologists and surgeons are struggling with how — and sometimes whether — to explain a theoretical hazard to parents who are already worried about the real risks of their child’s medical problem and the surgery needed to correct it. If there is a problem with anesthesia, in many cases it may be unavoidable because there are no substitute drugs. The last thing doctors want to do is frighten parents for no reason or prompt them to delay or cancel an operation that their child needs. “On the one hand, we don’t want to overstate the risk, because we don’t know what the risk is, if there is a risk,” said Dr. Randall P. Flick, a pediatric anesthesiologist and director of Mayo Clinic Children’s Center in Rochester, Minn., who has conducted some of the studies in children suggesting a link to learning problems. 
“On the other hand, we want to make people aware of the risk because we feel we have a duty to do so.” © 2015 The New York Times Company
People with attention deficit hyperactivity disorder are about twice as likely to die prematurely as those without the disorder, say researchers. Researchers followed 1.92 million Danes, including 32,000 with ADHD, from birth through to 2013.

"In this nationwide prospective cohort study with up to 32-year follow-up, children, adolescents and adults with ADHD had decreased life expectancy and more than double the risk of death compared with people without ADHD," Soren Dalsgaard, from Aarhus University in Denmark, and his co-authors concluded in Wednesday's online issue of Lancet. "People diagnosed with ADHD in adulthood had a greater risk of death than did those diagnosed in childhood and adolescence. This finding could be caused by persistent ADHD being a more severe form of the disorder."

Of the 107 individuals with ADHD who died, information on cause of death was available for 79. Of those, 25 died from natural causes and 54 from unnatural causes, including 42 from accidents. Being diagnosed with ADHD along with oppositional defiant disorder, conduct disorder and substance use disorder also increased the risk of death, the researchers found. Mortality risk was also higher for females than males, which led Dalsgaard to stress the need for early diagnosis, especially in girls and women, and to treat co-existing disorders.
Although talk of premature death will worry parents and patients, they can seek solace in knowing the absolute risk of premature death at an individual level is low and can be greatly reduced with treatment, Stephen Faraone, a professor of psychiatry and director of child and adolescent psychiatry research at SUNY Upstate Medical University in New York, said in a journal commentary published with the study. ©2015 CBC/Radio-Canada.
By Matthew Hutson

We like to think of our moral judgments as consistent, but they can be as capricious as moods. Research reveals that such judgments are swayed by incidental emotions and perceptions—for instance, people become more moralistic when they feel dirty or sense contamination, such as in the presence of moldy food. Now a series of studies shows that hippies, the obese and “trailer trash” suffer prejudicial treatment because they tend to elicit disgust.

Researchers asked volunteers to read short paragraphs about people committing what many consider to be impure acts, such as watching pornography, swearing or being messy. Some of the paragraphs described the individuals as being a hippie, obese or trailer trash—and the volunteers judged these fictional sinners more harshly, according to the paper in the Journal of Experimental Psychology: General. Questionnaires revealed that feelings of disgust toward these groups were driving the volunteers' assessments. A series of follow-up studies solidified the link, finding that these groups also garnered greater praise for purity-related virtues, such as keeping a neat cubicle. If the transgression in question did not involve purity, such as not tipping a waiter, the difference in judgment disappeared.

“The assumption people have is that we draw on values that are universal and important,” says social psychologist E. J. Masicampo of Wake Forest University, who led the study, “but something like mentioning that a person is overweight can really push that judgment around. It's triggering these gut-level emotions.” The researchers also looked for real-world effects.

© 2015 Scientific American
By Christian Jarrett Imagine a politician from your party is in trouble for alleged misdemeanors. He’s been assessed by an expert who says he likely has early-stage Alzheimer’s. If this diagnosis is correct, your politician will have to resign, and he’ll be replaced by a candidate from an opposing party. This was the scenario presented to participants in a new study by Geoffrey Munro and Cynthia Munro. A vital twist was that half of the 106 student participants read a version of the story in which the dementia expert based his diagnosis on detailed cognitive tests; the other half read a version in which he used a structural MRI brain scan. All other story details were matched, such as the expert’s years of experience in the field, and the detail provided for the different techniques he used. Overall, the students found the MRI evidence more convincing than the cognitive tests. For example, 69.8 percent of those given the MRI scenario said the evidence the politician had Alzheimer’s was strong and convincing, whereas only 39.6 percent of students given the cognitive tests scenario said the same. MRI data was also seen to be more objective, valid and reliable. Focusing on just those students in both conditions who showed skepticism, over 15 percent who read the cognitive tests scenario mentioned the unreliability of the evidence; none of the students given the MRI scenario cited this reason. In reality, a diagnosis of probable Alzheimer’s will always be made with cognitive tests, with brain scans used to rule out other explanations for any observed test impairments. The researchers said their results are indicative of naive faith in the trustworthiness of brain imaging data. 
“When one contrasts the very detailed manuals accompanying cognitive tests to the absences of formalized operational criteria to guide the clinical interpretation of structural brain MRI in diagnosing disease, the perception that brain MRI is somehow immune to problems of reliability becomes even more perplexing,” they said. WIRED.com © 2015 Condé Nast.
By Francis Shen and Dena Gromet

Neuroscience is appearing everywhere. And the legal system is taking notice. The past few years have seen the emergence of “neurolaw.” A spread in the NYT Magazine, a best-selling NYT book, a primetime PBS documentary, the first Law and Neuroscience casebook, and a multimillion-dollar investment from the MacArthur Foundation to fund a Research Network on Law and Neuroscience have all fueled interest in how neuroscience might revolutionize the law.

The potential implications of neurolaw are broad. For example, future developments in brain science might allow: criminal law to better identify recidivists; tort law to better differentiate between those in real pain and those who are faking; insurance law to more accurately and adequately compensate those with mental illness; and end-of-life law to more ethically treat patients who might be able to communicate only through their thoughts. Increasingly, courts, including the U.S. Supreme Court, and legislatures are citing brain evidence. But despite the media coverage, and much enthusiasm from science and legal elites, our new research shows that Americans know very little about neurolaw, and that Republicans and independents may diverge from Democrats in their support for neuroscience-based legal reforms.

In our study, we conducted an experiment within a national survey of Americans (more details about the survey are in our article). Everyone in the survey was told that, “Recently developed neuroscientific techniques allow researchers to see inside the human brain as never before.”
Julie Beck

When Paul Ekman was a grad student in the 1950s, psychologists were mostly ignoring emotions. Most psychology research at the time was focused on behaviorism—classical conditioning and the like. Silvan Tomkins was the one other person Ekman knew of who was studying emotions, and he’d done a little work on facial expressions that Ekman saw as extremely promising. “To me it was obvious,” Ekman says. “There’s gold in those hills; I have to find a way to mine it.”

For his first cross-cultural studies in the 1960s, he traveled around the U.S., Chile, Argentina, and Brazil. In each location, he showed people photos of different facial expressions and asked them to match the images with six different emotions: happiness, sadness, anger, surprise, fear, and disgust. “There was very high agreement,” Ekman says. People tended to match smiling faces with “happiness,” furrow-browed, tight-lipped faces with “anger,” and so on. But these responses could have been influenced by culture. The best way to test whether emotions were truly universal, he thought, would be to repeat his experiment in a totally remote society that hadn’t been exposed to Western media. So he planned a trip to Papua New Guinea, his confidence bolstered by films he’d seen of the island’s isolated cultures: “I never saw an expression I wasn’t familiar with in our culture,” he says.

Once there, he showed locals the same photos he’d shown his other research subjects. He gave them a choice between three photos and asked them to pick images that matched various stories (such as “this man’s child has just died”). Adult participants chose the expected emotion between 28 and 100 percent of the time, depending on which photos they were choosing among. (The 28 percent was a bit of an outlier: That was when people had to choose between fear, surprise, and sadness. The next lowest rate was 48 percent.)

© 2014 by The Atlantic Monthly Group.
Link ID: 20619 - Posted: 02.26.2015
by Jennifer Viegas

It’s long been suspected that males of many species, including humans, can sniff out whether a female is pregnant, and now new research suggests that some — if not all — female primates release a natural “pregnancy perfume” that males can probably detect. What’s more, such scents appear to broadcast whether the mom-to-be is carrying a boy or a girl.

The study, published in the journal Biology Letters, focused on lemurs as a model for primates. It presents the first direct evidence in any animal species that a pregnant mother’s scent differs depending on the sex of her baby. The scent signatures “may help guide social interactions, potentially promoting mother–infant recognition, reducing intragroup conflict” or sort out paternity, wrote authors Jeremy Crawford and Christine Drea. The latter presents a loaded scenario, as it could be that males can sense — even before the birth — whether they fathered the baby. The researchers additionally suspect that odors advertising fetal sex may help dads and moms prepare for what’s to come.

Crawford, from the University of California, Berkeley, and Drea, from Duke University, used cotton swabs to collect scent secretions from the genital regions of 12 female ringtailed lemurs at the Duke Lemur Center in Durham, N.C., before and during pregnancy. The scientists next used chemical analysis to identify the hundreds of ingredients that make up each female’s scent, and how that scent changed during pregnancy. A surprising finding from this is that expectant lemur moms give off simpler scents that contain fewer odor compounds compared with their pre-pregnancy bouquet. The change is more pronounced when the moms are carrying boys, Drea said.

© 2015 Discovery Communications, LLC.
by Hal Hodson

Bionic hands are go. Three men with serious nerve damage had their hands amputated and replaced by prosthetic ones that they can control with their minds. The procedure, dubbed "bionic reconstruction", was carried out by Oskar Aszmann at the Medical University of Vienna, Austria.

The men had all suffered accidents which damaged the brachial plexus – the bundle of nerve fibres that runs from the spine to the hand. Despite attempted repairs to those nerves, the arm and hand remained paralysed. "But still there are some nerve fibres present," says Aszmann. "The injury is so massive that there are only a few. This is just not enough to make the hand alive. They will never drive a hand, but they might drive a prosthetic hand."

This approach works because the prosthetic hands come with their own power source. Aszmann's patients plug their hands in to charge every night. Relying on electricity from the grid to power the hand means all the muscles and nerves need do is send the right signals to a prosthetic.

Before the operation, Aszmann's patients had to prepare their bodies and brains. First he transplanted leg muscle into their arms to boost the signal from the remaining nerve fibres. Three months later, after the nerves had grown into the new muscle, the men started training their brains.

© Copyright Reed Business Information Ltd.
Link ID: 20617 - Posted: 02.26.2015
By Jocelyn Kaiser The number of animals used by the top federally funded U.S. biomedical research institutions has risen 73% over 15 years, a “dramatic increase” driven mostly by more mice, concludes an animal rights group. They say researchers are not doing enough to reduce their use of mice, which are exempt from some federal animal protection laws. The National Institutes of Health (NIH), which collected the data, says the analysis by People for the Ethical Treatment of Animals (PETA) is “inappropriate.” The analysis was published online today in the Journal of Medical Ethics. Although the Animal Welfare Act requires that the U.S. Department of Agriculture track research labs’ use of cats, dogs, and nonhuman primates, smaller vertebrates—including mice, rats, fish, and birds bred for research—are exempt. To get a sense of the trends, PETA filed Freedom of Information Act requests for data from inventories that NIH-funded institutions must submit to NIH every 4 years to receive an “assurance” allowing them to do animal research. Looking at the 25 top NIH-funded institutions, PETA found these institutions housed a daily average of about 74,600 animals between 1997 and 2003; that leaped to an average of about 128,900 a day by 2008 to 2012, a 73% increase. (Because institutions don’t report at the same time, PETA combined figures over three time periods.) © 2015 American Association for the Advancement of Science
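The headline figure is straightforward to verify from the daily averages quoted in the article:

```python
# Checking the percentage increase reported from the PETA analysis:
# a daily average of ~74,600 animals (1997-2003) rising to ~128,900
# (2008-2012).

before, after = 74_600, 128_900
increase = (after - before) / before * 100
print(f"{increase:.1f}%")  # -> 72.8%, which the article rounds to 73%
```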
Keyword: Animal Rights
Link ID: 20616 - Posted: 02.26.2015