Chapter 1. Biological Psychology: Scope and Outlook
Robert Newman: The Brain Show review – chewy neuro-comedy. Dissing bad science, capitalists and Brian Cox, Robert Newman’s low-octane cabinet of neuroscientific curiosities has nonconformist bite. Tash Reith-Banks: I discovered Rob Newman’s comedy when I was 16. His shows were relentless: packed full of quotes, arguments, anger, history, philosophy and, above all, bladder-ruining laughs. Oil, urban angst, war, climate change and capitalism – Newman tore into all of these subjects and more with verve, wit, and what must have been a well-used library card. Twenty years on, his latest piece, The Brain Show, finds Newman on good form. He’s less angry young man, more genial, worried uncle. The laughs are still very much there, perhaps a shade gentler. One thing is still guaranteed: you’ll leave with a brain significantly fuller than before and a long reading list. The show itself majors on a sceptical look at neuroscience, especially what Newman sees as attempts to reduce the human brain to the status of a “wet computer”. He pours particular scorn on two experiments aimed at portioning the brain into neat, discrete emotional zones; he feels similarly about geneticists who think they can identify a homelessness gene, or one for low voter turnout. Brian Cox gets a special mention as a figurehead for lazily generalised science, with a wicked impression of Cox walking an audience through the growing and evolving human brain. As Newman later pointed out to me, citing Stephen Jay Gould: “the world we make, makes us. Cro-Magnon had the same brain as us, possibly slightly larger. Everything we’ve done since then has been the product of evolution on a brain of unvarying capacity.” © 2016 Guardian News and Media Limited
By Emily Underwood In 2008, in El Cajon, California, 30-year-old John Nicholas Gunther bludgeoned his mother to death with a metal pipe, and then stole $1,378 in cash, her credit cards, a DVD/VCR player, and some prescription painkillers. At trial, Gunther admitted to the killing, but argued that his conviction should be reduced to second-degree murder because he had not acted with premeditation. A clinical psychologist and a neuropsychologist testified that two previous head traumas—one the result of an assault, the other from a drug overdose—had damaged his brain’s frontal lobes, potentially reducing Gunther’s ability to plan the murder and causing him to act impulsively. The jury didn’t buy Gunther’s defense, however; based on other evidence, such as the fact that Gunther had previously talked with friends about killing his mother, the court concluded that he was guilty of first-degree murder and gave him a 25-years-to-life prison sentence. Gunther’s case represents a growing trend, a new analysis suggests. Between 2005 and 2012, more than 1,585 published U.S. judicial opinions described the use of neurobiological evidence by criminal defendants to shore up their defense, according to a study published last week in the Journal of Law and the Biosciences by legal scholar Nita Farahany of Duke University in Durham, North Carolina, and colleagues. In 2012 alone, more than 250 opinions cited defendants’ arguments that their “brains made them do it”—more than double the number of similar claims made in 2007. © 2016 American Association for the Advancement of Science
Keyword: Drug Abuse
Link ID: 21816 - Posted: 01.23.2016
By Brian Owens Guy Rouleau, the director of McGill University’s Montreal Neurological Institute (MNI) and Hospital in Canada, is frustrated with how slowly neuroscience research translates into treatments. “We’re doing a really shitty job,” he says. “It’s not because we’re not trying; it has to do with the complexity of the problem.” So he and his colleagues at the renowned institute decided to try a radical solution. Starting this year, any work done there will conform to the principles of the “open-science” movement—all results and data will be made freely available at the time of publication, for example, and the institute will not pursue patents on any of its discoveries. Although some large-scale initiatives like the government-funded Human Genome Project have made all data completely open, MNI will be the first scientific institute to follow that path, Rouleau says. “It’s an experiment; no one has ever done this before,” he says. The intent is that neuroscience research will become more efficient if duplication is reduced and data are shared more widely and earlier. Opening access to the tissue samples in MNI’s biobank and to its extensive databank of brain scans and other data will have a major impact, Rouleau hopes. “We think that it is a way to accelerate discovery and the application of neuroscience.” After a year of consultations among the institute’s staff, pretty much everyone—about 70 principal investigators and 600 other scientific faculty and staff—has agreed to take part, Rouleau says. Over the next 6 months, individual units will hash out the details of how each will ensure that its work lives up to guiding principles for openness that the institute has developed. They include freely providing all results, data, software, and algorithms; and requiring collaborators from other institutions to also follow the open principles. © 2016 American Association for the Advancement of Science.
Link ID: 21813 - Posted: 01.23.2016
By Christof Koch While “size does not matter” is a universally preached dictum among the politically correct, everyday experience tells us that this can't be the whole story—under many conditions, it clearly does. Consider the size of Woody Allen's second favorite organ, the brain. Adjectives such as “highbrow” and “lowbrow” have their origin in the belief, much expounded by 19th-century phrenologists, of a close correspondence between a high forehead—that is, a big brain—and intelligence. Is this true? Does a bigger brain necessarily make you smarter or wiser? And is there any simple connection between the size of a nervous system, however measured, and the mental powers of its owner? While the answer to the first question is a conditional “yes, somewhat,” the lack of any accepted answer to the second reveals our ignorance of how intelligent behavior comes about. The human brain continues to grow until it reaches its peak size in the third to fourth decade of life. An MRI study of 46 adults of mainly European descent found that the average male brain had a volume of 1,274 cubic centimeters (cm3) and that the average female brain measured 1,131 cm3. Given that a quart of milk equals 946 cm3, you could pour a bit more than that into a skull without any of it spilling out. Of course, there is considerable variability in brain volume, ranging from 1,053 to 1,499 cm3 in men and between 975 and 1,398 cm3 in women. As the density of brain matter is just a little bit above that of water plus some salts, the average male brain weighs about 1,325 grams, close to the proverbial three pounds often cited in U.S. texts. © 2016 Scientific American
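The arithmetic in this passage can be checked directly. A quick sketch, where the density value is an assumption chosen to be "a little bit above" that of water, consistent with the stated weight:

```python
MALE_VOL_CM3 = 1274       # average male brain volume from the MRI study
QUART_CM3 = 946           # one US quart (the milk comparison)
DENSITY_G_PER_CM3 = 1.04  # assumed: slightly above water's 1.0 g/cm^3

male_weight_g = MALE_VOL_CM3 * DENSITY_G_PER_CM3  # ~1,325 g
weight_lb = male_weight_g / 453.6                 # ~2.9 lb, the "proverbial three pounds"
quarts_in_skull = MALE_VOL_CM3 / QUART_CM3        # ~1.35 quarts, "a bit more than" one
```

The numbers line up: 1,274 cm3 at that density gives about 1,325 grams, and the average male brain indeed holds a bit more than one quart.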
By Emily Underwood Lumos Labs, the company that produces the popular “brain-training” program Lumosity, yesterday agreed to pay a $2 million settlement to the Federal Trade Commission (FTC) for running deceptive advertisements. Lumos had claimed that its online games can help users perform better at work and in school and stave off cognitive deficits associated with serious diseases such as Alzheimer’s, traumatic brain injury, and post-traumatic stress. The $2 million settlement will be used to compensate Lumosity consumers who were misled by false advertising, says Michelle Rusk, a spokesperson with FTC in Washington, D.C. The company will also be required to provide an easy way to cancel autorenewal billing for the service, which includes online and mobile app subscriptions, with payments ranging from $14.95 monthly to lifetime memberships for $299.95. Before consumers can access the games, a pop-up screen will alert them to FTC’s order and allow them to avoid future billing, Rusk says. The action is part of a larger crackdown on companies selling products that purportedly enhance memory or provide some other cognitive benefit, Rusk says. For some time now, FTC has been “concerned about some of the claims we’re seeing out there,” particularly those from companies like Lumos that suggest their games can reduce the effects of conditions such as dementia, she says. After evaluating the literature on Lumos's products, and the broader research on the benefits of brain-training games, “our assessment was they didn’t have adequate science for the claims that they’re making,” she says. © 2016 American Association for the Advancement of Science
by Helen Thompson Earth’s magnetic field guides shark movement in the open ocean, but scientists had always suspected that sharks might also get their directions from an array of other factors, including smell. To sniff out smell’s role, biologists clogged the noses of leopard sharks (Triakis semifasciata), a Pacific coastal species that makes foraging trips out to deeper waters. Researchers released the sharks out at sea and tracked their path back to the California coast over four hours. Sharks with an impaired sense of smell only made it 37.2 percent of the way back to shore, while unimpaired sharks made it 62.6 percent of the way back to shore. The study provides the first experimental evidence that smell influences a shark’s sense of direction, the team writes January 6 in PLOS ONE. The animals may be picking up on chemical gradients produced by food sources that live on the coast. © Society for Science & the Public 2000 - 2015.
By R. Douglas Fields We all heard the warning as kids: “That TV will rot your brain!” You may even find yourself repeating the threat when you see young eyes glued to the tube instead of exploring the real world. The parental scolding dates back to the black-and-white days of I Love Lucy, and today concern is growing amid a flood of video streaming on portable devices. But are young minds really being harmed? With brain imaging, the effects of regular TV viewing on a child's neural circuits are plain to see. Studies suggest watching television for prolonged periods changes the anatomical structure of a child's brain and lowers verbal abilities. Behaviorally, even more detrimental effects may exist: although a cause-and-effect relation is hard to prove, higher rates of antisocial behavior, obesity and mental health problems correlate with hours in front of the set. Now a new study hits the pause button on this line of thinking. The researchers conclude that the entire body of research up to now has overlooked an important confounding variable, heredity, that could call into question the conventional wisdom that TV is bad for the brain. Further study will be needed to evaluate this claim, but the combined evidence suggests we need a more nuanced attitude toward our viewing habits. To understand the argument against television, we should rewind to 2013, when a team of researchers at Tohoku University in Japan, led by neuroscientist Hikaru Takeuchi, first published findings from a study in which the brains of 290 children between the ages of five and 18 were imaged. The kids' TV viewing habits, ranging from zero to four hours each day, were also taken into account. © 2016 Scientific American
By Elahe Izadi Tiny cameras attached to wild New Caledonian crows have captured, for the first time, video footage of these elusive birds fashioning hooked stick tools, according to researchers. These South Pacific birds build tools out of twigs and leaves that they use to root out food, and they're the only non-humans that make hooked tools in the wild, write the authors of a study published Wednesday in the journal Biology Letters. Humans have previously seen the crows making the tools in artificial situations, in which scientists baited feeding sites and provided the raw materials; but researchers say the New Caledonian crows have never been filmed doing this in a completely natural setting. "New Caledonian crows are renowned for their unusually sophisticated tool behavior," the study authors write. "Despite decades of fieldwork, however, very little is known about how they make and use their foraging tools in the wild, which is largely owing to the difficulties in observing these shy forest birds." Study author Jolyon Troscianko of the University of Exeter in England described the tropical birds as "notoriously difficult to observe" because of the terrain of their habitat and their sensitivity to disturbance, he said in a press release. "By documenting their fascinating behavior with this new camera technology, we obtained valuable insights into the importance of tools in their daily search for food," he added.
Tim Radford British scientists believe they have made a huge step forward in understanding the mechanisms of human intelligence. That genetic inheritance must play some part has never been disputed, but despite occasional claims that were later dismissed, no one has yet produced a single gene that controls intelligence. Michael Johnson of Imperial College London, a consultant neurologist, and colleagues report in Nature Neuroscience that they may have discovered a very different answer: two networks of genes, perhaps controlled by some master regulatory system, lie behind the human gift for lateral thinking, mental arithmetic, pub quizzes, strategic planning, cryptic crosswords and the ability to laugh at limericks. As usual, such research raises potentially politically loaded questions about the nature of intelligence. “Intelligence is a composite measure of different cognitive abilities and how they are distributed in a population. It doesn’t measure any one thing. But it is measurable,” Dr Johnson said. About 40% of the variation in intelligence is explained by inheritance; the other factors are not yet certain. But the scientists raise the distant possibility that, armed with the new information, they may be able to devise ways to modify human intelligence. “The idea of ultimately using drugs to affect cognitive performance is not in any way new. We all drink coffee to improve our cognitive performance,” Dr Johnson said. “It’s about understanding the pathways that are related to cognitive ability both in health and disease, especially disease so one day we could help people with learning disabilities fulfill their potential. That is very important.” © 2015 Guardian News and Media Limited
Parrots can dance and talk, and now apparently they can use and share grinding tools. They were filmed using pebbles for grinding, thought to be a uniquely human activity – one that allowed our civilisations to extract more nutrition from cereal-based foods. Megan Lambert from the University of York, UK, and her colleagues were studying greater vasa parrots (Coracopsis vasa) in an aviary when they noticed some of the birds scraping shells in their enclosure with pebbles and date pips. “We were surprised,” says Lambert. “Using tools [to grind] seashells is something never seen before in animals.” Afterwards, the birds would lick the powder from the tool. Some of the parrots even passed tools to each other, which is rarely seen in animals. This behaviour was exclusively male to female. Lambert and her team, who watched the parrots for six months, noticed that the shell-scraping was more frequent before their breeding season. Since seashells contain calcium, which is critical for females before egg-laying, they suspect that the parrots could be manufacturing their own calcium supplements, as the mineral is probably better absorbed in powder form. Greater vasa parrots are native to Madagascar and have breeding and social systems unique among parrots. For example, two or more males have an exclusive sexual relationship with two or more females, and they are unusually tolerant of their group members. The reproductive ritual of sharing tools and grinding could be yet another one of their quirks. © Copyright Reed Business Information Ltd.
By Nicholas Bakalar Watching television may be bad for your brain, a new study suggests. Researchers followed 3,274 people whose average age was 25 at the start of the study for 25 years, using questionnaires every five years to collect data on their physical activity and TV watching habits. At year 25, they administered three tests that measured various aspects of mental acuity. The highest level of TV watching — more than three hours a day most days — was associated with poor performance on all three tests. Compared with those who watched TV the least, those who watched the most had between one-and-a-half and two times the odds of poor performance on the tests, even after adjusting for age, sex, race, educational level, body mass index, smoking, alcohol use, hypertension and diabetes. Those with the lowest levels of physical activity and the highest levels of TV watching were the most likely to have poor test results. The authors acknowledge that their findings, published in JAMA Psychiatry, depend on self-reports, and that they had no baseline tests of cognitive function for comparison. “We can’t separate out what is going on with the TV watching,” said the lead author, Dr. Kristine Yaffe, a professor of psychiatry and neurology at the University of California, San Francisco. “Is it just the inactivity, or is there something about watching TV that’s the opposite of cognitive stimulation?” © 2015 The New York Times Company
Link ID: 21675 - Posted: 12.05.2015
Sara Reardon Panzee the chimpanzee was a skilled communicator that could tell untrained humans where to find hidden food by using gestures and vocalizations. Austin the chimp was particularly adept with a computer, and scientists have been scanning its genome for clues to its unusual cognitive abilities. Both apes lived at a language-research centre at Georgia State University in Atlanta, and both died several years ago — but they will live on in an online database of brain scans and behavioural data from nearly 250 chimpanzees. Researchers hope to combine this trove, now in development, with a biobank of chimpanzee brains to enable scientists anywhere in the world to study the animals’ neurobiology. This push to repurpose old data is especially timely now that the US National Institutes of Health (NIH) has decided to retire its remaining research chimpanzees. The agency decommissioned more than 300 animals in 2013, but kept 50 available for research in case of a public-health emergency. Following an 18 November decision, this remaining population will also be sent to sanctuaries in the coming years. The NIH also hopes to retire another 82 chimps that it supports but does not own, says director Francis Collins. “We were on a trajectory toward zero, and today’s the day we’re at zero,” says Jeffrey Kahn, a bioethicist at Johns Hopkins University in Baltimore, Maryland, who led a 2011 study on the NIH chimp colony for the Institute of Medicine. © 2015 Nature Publishing Group
by Sarah Zielinski Call someone a “bird brain” and they are sure to be offended. After all, it’s just another way of calling someone “stupid.” But it’s probably time to retire the insult because scientists are finding more and more evidence that birds can be pretty smart. Consider these five species: We may call pigeons “flying rats” for their penchant for hanging out in cities and grabbing an easy meal. (Long before there was “pizza rat,” you know there had to be “pizza pigeons” flying around New York City.) But there may be more going on in their brains than just where to find a quick bite. Richard Levenson of the University of California, Davis Medical Center and colleagues trained pigeons to recognize images of human breast cancers. In tests, the birds proved capable of sorting images of benign and malignant tumors. In fact, they were just as good as humans, the researchers report November 18 in PLOS ONE. In keeping with the pigeons’ reputation, though, food was the reward for their performance. No one would suspect the planet’s second-best toolmakers would be small black birds flying through mountain forests on an island chain east of Australia. But New Caledonian crows have proven themselves not only keen toolmakers but also pretty good problem-solvers, passing some tests that even dogs (and pigeons) fail. For example, when scientists present an animal with a bit of meat on a long string dangling down, many animals don’t ever figure out how to get the meat. Pull it up with one yank, and the meat is still out of reach. Some animals will figure out how to get it through trial and error, but a wild New Caledonian crow solved the problem — pull, step on string, pull some more — on its first try. © Society for Science & the Public 2000 - 2015
By James Gallagher Health editor, BBC News website A mass vaccination programme against meningitis A in Africa has been a "stunning success", say experts. More than 220 million people were immunised across 16 countries in the continent's meningitis belt. In 2013 there were just four cases across the entire region, which once faced thousands of deaths each year. However, there are fresh warnings from the World Health Organization that "huge epidemics" could return unless a new vaccination programme is started. The meningitis belt stretches across sub-Saharan Africa from Gambia in the west to Ethiopia in the east. In the worst epidemic recorded, in 1996-97, the disease swept across the belt infecting more than a quarter of a million people and led to 25,000 deaths. Unlike other vaccines, the MenAfriVac was designed specifically for Africa and in 2010 a mass vaccination campaign was started. "The disease has virtually disappeared from this part of the world," said Dr Marie-Pierre Preziosi from the World Health Organization. The mass immunisation programme was aimed at people under 30. However, routine vaccination will be needed to ensure that newborns are not vulnerable to the disease. Projections, published in the journal Clinical Infectious Diseases, showed the disease could easily return. Dr Preziosi told the BBC News website: "What could happen is a huge epidemic that could sweep the entire area, that could target hundreds of thousands of people with 5-10% deaths at least. © 2015 BBC
Link ID: 21624 - Posted: 11.11.2015
Richard A. Friedman You can increase the size of your muscles by pumping iron and improve your stamina with aerobic training. Can you get smarter by exercising — or altering — your brain? This is hardly an idle question, considering that cognitive decline is a nearly universal feature of aging. Starting at age 55, our hippocampus, a brain region critical to memory, shrinks 1 to 2 percent every year, to say nothing of the fact that one in nine people age 65 and older has Alzheimer’s disease. The number afflicted is expected to grow rapidly as the baby boom generation ages. Given these grim statistics, it’s no wonder that Americans are a captive market for anything, from supposed smart drugs and supplements to brain training, that promises to boost normal mental functioning or to stem its all-too-common decline. The very notion of cognitive enhancement is seductive and plausible. After all, the brain is capable of change and learning at all ages. Our brain has remarkable neuroplasticity; that is, it can remodel and change itself in response to various experiences and injuries. So can it be trained to enhance its own cognitive prowess? The multibillion-dollar brain-training industry certainly thinks so and claims that you can increase your memory, attention and reasoning just by playing various mental games. In other words, use your brain in the right way and you’ll get smarter. A few years back, a joint study by BBC and Cambridge University neuroscientists put brain training to the test. Their question was this: Do brain gymnastics actually make you smarter, or do they just make you better at doing a specific task? For example, playing the math puzzle KenKen will obviously make you better at KenKen. But does the effect transfer to another task you haven’t practiced, like a crossword puzzle? © 2015 The New York Times Company
Jon Hamilton For a few days this week, a convention center in Chicago became the global epicenter of brain science. Nearly 30,000 scientists swarmed through the vast hallways of the McCormick Place convention center as part of the annual Society for Neuroscience meeting. Among them were Nobel Prize winners, the director of the National Institutes of Health, and scores of researchers regarded as the international rock stars of neuroscience. "It's amazing. I'm a bit overwhelmed," said Kara Furman, a graduate student from Yale who was attending her first Society for Neuroscience meeting. Furman was just one of several hundred neuroscientists I found standing in lines outside the center one afternoon, waiting for shuttle buses. She was pondering a presentation from a few hours earlier that she found "pretty mind-blowing." What was it about? "Using MRI techniques to access dopamine release at the molecular level," she told me, deadpan. Welcome to the five-day annual event that's become known simply as "The Neuro Meeting." It's where brain scientists from around the world come to present their own work and discover the "mind-blowing" research others are doing. And there are thousands of presentations to choose from. "I prepared an itinerary based on my interests and that ran into 20 pages," said Srinivas Bharath from the National Institute of Mental Health and Neurosciences in Bangalore, India. © 2015 npr
Link ID: 21553 - Posted: 10.23.2015
By Melissa Dahl Next time you feel you are in danger of losing an argument, make some obscure reference to the brain. Any nod to neuroscience will do, even if it doesn’t actually illuminate the problem at hand or prove anything that halfway resembles a point. People tend to find explanations that include references to the brain very convincing, even if those references are mostly nonsense, according to the latest episode of "Psych Crunch," a podcast hosted by psychologist (and Science of Us contributor) Christian Jarrett. Jarrett interviews Sara Hodges, a research psychologist at the University of Oregon and the co-author of a study published this May on the appeal of “superfluous neuroscience information.” In it, Hodges and her colleagues presented students with a variety of explanations for various psychological phenomena. Some of these explanations were not really explanations at all, but rather just a restatement of the facts already presented. The students considered explanations for various quirks of human behavior from the fields of social science, biological science, and neuroscience, and rated how convincing they found each explanation. “The social sciences would refer to something about how people were raised, and the hard-science explanation referred to changes in DNA, the structure of DNA,” Hodges explained to Jarrett. The neuroscience explanation, on the other hand, would pretty much just name an area of the brain thought to be associated with the behavior at hand and leave matters at that, without really explaining anything. Even still, Hodges said, the “neuroscience explanations always came out on top — better than no explanation, better than social science, better than the hard science.” © 2015, New York Media LLC
Link ID: 21552 - Posted: 10.23.2015
by Ben Cipollini Thanks to Ms. Amazing, it’s now cliché to say, but hey… I really love SfN. For the uninitiated, SfN is a thirty-thousand-person international conference for neuroscience–a conference so large, only a few cities in the US can handle it. Yes, that’s a giant C-SPAN2 bus that's dwarfed by this small section of the “Great Room”. For many, SfN evokes fear and dread; it’s truly overwhelming in its size, breadth, and depth. For me, it was love at first “OM*G!!!”. Don’t believe me, scientists? Let’s review the data: I loathe running, but I actually do it at SfN. One needs wheels to get from talks to posters to talks again. We filled the New Orleans convention center in 2012; it’s so long you can actually get directions from one end of it to the other on Google Maps. Yes, that map does say “1.0 kilometers”. I hate crowds, but I will fight through the poster session crowds like a salmon heading upstream to spawn, just to get to one more poster before the end of the session. SfN may have more human traffic jams than China has vehicle jams during Golden Week… but that won’t stop me from finding out how callosal connections have properties similar to those of long-range lateral connections, or to understand how hemispherectomy affects functional organization. You’d better, too; you never know when one of your research heroes might be presenting the poster, or you’ll find yourself standing in front of a poster that winds up in Science just a few months later.
Link ID: 21522 - Posted: 10.17.2015
By AUSTIN RAMZY HONG KONG — Australian officials have responded to criticism from animal rights activists and celebrities, including the former actress Brigitte Bardot and the singer Morrissey, that a government plan to protect threatened species by killing millions of feral cats is unnecessarily cruel. Gregory Andrews, Australia’s threatened species commissioner, has written open letters to Ms. Bardot and Morrissey saying that feral cats prey on more than 100 of the country’s threatened species and that they were a “major contributor” to the extinction of at least 27 mammal species in the country over the past 200 years. He called some of the extinct species, such as the lesser bilby, desert bandicoot, crescent nailtail wallaby and big-eared hopping mouse, “delightful creatures, rich in importance in Australian indigenous culture, and formerly playing important roles in the ecology of our country. We don’t want to lose any more species like these.” The Australian Department of the Environment says that feral cats are the biggest threat to the country’s mammals, ahead of foxes and habitat loss. The government plan would use poison and traps to kill the cats. In announcing the plan in July, Greg Hunt, the environment minister, said that he wanted two million feral cats culled by 2020. Australia has an estimated 20 million feral cats, which are an invasive species brought by European settlers. Calls to exterminate the cats have been floated before, including one in the 1990s that called for killing all feral cats by 2020. © 2015 The New York Times Company
Keyword: Animal Rights
Link ID: 21512 - Posted: 10.15.2015
By Martin Enserink Researchers who conduct animal studies often don't use simple safeguards against biases that have become standard in human clinical trials—or at least they don't report doing so in their scientific papers, making it impossible for readers to ascertain the quality of the work, an analysis of more than 2500 journal articles shows. Such biases, conscious or unconscious, can make candidate medical treatments look better than they actually are, the authors of the analysis warn, and lead to eye-catching results that can't be replicated in larger or more rigorous animal studies—or in human trials. Neurologist Malcolm MacLeod of the Centre for Clinical Brain Sciences at the University of Edinburgh and his colleagues combed through papers reporting the efficacy of drugs in eight animal disease models and checked whether the authors reported four measures that are widely acknowledged to reduce the risk of bias. First, if there was an experimental group and a control group, were animals randomly assigned to either one? (This makes it impossible for scientists to, say, assign the healthiest mice or rats to a treatment group, which could make a drug look better than it is.) Second, were the researchers who assessed the outcomes of a trial—for instance, the effect of a treatment on an animal's health—blinded to which animal underwent what procedure? Third, did the researchers calculate the required sample size in advance, to show that they didn't just accumulate data until they found something significant? And finally, did they make a statement about their conflicts of interest? © 2015 American Association for the Advancement of Science
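Two of these safeguards, random assignment and an a-priori sample-size calculation, can be sketched in a few lines of Python. This is an illustrative sketch rather than code from the study: the function names are invented, and the power formula is the standard normal approximation for comparing two group means (effect size = expected difference divided by the standard deviation).

```python
import math
import random

def sample_size_per_group(effect_size, z_alpha=1.96, z_beta=0.8416):
    """Animals per group, computed *before* the experiment
    (defaults: two-sided alpha = 0.05, power = 0.80)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

def randomize(animal_ids, seed=None):
    """Shuffle animal IDs and split them in half, so no one can steer
    the healthiest animals into the treatment group."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

n = sample_size_per_group(effect_size=1.0)  # 16 animals per group
groups = randomize(range(2 * n), seed=42)
```

Blinding, the second safeguard, then amounts to letting outcome assessors see only the shuffled animal IDs, never the group labels.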