Chapter 1. Biological Psychology: Scope and Outlook
by Virginia Morell Alex, an African grey parrot who died 5 years ago and was known for his ability to use English words, also understood a great deal about numbers. In a new study in this month's Cognition, scientists show that Alex correctly inferred the relationship between cardinal and ordinal numbers, an ability that has not previously been found in any species other than humans. After learning the cardinal numbers—or exact values—of one to six, Alex was taught the ordinal values (the position of a number in a list) of seven and eight—that is, he learned that six is less than seven, and seven is less than eight. He was never taught the cardinal values of seven and eight—but when tested on this, he passed with flying colors, apparently inferring, for instance, that the sound "seven" meant six plus one. In one of these experiments, comparative psychologist Irene Pepperberg of Harvard University asked Alex to pick out the set of colored blocks that equaled the number seven. © 2010 American Association for the Advancement of Science.
By Cari Nierenberg The strange folds and furrows covering a Brazilian man's entire scalp were neither a funky new look nor a hipster trend. Rather, the 21-year-old's bizarre-looking scalp, with its deep skin folds in a pattern said to resemble the surface of the brain, is a sign of a rare medical condition known as cutis verticis gyrata. In this week's New England Journal of Medicine, two Brazilian doctors describe the young man's case and share a picture of its odd appearance. When he was 19, the skin on his scalp started to change. It grew thicker, forming many soft, spongy ridges and narrow ruts. Even his hair had an unusual configuration: it was normal in the furrows but sparser over the folds, as is common for this strange scalp condition. No doubt, visits to the barber shop, as well as washing his squishy scalp and combing his hair, were peculiar experiences. Despite the extent of scalp affected, "the patient did not have the habit of covering his head," with a hat, for instance, says Dr. Karen Schons, a dermatologist at the Hospital Universitario de Santa Maria, who examined the patient and co-authored the case study. In fact, the case study reports that "the condition did not bother him cosmetically." © 2012 NBCNews.com
Zoë Corbyn Conventional wisdom says that most retractions of papers in scientific journals are triggered by unintentional errors. Not so, according to one of the largest-ever studies of retractions. A survey published in Proceedings of the National Academy of Sciences has found that two-thirds of retracted life-sciences papers were stricken from the scientific record because of misconduct such as fraud or suspected fraud — and that journals sometimes soft-pedal the reason. The survey examined all 2,047 articles in the PubMed database that had been marked as retracted by 3 May this year. But rather than taking journals’ retraction notices at face value, as previous analyses have done, the study used secondary sources to pin down the reasons for retraction if the notices were incomplete or vague. These sources included investigations by the US Office of Research Integrity, and evidence reported by the blog Retraction Watch. The analysis revealed that fraud or suspected fraud was responsible for 43% of the retractions. Other types of misconduct — duplicate publication and plagiarism — accounted for 14% and 10% of retractions, respectively. Only 21% of the papers were retracted because of error (see ‘Bad copy’). Earlier studies had found that the percentage of retractions attributable to error was 1.5–3 times higher. “The secondary sources give a very different picture,” says Arturo Casadevall, a microbiologist at Yeshiva University in New York, and a co-author of the latest study. “Retraction notices are often not accurate.” © 2012 Nature Publishing Group
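The survey's two-thirds figure follows from the category percentages reported in the blurb; a quick tally makes the arithmetic explicit (percentages are from the article; the unlisted remainder covers miscellaneous or unknown causes):

```python
# Percentages of retracted PubMed papers by cause, as reported in the survey.
retraction_causes = {
    "fraud or suspected fraud": 43,
    "duplicate publication": 14,
    "plagiarism": 10,
    "error": 21,
}

# Fraud, duplicate publication, and plagiarism are the misconduct categories.
misconduct = sum(retraction_causes[c] for c in
                 ("fraud or suspected fraud", "duplicate publication", "plagiarism"))
print(f"misconduct accounts for {misconduct}% of retractions")  # 67%, i.e. two-thirds
```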
Clint Witchalls James R. Flynn is Professor Emeritus at the University of Otago, New Zealand. Flynn researches intelligence and is best known for the discovery that, over the past century, IQs have been rising at a rate of about 3 points per decade (the Flynn effect). In advance of his new book on the subject, Clint Witchalls asked him about this and some of Professor Flynn's more recent research findings: Clint Witchalls: How has our way of thinking and of solving problems changed over the past century? James R. Flynn: Today we take it for granted that using logic on the abstract is an ability we want to cultivate, and we are interested in the hypothetical. People from 1900 were not scientifically oriented but utilitarian; they used logic, but to use it on the hypothetical or on abstractions was foreign to them. Alexander Luria [a Soviet psychologist] went to talk to headmen in villages in rural Russia and he said to them: "Where there is always snow, bears are white. At the North Pole there is always snow, what colour are the bears there?" And they said: "I've only seen brown bears." And he said: "What do my words convey?" And they said: "Such a thing as not to be settled by words but by testimony." They didn't settle questions of fact by logic, they settled them by experience. Your research found that we have gained 30 points on IQ tests in a century. What is the reason? The ultimate cause of why IQs are rising is the industrial revolution. The proximate cause is how our minds differ from people in 1900 when in the test room. And the intermediate causes, of course, are more cognitively demanding work roles, more cognitively demanding leisure, more formal schooling, and smaller families. © independent.co.uk
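The two rates quoted in the interview, about 3 points per decade and 30 points per century, are the same figure at different scales, assuming the gain accumulates linearly. A trivial illustrative sketch (the constant-rate assumption is mine, for arithmetic only; the real trend varies by country and test):

```python
def flynn_gain(points_per_decade: float = 3.0, years: float = 100.0) -> float:
    """Cumulative IQ-score gain, assuming a constant rate over the period."""
    return points_per_decade * (years / 10.0)

print(flynn_gain())          # 30.0 points over a century, matching the figure cited
print(flynn_gain(years=50))  # 15.0 points over half a century
```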
By Melissa Healy Los Angeles Times Measuring human intelligence may be controversial and oh-so-very-tricky to do. But like obscenity, we think we know it when we see it. A new study, however, demonstrates a more rigorous way to see and measure differences in intelligence between individuals. It finds that connectedness among the brain's disparate regions is a key factor that separates the plodding from the penetrating. As many researchers have long suspected, intelligence does have a "seat" in the human brain: an area just behind each of the temples called the lateral prefrontal cortex. But researchers writing in the Journal of Neuroscience found that human behavior that is exceptionally flexible, responsive and capable of navigating complexity requires something more than a strong and active prefrontal cortex: that seat must be strongly and flexibly linked to brain regions involved in perception, memory, language and mobility. The researchers estimate that the strength of those connections, as measured when subjects rested between mental tasks, explains about 10% of differences in intelligence among individuals. That makes this measure an even better predictor of intelligence than brain size -- a measure that scientists believe may explain about 7% of the variation in intelligence among individuals. To detect this relationship, the study compared functional magnetic resonance imaging (fMRI) brain scans of 78 men and women between 18 and 40 years old with those subjects' performance on tests of cognitive performance that required "fluid intelligence" and "cognitive control." Subjects, for instance, were asked to count backwards by, say, nine, or to watch a series of visual images and then indicate whether a single image shown had been among them. Copyright 2012
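"Explains about 10% of differences" is a statement about shared variance; for a single predictor, the corresponding correlation is the square root of the variance explained. A back-of-the-envelope conversion (the identity is standard; applying it to the two figures in the blurb is my own illustration, not a calculation from the study):

```python
import math

def variance_to_correlation(r_squared: float) -> float:
    """For a single predictor, correlation r is the square root of variance explained."""
    return math.sqrt(r_squared)

print(round(variance_to_correlation(0.10), 2))  # connectivity: r of about 0.32
print(round(variance_to_correlation(0.07), 2))  # brain size:   r of about 0.26
```

Both correlations are modest, which is why the article frames connectivity as a "better predictor" rather than a decisive one.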
By Maria Konnikova It’s 1879, and psychology is just about to be born. The place: the University of Leipzig. The birth parent: Wilhelm Wundt. The deed: establishing the first official university laboratory for the study of psychology, an event taken by many as the line that separates informal exploration from empirical, accepted science. The laboratory has four rooms and a handful of students. By the early 1880s, it will grow to an astounding six rooms—and a total of 19 students. In 1883, it will award its first doctoral degree, to the first of Wundt’s advisees, Max Friedrich, on the topic of the time-course of individual psychological processes. That same year will see the publication of the first issue of the journal Philosophische Studien, the first journal of experimental psychology, established—fittingly—by none other than Wundt. From that point on, the future of the discipline will be assured: psychology will survive, and perhaps even flourish, with the dawn of the new century. It will not be just another experiment gone wrong. That, at least, is the most straightforward story. It’s difficult to pinpoint a date for the birth of Psychology as such. That 1879 laboratory is but one contender, and Wundt, but one possible father. But just think of how many paved the way for Wundt’s achievements. Is it fair to call him the start, or is he rather more of a point of coalescence (if that)? And how far back must we go, if we’re to be really fair? © 2012 Scientific American
by Michael Balter Many children (and adults) have heard Aesop's fable about the crow and the pitcher. A thirsty crow comes across a pitcher partly filled with water but can't reach the water with his beak. So he keeps dropping pebbles into the pitcher until the water level rises high enough. A new study finds that both young children and members of the crow family are good at solving this problem, but children appear to learn it in a very different way from birds. Recent studies, particularly ones conducted by Nicola Clayton's experimental psychology group at the University of Cambridge in the United Kingdom, have shown that members of the crow family are no birdbrains when it comes to cognitive abilities. They can make and use tools, plan for the future, and possibly even figure out what other birds are thinking, although that last claim is currently being debated. A few years ago, two members of Clayton's group showed that rooks can learn to drop stones into a water-filled tube to get at a worm floating on the surface. And last year, a team led by Clayton's graduate student Lucy Cheke reported similar experiments with Eurasian jays: Using three different experimental setups, Cheke and her colleagues found that the jays could solve the puzzle as long as the basic mechanism responsible for raising the water level was clear to the birds. To explore how learning in children might differ from that in rooks, jays, and other members of the highly intelligent crow family, Cheke teamed up with a fellow Clayton lab member, psychologist Elsa Loissel, to try the same three experiments on local schoolchildren aged 4 to 10 years. Eighty children were recruited for the experiments, which took place at their school with the permission of their parents. © 2010 American Association for the Advancement of Science
by Elizabeth Pennisi OTTAWA—With big brains comes big intelligence, or so the hypothesis goes. But there may be trade-offs as well. Humans and other creatures with large brains relative to their body size tend to have smaller guts and possibly fewer offspring. Scientists have debated for decades whether the two phenomena are related. Now a team of researchers says that they are—and that big brains do indeed make us smart. The finding comes thanks to an unusual experiment reported here yesterday at the Evolution Ottawa evolutionary biology meeting, in which scientists shrank and grew the brains of guppies over several generations. "This is a real experimental result," says David Reznick, an evolutionary biologist at the University of California, Riverside, who was not involved in the study. "The earlier results were just correlations." Researchers first began to gather evidence that big brains were advantageous after 19th century U.S. biologist Hermon Bumpus examined the brains of sparrows, some of which had succumbed in a blizzard and some of which had survived. The survivors had relatively larger brains. More recently, evolutionary biologist Alexei Maklakov from Uppsala University in Sweden found evidence that songbirds that colonize cities tend to have larger brains relative to their body size than species still confined to the countryside. The challenge of urban life might require bigger brains, he and his colleagues concluded last year in Biology Letters. Yet in humans and in certain electric fish, larger brain size seems to have trade-offs: smaller guts and fewer offspring. That's led some scientists to suggest there are constraints on how big brains can become because they are expensive to build and maintain. © 2010 American Association for the Advancement of Science.
Meredith Wadman Loretta, Ricky, Tiffany and Torian lead increasingly quiet lives, munching peppers and plums, perching and swinging in their 16-cubic-metre glass enclosures. They are the last four chimpanzees at Bioqual, a contract firm in Rockville, Maryland, that since 1986 has housed young chimpanzees for use by the nearby National Institutes of Health (NIH). Now an animal-advocacy group is demanding that the animals' roles as research subjects be brought to an end. Researchers at the NIH’s National Institute of Allergy and Infectious Diseases (NIAID) and the Food and Drug Administration have used the juvenile chimpanzees to study hepatitis C and malaria, as well as other causes of human infection, such as respiratory syncytial virus and norovirus. But now the NIH’s demand for ready access to chimpanzees is on the wane as the scientists who relied on them retire and social and political pressures against their use grow. The four remaining chimps are set to be returned soon to their owner, the New Iberia Research Center (NIRC) near Lafayette, Louisiana. “Much of what I have done over the past years has been research in chimps,” says Robert Purcell, 76, who heads the hepatitis viruses section at the NIAID’s Laboratory of Infectious Diseases. “It’s just a good time now [to retire] as the chimps are essentially no longer available.” Last December, a report from the US Institute of Medicine concluded that most chimpanzee research was scientifically unnecessary and recommended that the NIH sharply curtail its support. © 2012 Nature Publishing Group
By Jason G. Goldman Yogi Bear always claimed that he was smarter than the average bear, but the average bear appears to be smarter than once thought. Psychologists Jennifer Vonk of Oakland University and Michael J. Beran of Georgia State University have taken a testing methodology commonly used for primates and shown not only that the methodology can be more widely used, but also that bears can distinguish among differing numerosities. Numerical cognition is perhaps the best understood of the core building blocks of the mind. Decades of research have provided evidence for the numerical abilities of gorillas, chimpanzees, rhesus, capuchin, and squirrel monkeys, lemurs, dolphins, elephants, birds, and fish. Pre-linguistic human infants share the same mental modules for representing and understanding numbers as those non-human animal species. Each of these species is able to precisely count sets of objects up to three, but after that, they can only approximate the number of items in a set. Even human adults living in cultures whose languages have not developed an explicit count list must rely on approximation rather than precision for quantities larger than three. For this reason, it is easier for infants and animals to distinguish thirty from sixty than it is to distinguish thirty from forty, since the 1:2 ratio (30:60) is smaller than the 3:4 ratio (30:40). As the ratios increase, the difference between the two sets becomes smaller, making it more difficult to discriminate between them without explicit counting. Given that species as divergent as humans and mosquitofish represent number in the same ways, subject to the same (quantity-based and ratio-based) limits and constraints, it stands to reason that the ability to distinguish between two quantities is evolutionarily ancient. © 2012 Scientific American
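The ratio limit described above is an instance of Weber's law: what matters for approximate discrimination is the ratio of the smaller to the larger quantity, not their absolute difference. A minimal sketch of the arithmetic (the helper function is illustrative, not taken from the study):

```python
def discrimination_ratio(a: int, b: int) -> float:
    """Ratio of the smaller to the larger set size; values nearer 1 are harder
    to tell apart by approximation alone."""
    small, large = sorted((a, b))
    return small / large

# 30 vs 60 (a 1:2 ratio, 0.5) is an easier discrimination than
# 30 vs 40 (a 3:4 ratio, 0.75), matching the examples in the article.
assert discrimination_ratio(30, 60) < discrimination_ratio(30, 40)
print(discrimination_ratio(30, 60), discrimination_ratio(30, 40))  # 0.5 0.75
```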
by Moheb Costandi Researchers have yet to understand how genes influence intelligence, but a new study takes a step in that direction. An international team of scientists has identified a network of genes that may boost performance on IQ tests by building and insulating connections in the brain. Intelligence runs in families, but although scientists have identified about 20 genetic variants associated with intelligence, each accounts for just 1% of the variation in IQ scores. Because the effects of these genes on the brain are so subtle, neurologist Paul Thompson of the University of California, Los Angeles, devised a new large-scale strategy for tackling the problem. In 2009, he co-founded the ENIGMA Network, an international consortium of researchers who combine brain scanning and genetic data to study brain structure and function. Earlier this year, Thompson and his colleagues reported that they had identified genetic variants associated with head size and the volume of the hippocampus, a brain structure that is crucial for learning and memory. One of these variants was also weakly associated with intelligence. Those carrying it scored on average 1.29 points better on IQ tests than others, making it one of the strongest candidate intelligence genes so far. The researchers have now used the same strategy to identify more genetic variants associated with brain structure and IQ. In the new study, they analyzed brain images and whole-genome data from 472 Australians, including 85 pairs of identical twins, 100 pairs of nonidentical twins, and their nontwin siblings. They identified 24 genetic variations within six different genes, all of which were linked to differences in the structural integrity of major brain pathways. © 2010 American Association for the Advancement of Science
By JAMES GORMAN The extremes of animal behavior can be a source of endless astonishment. Books have been written about insect sex. The antics of dogs and cats are sometimes hard to believe. And birds, those amazing birds: They build elaborate nests, learn lyrical songs, migrate impossibly long distances. But “Gifts of the Crow,” by John N. Marzluff and Tony Angell, includes a description of one behavior that even Aesop never imagined. “On Kinkazan Island in northern Japan,” the authors write, “jungle crows pick up deer feces — dry pellets of dung — and deftly wedge them in the deer’s ears.” What!? I checked the notes at the back of the book, and this account comes from another book, written in Japanese. So I can’t give any more information on this astonishing claim, other than to say that Dr. Marzluff, of the University of Washington, and Mr. Angell, an artist and observer of birds, think that the crows do it in the spirit of fun. Deer droppings, it must be said, are only one of the crows’ gifts. The authors’ real focus is on the way that crows can give us “the ephemeral and profound connection to nature that many people crave.” To that end, however, they tell some wild anecdotes and make some surprising assertions. Many of the behaviors they describe — crows drinking beer and coffee, whistling and calling dogs and presenting gifts to people who feed them — are based on personal testimony and would seem to fall into the category of anecdote rather than science. © 2012 The New York Times Company
By Gary Stix “Superwoman has been rumbled,” declared a Daily Telegraph article in 2001 that chronicled how the human brain’s inability to “multitask” undercuts the prospects for a woman to juggle career and family with any measure of success. The brain as media icon has emerged repeatedly in recent years as new imaging techniques have proliferated—and, as a symbol, it seems to confuse as much as enlighten. The steady flow of new studies that purport to reduce human nature to a series of illuminated blobs on scanner images has fostered the illusion that a nouveau biological determinism has arrived. More often than not, a “neurobiological correlate”—tying together brain activity with a behavioral attribute (love, pain, aggression)—supplies the basis for a journal publication that translates instantly into a newspaper headline. The link between blob and behavior conveys an aura of verisimilitude that often proves overly seductive to the reporter hard-pressed to fill a health or science quota. A community of neuroscience bloggers, meanwhile, has taken on the responsibility of rectifying some of these misinterpretations. A study published last week by University College London researchers—“Neuroscience in the Public Sphere”—tried to imbue this trend with more substance by quantifying and formally characterizing it. “Brain-based information possesses rhetorical power,” the investigators note. “Logically irrelevant neuroscience information [the result of the multitude of correlations that turn up] imbues an argument with authoritative, scientific credibility.” © 2012 Scientific American
By Brian Alexander Good news for all those who ever had a teacher or a parent say “If you would just apply yourself you could learn anything! You’re only using 10 percent of your brain!” All those people were wrong. If we did use only 10 percent of our brains we’d be close to dead, according to Eric Chudler, director of the Center for Sensorimotor Neural Engineering at the University of Washington, who maintains an entertaining brain science website for kids. “When recordings are made from brain EEGs, or PET scans, or any type of brain scan, there’s no part of the brain just sitting there unused,” he said. Larry Squire, a research neuroscientist with the Veterans Administration hospital in San Diego and at the University of California San Diego, pointed out that “any place the brain is damaged there is a consequence.” Damaged brains may have been where this myth originated. During the first half of the last century, a pioneering neuroscientist named Karl Lashley experimented on rodents by excising portions of their brains to see what happened. When he put these rodents in mazes they’d been trained to navigate, he found that animals with missing bits of brain often successfully navigated the mazes. This wound up being transmuted into the idea that humans must be wasting vast brain potential. With the rise of the human potential movement in the 1960s, some preached that all sorts of powers, including bending spoons and psychic abilities, were lying dormant in our heads and that all we had to do was get off our duffs and activate them. © 2012 msnbc.com
by Andy Coghlan A massive genetics study relying on fMRI brain scans and DNA samples from over 20,000 people has revealed what is claimed as the biggest effect yet of a single gene on intelligence – although the effect is small. There is little dispute that genetics accounts for a large amount of the variation in people's intelligence, but studies have consistently failed to find any single genes that have a substantial impact. Instead, researchers typically find that hundreds of genes contribute. Following a brain study on an unprecedented scale, an international collaboration has now managed to tease out a single gene that does have a measurable effect on intelligence. But the effect – although measurable – is small: the gene alters IQ by just 1.29 points. According to some researchers, that essentially proves that intelligence relies on the action of a multitude of genes after all. "It seems like the biggest single-gene impact we know of that affects IQ," says Paul Thompson of the University of California, Los Angeles, who led the collaboration of 207 researchers. "But it's not a massive effect on IQ overall," he says. The variant is in a gene called HMGA2, which has previously been linked with people's height. At the site of the relevant mutation, the IQ difference depends on a change of a single DNA "letter" from C, standing for cytosine, to T, standing for thymine. © Copyright Reed Business Information Ltd.
Chris McManus, professor of psychology and medical education at University College London, responds: If by intelligent you mean someone who performs better on IQ tests, the simple answer is no. Studies in the U.K., U.S. and Australia have revealed that left-handed people differ from right-handers by only one IQ point, which is not noteworthy. Left-handedness is, however, much more common among individuals with severe learning difficulties, such as mental retardation. A slightly higher proportion of left-handers have dyslexia or a stutter. Other problems, such as a higher rate of accidents reported in left-handers, mostly result from a world designed for the convenience of right-handers, with many tools not made for left-handed use. Although some people claim that a higher percentage of left-handers are exceptionally bright, large research studies do not support this idea. If by smarter you mean more talented in certain areas, left-handers may have an advantage. Left-handers’ brains are structured differently from right-handers’ in ways that can allow them to process language, spatial relations and emotions in more diverse and potentially creative ways. Also, a slightly larger number of left-handers than right-handers are especially gifted in music and math. A study of musicians in professional orchestras found a significantly greater proportion of talented left-handers, even among those who played instruments that seem designed for right-handers, such as violins. Similarly, studies of adolescents who took tests to assess mathematical giftedness found many more left-handers in the population. The fact that mathematicians are often musical may not be a coincidence. © 2012 Scientific American
OUR intelligence, more than any particular behaviour or anatomical feature, is what distinguishes humans from the myriad other species with which we share our planet. It is a key factor in everything from our anatomy to our technology. To ask why we are intelligent is to ask why we are human; it admits no discrete answer. But let's ask it here anyway. Why are we, alone in nature, so smart? Perhaps we are not. Maybe our anthropocentric conceit prevents us from fully appreciating the intelligence of other animals, be they ants, cephalopods or cetaceans. As Douglas Adams put it: "Man had always assumed that he was more intelligent than dolphins because he had achieved so much - the wheel, New York, wars and so on - whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man - for precisely the same reasons." So let's rephrase the question. There is a cluster of abilities that seems unique to humans: language, tool use, culture and empathy. Other animals may have rudimentary forms of these abilities, but they do not approach humans' sophistication and flexibility. Why not? Some come closer than others. German psychologists say they have identified a chimp whose mental abilities far surpass those of its peers (see "Chimp prodigy shows signs of human-like intelligence"). Intriguingly, they go on to suggest that this might be because Natasha, the simian prodigy, exhibits strong social-reasoning skills, such as learning from others. These are the same skills to which the explosive development of human intelligence is increasingly attributed. © Copyright Reed Business Information Ltd.
By Melinda Wenner Moyer Is intelligence innate, or can you boost it with effort? The way you answer that question may determine how well you learn. Those who think smarts are malleable are more likely to bounce back from their mistakes and make fewer errors in the future, according to a study published last October in Psychological Science. Researchers at Michigan State University asked 25 undergraduate students to participate in a simple, repetitive computer task: they had to press a button whenever the letters that appeared on the screen conformed to a particular pattern. When they made a mistake, which happened about 9 percent of the time, the subjects realized it almost immediately—at which point their brains produced two tiny electrical responses that the researchers recorded using electrodes. The first reaction indicates awareness that a mistake was made, whereas the second, called error positivity, is believed to represent the desire to fix that slipup. Later, the researchers asked the students whether they believed intelligence was fixed or could be learned. Although everyone slowed down after erring, those who were “growth-minded”—that is, people who considered intelligence to be pliable—showed stronger error-positivity responses than the other subjects. They subsequently made fewer mistakes, too. “Everybody says, ‘Oh, I did something wrong, I should slow down,’ but it was only the growth-minded individuals who actually did something with that information and made it better,” explains lead author Jason Moser, a clinical psychologist at Michigan State. © 2012 Scientific American
By Eric Michael Johnson Americans take their rights seriously. But there is a lot of misunderstanding about what actually constitutes a ‘right.’ Religious believers are correct that they have a right to freely express their beliefs. This right is protected under the First Amendment to the US Constitution, which prohibits Congress from making any “law respecting an establishment of religion, or prohibiting the free exercise thereof.” However, as a result, devout believers feel it is a violation of their rights when intelligent design creationism is forbidden in the classroom or when prayer during school sporting events is banned. After all, shouldn’t the First Amendment prohibit the government from interfering with this basic right? The answer is no, and the reason marks an important distinction in understanding what a right actually is. Because public schools are government-run institutions, allowing prayer during school activities or promoting religious doctrines in the classroom is a direct violation of the First Amendment. These activities infringe on the rights of those who do not share the same religious beliefs (or any at all). The key point is that rights are obligations that require governments to act in certain ways and refrain from acting in others. The First Amendment obligates the government to protect the rights of all citizens from an establishment of religion. You may have the right to freely exercise your beliefs, but that doesn’t give you the right to impose your views on others in public school. It was just this understanding of rights as obligations that governments must obey that formed the basis for a declaration of rights for cetaceans (whales and dolphins) at the annual meeting of the American Association for the Advancement of Science held in Vancouver, Canada, last month. © 2012 Scientific American
How many neurons are there in the human brain? It was a question that scientists thought they had nailed – and the answer was 100bn (give or take). If you went looking you would find that figure repeated widely in the neuroscience literature and beyond. But when a researcher in Brazil called Dr Suzana Herculano-Houzel started digging, she discovered that no one in the field could actually remember where the 100bn figure had come from – let alone how it had been arrived at. So she set about discovering the true figure (HT to the excellent Nature neuroscience podcast NeuroPod). This involved a remarkable – and to some I suspect unsettling – piece of research. Her team took the brains of four adult men, aged 50, 51, 54 and 71, and turned them into what she describes as "brain soup". All of the men had died of non-neurological diseases and had donated their brains for research. "It took me a couple of months to make peace with this idea that I was going to take somebody's brain or an animal's brain and turn it into soup," she told Nature. "But the thing is we have been learning so much by this method we've been getting numbers that people had not been able to get … It's really just one more method that's not any worse than just chopping your brain into little pieces." She told me that so far, she has only looked at four brains, all of them from men. © 2012 Guardian News and Media Limited