Chapter 1. An Introduction to Brain and Behavior




By Gail Sullivan Chemicals found in food and common household products have been linked to lower IQ in kids exposed to high levels during pregnancy. Previous research linked higher exposure to chemicals called "phthalates" to poor mental and motor development in preschoolers. This study was said to be the first to report a link between prenatal exposure to the chemicals and childhood development. Researchers from Columbia University’s Mailman School of Public Health studied exposure to five types of phthalates, which are sometimes referred to as “hormone disruptors” or “endocrine disruptors.” Among these, di-n-butyl phthalate (DnBP) is used in shower curtains, raincoats, hairspray, food wraps, vinyl and pill coating, among other things — but according to the EPA, the largest source of exposure may be seafood. Di-isobutyl phthalate (DiBP) and Butylbenzyl phthalate (BBzP) are added to plastics to make them flexible. These chemicals may also be used in makeup, nail polish, lacquer and explosives. The researchers linked prenatal exposure to phthalates to a more than six-point drop in IQ score compared with kids with less exposure. The study, “Persistent Associations between Maternal Prenatal Exposure to Phthalates on Child IQ at Age 7 Years," was published Wednesday in the journal PLOS One. "The magnitude of these IQ differences is troubling," one of the study’s authors, Robin Whyatt, said in a press release. "A six- or seven-point decline in IQ may have substantial consequences for academic achievement and occupational potential."

Keyword: Intelligence; Neurotoxins
Link ID: 20413 - Posted: 12.13.2014

By Anna North The idea that poverty can change the brain has gotten significant attention recently, and not just from those lay readers (a minority, according to recent research) who spend a lot of time thinking about neuroscience. Policy makers and others have begun to apply neuroscientific principles to their thinking about poverty — and some say this could end up harming poor people rather than helping. At The Conversation, the sociologist Susan Sered takes issue with “news reports with headlines like this one: ‘Can Brain Science Help Lift People Out Of Poverty?’” She’s referring to a June story by Rachel Zimmerman at WBUR, about a nonprofit called Crittenton Women’s Union that aims to use neuroscience to help get people out of poverty. Elisabeth Babcock, Crittenton’s chief executive, tells Ms. Zimmerman: “What the new brain science says is that the stresses created by living in poverty often work against us, make it harder for our brains to find the best solutions to our problems. This is a part of the reason why poverty is so ‘sticky.’” And, she adds: “If we’ve been raised in poverty under all this stress, our executive functioning wiring, the actual neurology of our brains, is built differently than if we’re not raised in poverty. It is built to react quickly to danger and threats and not built as much to plan or execute strategies for how we want things to be in the future because the future is so uncertain and planning is so pointless that this wiring isn’t as called for.” Dr. Sered, however, says that applying neuroscience to problems like poverty can sometimes lead to trouble: “Studies showing that trauma and poverty change people’s brains can too easily be read as scientific proof that poor people (albeit through no fault of their own) have inferior brains or that women who have been raped are now brain-damaged.” © 2014 The New York Times Company

Keyword: Development of the Brain; Brain imaging
Link ID: 20358 - Posted: 11.25.2014

By Neuroskeptic An attempt to replicate the results of some recent neuroscience papers that claimed to find correlations between human brain structure and behavior has drawn a blank. The new paper is by University of Amsterdam researchers Wouter Boekel and colleagues and it’s in press now at Cortex. You can download it here from the webpage of one of the authors, Eric-Jan Wagenmakers. Neuroskeptic readers will know Wagenmakers as a critic of statistical fallacies in psychology and a leading advocate of preregistration, which is something I never tire of promoting either. Boekel et al. attempted to replicate five different papers which, together, reported 17 distinct positive results in the form of structural brain-behavior (‘SBB’) correlations. An SBB correlation is an association between the size (usually) of a particular brain area and a particular behavioral trait. For instance, one of the claims was that the amount of grey matter in the amygdala is correlated with the number of Facebook friends you have. To attempt to reproduce these 17 findings, Boekel et al. took 36 students whose brains were scanned with two methods, structural MRI and DWI. The students then completed a set of questionnaires and psychological tests, identical to ones used in the five papers that were up for replication. The methods and statistical analyses were fully preregistered (back in June 2012); Boekel et al. therefore had no scope for ‘fishing’ for positive (or negative) results by tinkering with the methodology. So what did they find? Nothing much. None of the 17 brain-behavior correlations were significant in the replication sample.
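To see why a replication sample of 36 is a demanding test bed for these claims, it helps to work out how large a correlation must be before it can reach conventional significance at that sample size. A rough sketch of the arithmetic (not from the paper itself; it uses the standard Fisher z-transform normal approximation rather than the exact t-based test):

```python
import math

def critical_r(n, z_crit=1.96):
    """Smallest |r| reaching two-tailed p < .05 in a sample of size n,
    via the Fisher z-transform normal approximation."""
    z = z_crit / math.sqrt(n - 3)  # SE of Fisher's z is 1 / sqrt(n - 3)
    return math.tanh(z)            # back-transform from z to r

# With the replication sample of 36 students:
print(round(critical_r(36), 2))  # correlations below roughly 0.33 cannot reach p < .05
```

In other words, only fairly sizable brain-behavior correlations could have come out significant here at all, which is one reason preregistered replications of this kind tend to emphasize effect-size estimates alongside significance tests.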

Keyword: Brain imaging
Link ID: 20330 - Posted: 11.20.2014

By DENISE GRADY An electrical device glued to the scalp can slow cancer growth and prolong survival in people with the deadliest type of brain tumor, researchers reported on Saturday. The device is not a cure and, on average, adds only a few months of life when used along with the standard regimen of surgery, radiation and chemotherapy. Some doctors have questioned its usefulness. But scientists conducting a new study said the device was the first therapy in a decade to extend life in people with glioblastomas, brain tumors in which median survival is 15 months even with the best treatment. The disease affects about 10,000 people a year in the United States and is what killed Senator Edward M. Kennedy in 2009. It is so aggressive and hard to treat that even seemingly small gains in survival are considered important. The new findings mean the device should become part of the standard care offered to all patients with newly diagnosed glioblastomas, the researchers conducting the study said. The equipment consists of four pads carrying transducer arrays that patients glue to their scalps and change every few days. Wires lead to a six-pound operating system and power supply. Except for some scalp irritation, the device has no side effects, the study found. But patients have to wear it more or less around the clock and must keep their heads shaved. It generates alternating, low-intensity electrical fields — so-called tumor-treating fields — that can halt tumor growth by stopping cells from dividing, which leads to their death. The researchers said the technology might also help treat other cancers, and would be tested in mesothelioma and cancers of the lung, ovary, breast and pancreas. © 2014 The New York Times Company

Keyword: Miscellaneous
Link ID: 20319 - Posted: 11.17.2014

By Anna North Do you devour the latest neuroscience news, eager to learn more about how your brain works? Or do you click past it to something else, something more applicable to your life? If you’re in the latter camp, you may be in the majority. A new study suggests that many people just don’t pay that much attention to brain science, and its findings may raise a question: Is “neuro-literacy” really necessary? At Wired, Christian Jarrett writes, “It feels to me like interest in the brain has exploded.” He cites the prevalence of the word “brain” in headlines as well as “the emergence of new fields such as neuroleadership, neuroaesthetics and neuro-law.” But as a neuroscience writer, he notes, he may be “heavily biased” — and in fact, some research “suggests neuroscience has yet to make an impact on most people’s everyday lives.” For instance, he reports, Cliodhna O’Connor and Helene Joffe recently interviewed 48 Londoners about brain science for a paper published in the journal Science Communication. Anyone who thinks we live in an era of neuro-fixation may find the results a bit of a shock. Said one participant in the research: “Science of the brain? I haven’t a clue. Nothing at all. I’d be lying if I said there was.” Another: “Brain research I understand, an image of, I don’t know, a monkey or a dog with like the top of their head off and electrodes and stuff on their brain.” And another: “I might have seen it on the news or something, you know, some report of some description. But because they probably mentioned the word ‘science,’ or ‘We’re going to go now to our science correspondent Mr. 
Lala,’ that’s probably when I go, okay, it’s time for me to make a cup of tea.” According to the study authors, 71 percent of respondents “took pains to convey that neuroscience was not salient in their day-to-day life: it was ‘just not really on my radar.’” Some respondents associated brain research with scientists in white coats or with science classes (asked to free-associate about the term “brain research,” one respondent drew a mean-faced stick figure labeled “cross teacher”). And 42 percent saw science as something alien to them, removed from their own lives. © 2014 The New York Times Company

Keyword: Miscellaneous
Link ID: 20315 - Posted: 11.15.2014

By Agata Blaszczak-Boxe When it comes to lab animal welfare, rats and mice aren’t the only creatures of concern. In 2013, the European Union mandated that cephalopods—a group that includes octopuses and squid—be treated humanely when used for scientific research. In response, researchers have figured out how to anesthetize octopuses so the animals do not feel pain while being transported and handled during scientific experiments, for instance those examining their behavior, physiology, and neurobiology, as well as their use in aquaculture. In a study published online this month in the Journal of Aquatic Animal Health, researchers report immersing 10 specimens of the common octopus (Octopus vulgaris) in seawater with isoflurane, an anesthetic used in humans. They gradually increased the concentration of the substance from 0.5% to 2%. The investigators found that the animals lost the ability to respond to touch and their color paled, a sign that the brain’s normal control of color regulation had been lost, and concluded that the animals were indeed anesthetized. The octopuses then recovered from the anesthesia within 40 to 60 minutes of being immersed in fresh seawater without the anesthetic: they were able to respond to touch again and their color returned to normal. The researchers captured the anesthetization process on video, shown above. © 2014 American Association for the Advancement of Science.

Keyword: Animal Rights
Link ID: 20311 - Posted: 11.15.2014

By NICK BILTON Ebola sounds like the stuff of nightmares. Bird flu and SARS also send shivers down my spine. But I’ll tell you what scares me most: artificial intelligence. The first three, with enough resources, humans can stop. The last, which humans are creating, could soon become unstoppable. Before we get into what could possibly go wrong, let me first explain what artificial intelligence is. Actually, skip that. I’ll let someone else explain it: Grab an iPhone and ask Siri about the weather or stocks. Or tell her “I’m drunk.” Her answers are artificially intelligent. Right now these artificially intelligent machines are pretty cute and innocent, but as they are given more power in society, these machines may not take long to spiral out of control. In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions in damage. Or a driverless car freezes on the highway because a software update goes awry. But the upheavals can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid the body of cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease. Nick Bostrom, author of the book “Superintelligence,” lays out a number of petrifying doomsday settings. One envisions self-replicating nanobots, which are microscopic robots designed to make copies of themselves. In a positive situation, these bots could fight diseases in the human body or eat radioactive material on the planet. But, Mr. Bostrom says, a “person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth.” © 2014 The New York Times Company

Keyword: Robotics; Intelligence
Link ID: 20283 - Posted: 11.06.2014

By Christian Jarrett It feels to me like interest in the brain has exploded. I’ve seen huge investments in brain science by the USA and Europe (the BRAIN Initiative and the Human Brain Project), I’ve read about the rise in media coverage of neuroscience, and above all, I’ve noticed how journalists and bloggers now often frame stories as being about the brain as opposed to the person. Look at these recent headlines: “Why your brain loves storytelling” (Harvard Business Review); “How Netflix is changing our brains” (Forbes); and “Why your brain wants to help one child in need — but not millions” (NPR). There are hundreds more, and in each case, the headline could be about “you” but the writer chooses to make it about “your brain”. Consider too the emergence of new fields such as neuroleadership, neuroaesthetics and neuro-law. It was only a matter of time before someone announced that we’re in the midst of a neurorevolution. In 2009 Zach Lynch did that, publishing The Neuro Revolution: How Brain Science is Changing Our World. Having said all that, I’m conscious that my own perspective is heavily biased. I earn my living writing about neuroscience and psychology. I’m vigilant for all things brain. Maybe the research investment and brain-obsessed media headlines are largely irrelevant to the general public. I looked into this question recently and was surprised by what I found. There’s not a lot of research but that which exists (such as this, on the teen brain) suggests neuroscience has yet to make an impact on most people’s everyday lives. Indeed, I made Myth #20 in my new book Great Myths of the Brain “Neuroscience is transforming human self-understanding”. WIRED.com © 2014 Condé Nast.

Keyword: Attention
Link ID: 20282 - Posted: 11.06.2014

By Sarah C. P. Williams If you sailed through school with high grades and perfect test scores, you probably did it with traits beyond sheer smarts. A new study of more than 6000 pairs of twins finds that academic achievement is influenced by genes affecting motivation, personality, confidence, and dozens of other traits, in addition to those that shape intelligence. The results may lead to new ways to improve childhood education. “I think this is going to end up being a really classic paper in the literature,” says psychologist Lee Thompson of Case Western Reserve University in Cleveland, Ohio, who has studied the genetics of cognitive skills and who was not involved in the work. “It’s a really firm foundation from which we can build on.” Researchers have previously shown that a person’s IQ is highly influenced by genetic factors, and have even identified certain genes that play a role. They’ve also shown that performance in school has genetic factors. But it’s been unclear whether the same genes that influence IQ also influence grades and test scores. In the new study, researchers at King’s College London turned to a cohort of more than 11,000 pairs of both identical and nonidentical twins born in the United Kingdom between 1994 and 1996. Rather than focus solely on IQ, as many previous studies had, the scientists analyzed 83 different traits, which had been reported on questionnaires that the twins, at age 16, and their parents filled out. The traits ranged from measures of health and overall happiness to ratings of how much each teen liked school and how hard they worked. © 2014 American Association for the Advancement of Science
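The twin design behind such estimates compares identical (MZ) twins, who share essentially all their genes, with fraternal (DZ) twins, who share about half. A minimal sketch of the classic Falconer decomposition that this logic rests on, using made-up correlations for illustration (these are not figures from the King's College study):

```python
def falconer(r_mz, r_dz):
    """Split trait variance using twin correlations (Falconer's formula)."""
    h2 = 2 * (r_mz - r_dz)  # additive genetic share (heritability)
    c2 = 2 * r_dz - r_mz    # shared-environment share
    e2 = 1 - r_mz           # non-shared environment plus measurement error
    return h2, c2, e2

# Hypothetical twin correlations for an exam-score trait:
h2, c2, e2 = falconer(0.74, 0.45)
print(round(h2, 2), round(c2, 2), round(e2, 2))  # 0.58 0.16 0.26
```

The larger the gap between the MZ and DZ correlations, the larger the estimated genetic share; analyzing 83 traits simply applies this comparison trait by trait.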

Keyword: Genes & Behavior; Intelligence
Link ID: 20170 - Posted: 10.07.2014

Fiona Fox Last week the UK Home Office published the findings of its investigations into allegations of animal suffering, made after undercover infiltrations at two animal research facilities. You will not find coverage of any of the conclusions in the national news media. Instead any search for media coverage will unearth the original infiltration stories under headlines such as: “Horrific video shows distress of puppies and kittens waiting to be dissected at animal testing lab”; “Graphic content: horrifying video shows puppies and kittens tested at UK laboratory”; and “Rats beheaded with scissors and kept in ‘pitiful state’.” These “shocking exposés”, brought to the newspapers by the animal rights group BUAV, include distressing images, links to videos that are difficult to watch, and quote allegedly secretly recorded researchers saying terrible things about the animals in their care. The newspapers seem in no doubt that the allegations they are carrying add up to “appalling suffering on a very large scale”, and appear to be proud of their role in bringing the abuses to light: “The Sunday Express today publishes details of an undercover investigation … that shines a light on the secret world of vivisection laboratories.” You may well see these articles as reassuring evidence that we still have public interest journalism in the UK. These animal rights supporters have done exactly what investigative journalists used to do in a time when newspapers had enough money to shine a light on the darker corners of our institutions and uncover hidden abuses. And you would be right, but for one thing: we now know that the stories were largely untrue. © 2014 Guardian News and Media Limited

Keyword: Animal Rights
Link ID: 20165 - Posted: 10.07.2014

By Gretchen Vogel Research on how the brain knows where it is has bagged the 2014 Nobel Prize in Physiology or Medicine, the Nobel Committee has announced from Stockholm. One half of the prize goes to John O'Keefe, director of the Sainsbury Wellcome Centre in Neural Circuits and Behaviour at University College London. The other is for a husband-wife couple: May-Britt Moser, who is director of the Centre for Neural Computation in Trondheim, and Edvard Moser, director of the Kavli Institute for Systems Neuroscience in Trondheim. "In 1971, John O´Keefe discovered the first component of this positioning system," the Nobel Committee says in a statement that was just released. "He found that a type of nerve cell in an area of the brain called the hippocampus was always activated when a rat was at a certain place in a room. Other nerve cells were activated when the rat was at other places. O´Keefe concluded that these “place cells” formed a map of the room." "More than three decades later, in 2005, May‐Britt and Edvard Moser discovered another key component of the brain’s positioning system," the statement goes on to explain. "They identified another type of nerve cell, which they called “grid cells”, that generate a coordinate system and allow for precise positioning and pathfinding. Their subsequent research showed how place and grid cells make it possible to determine position and to navigate." © 2014 American Association for the Advancement of Science

Keyword: Learning & Memory
Link ID: 20163 - Posted: 10.06.2014

Alison Abbott The fact that Edvard and May-Britt Moser have collaborated for 30 years — and been married for 28 — has done nothing to dull their passion for the brain. They talk about it at breakfast. They discuss its finer points at their morning lab meeting. And at a local restaurant on a recent summer evening, they are still deep into a back-and-forth about how their own brains know where they are and will guide them home. “Just to walk there, we have to understand where we are now, where we want to go, when to turn and when to stop,” says May-Britt. “It's incredible that we are not permanently lost.” If anyone knows how we navigate home, it is the Mosers. They shot to fame in 2005 with their discovery of grid cells deep in the brains of rats. These intriguing cells, which are also present in humans, work much like the Global Positioning System, allowing animals to understand their location. The Mosers have since carved out a niche studying how grid cells interact with other specialized neurons to form what may be a complete navigation system that tells animals where they are going and where they have been. Studies of grid cells could help to explain how memories are formed, and why recalling events so often involves re-envisioning a place, such as a room, street or landscape. While pursuing their studies, the two scientists have become a phenomenon. Tall and good-looking, they operate like a single brain in two athletic bodies in their generously funded lab in Trondheim, Norway — a remote corner of northern Europe just 350 kilometres south of the Arctic Circle. They publish together and receive prizes as a single unit — most recently, the Nobel Prize in Physiology or Medicine, which they won this week with their former supervisor, neuroscientist John O’Keefe at University College London. 
In 2007, while still only in their mid-40s, they won a competition by the Kavli Foundation of Oxnard, California, to build and direct one of only 17 Kavli Institutes around the world. The Mosers are now minor celebrities in their home country, and their institute has become a magnet for other big thinkers in neuroscience. “It is definitely intellectually stimulating to be around them,” says neurobiologist Nachum Ulanovsky from the Weizmann Institute of Science in Rehovot, Israel, who visited the Trondheim institute for the first time in September. © 2014 Nature Publishing Group

Keyword: Learning & Memory
Link ID: 20162 - Posted: 10.06.2014

By Glendon Mellow University and scientific research center programs are increasingly finding it useful to employ artists and illustrators to help them see things in a new way. Few works of art from the Renaissance have been studied and pored over as meticulously as Michelangelo’s frescoes in the Sistine Chapel. Yet, the Master may still have some surprises hidden for an illustrator-scientist. Biomedical Illustrator Ian Suk (BSc, BMC) and Neurological Surgeon Rafael Tamargo (MD, FACS), both of Johns Hopkins, proposed in a 2010 article in the journal Neurosurgery that the panel above, Dividing Light from the Darkness by Michelangelo, actually depicts the brain stem of God. Using a series of comparisons of the unusual shadows and contours on God’s neck to photos of actual brain stems, the evidence seems completely overwhelming that Michelangelo used his own limited anatomical studies to depict the brain stem. It’s unlikely even the educated members of Michelangelo’s audience would recognize it. I encourage you to look over the paper here, and enlarge the images in the slideshow: Suk and Tamargo are utterly convincing. Unlike R. Douglas Fields in this previous blog post from 2010 on Scientific American, I don’t think there’s room to believe this is a case of pareidolia. I imagine the thrill of feeling Michelangelo communicating directly with the authors across the centuries was immense. © 2014 Scientific American,

Keyword: Brain imaging
Link ID: 20067 - Posted: 09.12.2014

By Sarah Zielinski The marshmallow test is pretty simple: Give a child a treat, such as a marshmallow, and promise that if he doesn’t eat it right away, he’ll soon be rewarded with a second one. The experiment was devised by Stanford psychologist Walter Mischel in the late 1960s as a measure of self-control. When he later checked back in with kids he had tested as preschoolers, those who had been able to wait for the second treat appeared to be doing better in life. They tended to have fewer behavioral or drug-abuse problems, for example, than those who had given in to temptation. Most attempts to perform this experiment on animals haven’t worked out so well. Many animals haven’t been willing to wait at all. Dogs, primates, and some birds have done a bit better, managing to wait at least a couple of minutes before eating the first treat. The best any animal has managed has been 10 minutes—a record set earlier this year by a couple of crows. The African grey parrot is a species known for its intelligence. Animal psychologist Irene Pepperberg, now at Harvard, spent 30 years studying one of these parrots, Alex, and showed that the bird had an extraordinary vocabulary and capacity for learning. Alex even learned to add numerals before his death in 2007. Could an African grey pass the marshmallow test? Adrienne E. Koepke of Hunter College and Suzanne L. Gray of Harvard University tried the experiment on Pepperberg’s current star African grey, a 19-year-old named Griffin. In their test, a researcher took two treats, one of which Griffin liked slightly better, and put them into cups. Then she placed the cup with the less preferred food in front of Griffin and told him, “wait.” She took the other cup and either stood a few feet away or left the room. After a random amount of time, from 10 seconds to 15 minutes, she would return. If the food was still in the cup, Griffin got the nut he was waiting for. 
Koepke and colleagues presented their findings last month at the Animal Behavior Society meeting at Princeton. © 2014 The Slate Group LLC.

Keyword: Intelligence; Evolution
Link ID: 20061 - Posted: 09.11.2014

Ewen Callaway Researchers found 69 genes that correlate with higher educational attainment — and three of those also appear to have a direct link to slightly better cognitive abilities. Scientists looking for the genes underlying intelligence are in for a slog. One of the largest, most rigorous genetic studies of human cognition1 has turned up inconclusive findings, and experts concede that they will probably need to scour the genomes of more than 1 million people to confidently identify even a small genetic influence on intelligence and other behavioural traits. Studies of twins have repeatedly confirmed a genetic basis for intelligence, personality and other aspects of behaviour. But efforts to link IQ to specific variations in DNA have led to a slew of irreproducible results. Critics have alleged that some of these studies' methods were marred by wishful thinking and shoddy statistics. A sobering editorial in the January 2012 issue of Behavior Genetics2 declared that “it now seems likely that many of the published findings of the last decade are wrong or misleading and have not contributed to real advances in knowledge”. In 2011, an international collaboration of researchers launched an effort to bring more rigour to studies of how genes contribute to behaviour. The group, called the Social Sciences Genetic Association Consortium, aimed to do studies using practices borrowed from the medical genetics community, which emphasizes large numbers of participants, rigorous statistics and reproducibility. In a 2013 study3 comparing the genomes of more than 126,000 people, the group identified three gene variants associated with how many years of schooling a person had gone through or whether they had attended university. But the effect of these variants was small — each variant correlated with roughly one additional month of schooling in people who had it compared with people who did not. © 2014 Nature Publishing Group

Keyword: Intelligence; Genes & Behavior
Link ID: 20050 - Posted: 09.09.2014

By Jeffrey Mervis Embattled U.K. biomedical researchers are drawing some comfort from a new survey showing that a sizable majority of the public continues to support the use of animals in research. But there’s another twist that should interest social scientists as well: The government’s decision this year to field two almost identical surveys on the topic offers fresh evidence that the way you ask a question affects how people answer it. Since 1999, the U.K. Department for Business, Innovation & Skills (BIS) has been funding a survey of 1000 adults about their attitudes toward animal experimentation. But this year the government asked the London-based pollsters, Ipsos MORI, to carry out a new survey, changing the wording of several questions. (The company also collected additional information, including public attitudes toward different animal species and current rules regarding their use.) For example, the phrase “animal experimentation” was replaced by “animal research” because the latter is “less inflammatory,” notes Ipsos MORI Research Manager Jerry Latter. In addition, says Emma Brown, a BIS spokeswoman, the word research “more accurately reflects the range of procedures that animals may be involved in, including the breeding of genetically modified animals.” But government officials also value the information about long-term trends in public attitudes that can be gleaned from the current survey. So they told the company to conduct one last round—the 10th in the series—at the same time they deployed the new survey. Each survey went to a representative, but different, sample of U.K. adults. © 2014 American Association for the Advancement of Science

Keyword: Animal Rights
Link ID: 20041 - Posted: 09.06.2014

One of the best things about being a neuroscientist used to be the aura of mystery around it. It was once so mysterious that some people didn’t even know it was a thing. When I first went to university and people asked what I studied, they thought I was saying I was a “Euroscientist”, which is presumably someone who studies the science of Europe. I’d get weird questions such as “what do you think of Belgium?” and I’d have to admit that, in all honesty, I never think of Belgium. That’s how mysterious neuroscience was, once. Of course, you could say this confusion was due to my dense Welsh accent, or the fact that I only had the confidence to talk to strangers after consuming a fair amount of alcohol, but I prefer to go with the mystery. It’s not like that any more. Neuroscience is “mainstream” now, to the point where the press coverage of it can be studied extensively. When there’s such a thing as Neuromarketing (well, there isn’t actually such a thing, but there’s a whole industry that would claim otherwise), it’s impossible to maintain that neuroscience is “cool” or “edgy”. It’s a bad time for us neurohipsters (which are the same as regular hipsters, except the designer beards are on the frontal lobes rather than the jaw-line). One way that we professional neuroscientists could maintain our superiority was by correcting misconceptions about the brain, but lately even that avenue looks to be closing to us. The recent film Lucy is based on the most classic brain misconception: that we only use 10% of our brain. But it’s taken a considerable amount of flak for this already, suggesting that many people are wise to this myth. We also saw the recent release of Susan Greenfield’s new book Mind Change, all about how technology is changing (damaging?) our brains. This is a worryingly evidence-free but very common claim by Greenfield. Depressingly common, as this blog has pointed out many times. But now even the non-neuroscientist reviewers aren’t buying her claims. 
© 2014 Guardian News and Media Limited

Keyword: Miscellaneous
Link ID: 20011 - Posted: 08.30.2014

Sam McDougle By now, perhaps you’ve seen the trailer for the new sci-fi thriller Lucy. It starts with a flurry of stylized special effects and Scarlett Johansson serving up a barrage of bad-guy beatings. Then comes Morgan Freeman, playing a professorial neuroscientist with the obligatory brown blazer, to deliver the film’s familiar premise to a full lecture hall: “It is estimated most human beings only use 10 percent of the brain’s capacity. Imagine if we could access 100 percent. Interesting things begin to happen.” Johansson as Lucy, who has been kidnapped and implanted with mysterious drugs, becomes a test case for those interesting things, which seem to include even more impressive beatings and apparently some kind of Matrix-esque time-warping skills. Of course, the idea that “you only use 10 percent of your brain” is, indeed, 100 percent bogus. Why has this myth persisted for so long, and when is it finally going to die? Unfortunately, not any time soon. A survey last year by The Michael J. Fox Foundation for Parkinson's Research found that 65 percent of Americans believe the myth is true, 5 percent more than those who believe in evolution. Even Mythbusters, which declared the statistic a myth a few years ago, further muddied the waters: The show merely increased the erroneous 10 percent figure and implied, incorrectly, that people use 35 percent of their brains. The idea that swaths of the brain are stagnant pudding while one section does all the work is silly. Like most legends, the origin of this fiction is unclear, though there are some clues. © 2014 by The Atlantic Monthly Group

Keyword: Brain imaging
Link ID: 19848 - Posted: 07.17.2014

By GARY MARCUS ARE we ever going to figure out how the brain works? After decades of research, diseases like schizophrenia and Alzheimer’s still resist treatment. Despite countless investigations into serotonin and other neurotransmitters, there is still no method to cure clinical depression. And for all the excitement about brain-imaging techniques, the limitations of fMRI studies are, as evidenced by popular books like “Brainwashed” and “Neuromania,” by now well known. In spite of the many remarkable advances in neuroscience, you might get the sinking feeling that we are not always going about brain science in the best possible way. This feeling was given prominent public expression on Monday, when hundreds of neuroscientists from all over the world issued an indignant open letter to the European Commission, which is funding the Human Brain Project, an approximately $1.6 billion effort that aims to build a complete computer simulation of the human brain. The letter charges that the project is “overly narrow” in approach and not “well conceived.” While no neuroscientist doubts that a faithful-to-life brain simulation would ultimately be tremendously useful, some have called the project “radically premature.” The controversy serves as a reminder that we scientists are not only far from a comprehensive explanation of how the brain works; we’re also not even in agreement about the best way to study it, or what questions we should be asking. The European Commission, like the Obama administration, which is promoting a large-scale research enterprise called the Brain Initiative, is investing heavily in neuroscience, and rightly so. (A set of new tools such as optogenetics, which allows neuroscientists to control the activity of individual neurons, gives considerable reason for optimism.) But neither project has grappled sufficiently with a critical question that is too often ignored in the field: What would a good theory of the brain actually look like? 
Different kinds of sciences call for different kinds of theories. Physicists, for example, are searching for a “grand unified theory” that integrates gravity, electromagnetism and the strong and weak nuclear forces into a neat package of equations. Whether or not they will get there, they have made considerable progress, in part because they know what they are looking for. © 2014 The New York Times Company

Keyword: Brain imaging
Link ID: 19818 - Posted: 07.12.2014

By ALEX HALBERSTADT Dr. Vint Virga likes to arrive at a zoo several hours before it opens, when the sun is still in the trees and the lanes are quiet and the trash cans empty. Many of the animals haven’t yet slipped into their afternoon malaise, when they retreat, appearing to wait out the heat and the visitors and not do much of anything. Virga likes to creep to the edge of their enclosures and watch. He chooses a spot and tries not to vary it, he says, “to give the animals a sense of control.” Sometimes he watches an animal for hours, hardly moving. That’s because what to an average zoo visitor looks like frolicking or restlessness or even boredom looks to Virga like a lot more — looks, in fact, like a veritable Russian novel of truculence, joy, sociability, horniness, ire, protectiveness, deference, melancholy and even humor. The ability to interpret animal behavior, Virga says, is a function of temperament, curiosity and, mostly, decades of practice. It is not, it turns out, especially easy. Do you know what it means when an elephant lowers her head and folds her trunk underneath it? Or when a zebra wuffles, softly blowing air between her lips; or when a colobus monkey snuffles, sounding a little like a hog rooting in the mud; or when a red fox screams, sounding disconcertingly like an infant; or when red fox kits chatter at one another; or when an African wild dog licks and nibbles at the lips of another; or when a California sea lion resting on the water’s surface stretches a fore flipper and one or both rear flippers in the air, like a synchronized swimmer; or when a hippopotamus “dung showers” by defecating while rapidly flapping its tail? Virga knows, because it is his job to know. He is a behaviorist, and what he does, expressed plainly, is see into the inner lives of animals. The profession is an odd one: It is largely unregulated, and declaring that you are an expert is sometimes enough to be taken for one.
Most behaviorists are former animal trainers; some come from other fields entirely. Virga happens to be a veterinarian, very likely the only one in the country whose full-time job is tending to the psychological welfare of animals in captivity. © 2014 The New York Times Company

Keyword: Animal Rights; Attention
Link ID: 19796 - Posted: 07.04.2014