Chapter 14. Attention and Consciousness
By AMY HARMON Some neuroscientists believe it may be possible, within a century or so, for our minds to continue to function after death — in a computer or some other kind of simulation. Others say it’s theoretically impossible, or impossibly far off in the future. A lot of pieces have to fall into place before we can even begin to think about testing the idea. But new high-tech efforts to understand the brain are also generating methods that make those pieces seem, if not exactly imminent, then at least a bit more plausible. Here’s a look at how close, and far, we are to some requirements for this version of “mind uploading.” The hope of mind uploading rests on the premise that much of the key information about who we are is stored in the unique pattern of connections between our neurons, the cells that carry electrical and chemical signals through living brains. You wouldn't know it from the outside, but there are more of those connections — individually called synapses, collectively known as the connectome — in a cubic centimeter of the human brain than there are stars in the Milky Way galaxy. The basic blueprint is dictated by our genes, but everything we do and experience alters it, creating a physical record of all the things that make us US — our habits, tastes, memories, and so on. It is exceedingly tricky to preserve the connectome in a state where it is both safe from decay and verifiably intact. But in recent months, two sets of scientists said they had devised separate ways to do that for the brains of smaller mammals. If either is scaled up to work for human brains — still a big if — then theoretically your brain could sit on a shelf or in a freezer for centuries while scientists work on the rest of these steps. © 2015 The New York Times Company
Neel V. Patel The concept of the insanity defense dates back to ancient Greece and the Roman Empire. The idea has always been the same: Protect individuals from being held accountable for behavior they couldn’t control. Yet there have been more than a few historical and recent instances of a judge or jury issuing a controversial “by reason of…” verdict. What was intended as a human rights effort has become a last-ditch way to save killers (though it didn’t work for James Holmes). The question that hangs in the air at these sorts of proceedings has always been the same: Is there a way to make determinations more scientific and less traditionally judicial? Adam Shniderman, a criminal justice researcher at Texas Christian University, has been studying the role of neuroscience in the court system for several years now. He explains that neurological data and explanations don’t easily translate into the world of lawyers and legal text. Inverse spoke with Shniderman to learn more about how neuroscience is used in today’s insanity defenses, and whether this is likely to change as the technology used to observe the brain gets better and better. Can you give me a quick overview of how the role of neuroscience in the courts has changed over the years? Especially in the last few decades with new advances in technology. Obviously, [neuroscientific evidence] has become more widely used as brain-scanning technology has gotten better. Some of the scanning technology we use now, like functional MRI that measures blood oxygenation as a proxy for neurological activity, is relatively new within the last 20 years or so. The nature of brain scanning has changed, but the knowledge that the brain influences someone’s actions is not new.
By Steve Mirsky It's nice to know that the great man we celebrate in this special issue had a warm sense of humor. For example, in 1943 Albert Einstein received a letter from a junior high school student who mentioned that her math class was challenging. He wrote back, “Do not worry about your difficulties in mathematics; I can assure you that mine are still greater.” Today we know that his sentiment could also have been directed at crows, which are better at math than those members of various congressional committees that deal with science who refuse to acknowledge that global temperatures keep getting higher. Studies show that crows can easily discriminate between a group of, say, three objects and another containing nine. They have more trouble telling apart groups that are almost the same size, but unlike the aforementioned committee members, at least they're trying. A study in the Proceedings of the National Academy of Sciences USA finds that the brain of a crow has nerve cells that specialize in determining numbers—a method quite similar to what goes on in our primate brain. Human and crow brains are substantially different in size and organization, but convergent evolution seems to have decided that this kind of neuron-controlled numeracy is a good system. (Crows are probably unaware of evolution, which is excusable. Some members of various congressional committees that deal with science pad their reactionary résumés by not accepting evolution, which is astonishing.) © 2015 Scientific American
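The crows' difficulty pattern (3 versus 9 is easy, near-equal groups are hard) is the signature of Weber's law: discriminability depends on the ratio of two quantities, not their absolute difference. A minimal sketch; the `weber_fraction` threshold here is an illustrative assumption, not a value reported in the study:

```python
def weber_discriminable(a, b, weber_fraction=0.5):
    """Return True if two numerosities differ by more than the Weber
    fraction, i.e. |a - b| / min(a, b) exceeds the threshold.
    The 0.5 threshold is illustrative, not a measured crow value."""
    return abs(a - b) / min(a, b) > weber_fraction

# 3 vs 9 differ by a factor of three: easy to tell apart
print(weber_discriminable(3, 9))   # True
# 8 vs 9 are nearly the same size: hard to tell apart
print(weber_discriminable(8, 9))   # False
```

The same ratio-based rule describes numerosity judgments in primates, which is the convergence the study points to.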
Mo Costandi In an infamous set of experiments performed in the 1960s, psychologist Walter Mischel sat pre-school kids at a table, one by one, and placed a sweet treat – a small marshmallow, a biscuit, or a pretzel – in front of them. Each of the young participants was told that they would be left alone in the room, and that if they could resist the temptation to eat the sweet on the table in front of them, they would be rewarded with more sweets when the experimenter returned. The so-called Marshmallow Test was designed to test self-control and delayed gratification. Mischel and his colleagues tracked some of the children as they grew up, and then claimed that those who managed to hold out for longer in the original experiment performed better at school, and went on to become more successful in life, than those who couldn’t resist the temptation to eat the treat before the researcher returned to the room. The ability to exercise willpower and inhibit impulsive behaviours is considered to be a core feature of the brain’s executive functions, a set of neural processes, including attention, reasoning, and working memory, that regulate our behaviour and thoughts, and enable us to adapt them according to the changing demands of the task at hand. Executive function is a rather vague term, and we still don’t know much about its underlying brain mechanisms, or about how different components of this control system are related to one another. New research shows that self-control and memory share, and compete with each other for, the same brain mechanisms, such that exercising willpower saps these common resources and impairs our ability to encode memories. © 2015 Guardian News and Media Limited
Shankar Vedantam Girls often outperform boys in science and math at an early age but are less likely to choose tough courses in high school. An Israeli experiment demonstrates how the biases of teachers affect students. RENEE MONTAGNE, HOST: At early ages, girls often outperform boys in math and science classes. Later, something changes. By the time they get into high school, girls are less likely than boys to take difficult math courses and less likely, again, to go into careers in science, technology, engineering or medicine. To learn more about this, David Greene spoke with NPR social science correspondent Shankar Vedantam. SHANKAR VEDANTAM, BYLINE: Well, the new study suggests, David, that some of these outcomes might be driven by the unconscious biases of elementary school teachers. What's remarkable about the new work is it doesn't just theorize about the gender gap, it actually has very hard evidence. Edith Sand at Tel Aviv University and her colleague, Victor Lavy, analyzed the math test scores of about 3,000 students in Tel Aviv. When the students were in sixth grade, the researchers got two sets of math test scores. One set of scores was given by the classroom teachers, who obviously knew the children whom they were grading. The second set of scores was from external teachers who did not know whether the children they were grading were boys or girls. So the external teachers were blind to the gender of the children. © 2015 NPR
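The design amounts to a simple difference-in-differences: compare each student's non-blind classroom score with the blind external score, and ask whether that gap differs by gender. A toy version with fabricated numbers (not the Tel Aviv data):

```python
def mean(xs):
    return sum(xs) / len(xs)

def grading_gap(records, gender):
    """Average (classroom score - blind score) for one gender.
    A positive gap means the classroom teachers, who knew the
    students, graded that group more generously than blind graders."""
    gaps = [r["classroom"] - r["blind"] for r in records if r["gender"] == gender]
    return mean(gaps)

# Fabricated illustration only
records = [
    {"gender": "boy",  "classroom": 78, "blind": 74},
    {"gender": "boy",  "classroom": 71, "blind": 68},
    {"gender": "girl", "classroom": 80, "blind": 82},
    {"gender": "girl", "classroom": 75, "blind": 76},
]
bias = grading_gap(records, "boy") - grading_gap(records, "girl")
print(bias)  # 5.0: in this made-up sample, non-blind grading favors boys
```

A value near zero would indicate no gender difference in how the knowing and blind graders scored the same work.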
By LISA FELDMAN BARRETT Boston — IS psychology in the midst of a research crisis? An initiative called the Reproducibility Project at the University of Virginia recently reran 100 psychology experiments and found that over 60 percent of them failed to replicate — that is, their findings did not hold up the second time around. The results, published last week in Science, have generated alarm (and in some cases, confirmed suspicions) that the field of psychology is in poor shape. But the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works. Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate. Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test. A number of years ago, for example, scientists conducted an experiment on fruit flies that appeared to identify the gene responsible for curly wings. The results looked solid in the tidy confines of the lab, but out in the messy reality of nature, where temperatures and humidity varied widely, the gene turned out not to reliably have this effect. In a simplistic sense, the experiment “failed to replicate.” But in a grander sense, as the evolutionary biologist Richard Lewontin has noted, “failures” like this helped teach biologists that a single gene produces different characteristics and behaviors, depending on the context. © 2015 The New York Times Company
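Barrett's argument, that an effect can be real yet hold only under certain conditions, is easy to make concrete with a toy simulation: Study A happens to sample the condition under which the effect exists, Study B samples one where it does not, and a "failure to replicate" appears even though both studies are flawless. All parameters below are illustrative:

```python
import random

def run_study(effect_present, n=200, effect=0.5, seed=0):
    """Toy two-group study: return the observed mean difference
    between treatment and control. The true effect exists only
    when the hidden moderating condition holds."""
    rng = random.Random(seed)
    control = [rng.gauss(0, 1) for _ in range(n)]
    shift = effect if effect_present else 0.0
    treatment = [rng.gauss(shift, 1) for _ in range(n)]
    return sum(treatment) / n - sum(control) / n

# Study A runs where the hidden condition holds and the effect is real...
diff_a = run_study(effect_present=True, seed=1)
# ...Study B, identical on paper, samples a context where it is not.
diff_b = run_study(effect_present=False, seed=2)
print(round(diff_a, 2), round(diff_b, 2))
```

Study A's difference hovers near 0.5 and Study B's near zero, so the second study "fails to replicate" the first; the scientific task is then to identify the moderating condition, exactly as in the curly-wing example.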
By James Gallagher Health editor, BBC News website Close your eyes and imagine walking along a sandy beach and then gazing over the horizon as the Sun rises. How clear is the image that springs to mind? Most people can readily conjure images inside their head - known as their mind's eye. But this year scientists have described a condition, aphantasia, in which some people are unable to visualise mental images. Niel Kenmuir, from Lancaster, has always had a blind mind's eye. He knew he was different even in childhood. "My stepfather, when I couldn't sleep, told me to count sheep, and he explained what he meant, I tried to do it and I couldn't," he says. "I couldn't see any sheep jumping over fences, there was nothing to count." Our memories are often tied up in images: think back to a wedding or a first day at school. As a result, Niel admits, some aspects of his memory are "terrible", but he is very good at remembering facts. And, like others with aphantasia, he struggles to recognise faces. Yet he does not see aphantasia as a disability, but simply a different way of experiencing life. It is impossible to see what someone else is picturing inside their head. Psychologists use the Vividness of Visual Imagery Questionnaire, which asks you to rate different mental images, to test the strength of the mind's eye. The University of Exeter has developed an abridged version that lets you see how your mind compares. © 2015 BBC.
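Instruments like the VVIQ are scored by summing self-rated vividness across items. A hedged sketch of such a scorer; the 16-item count and the scale direction (1 for no image at all, 5 for perfectly vivid) are assumptions about a VVIQ-style questionnaire, not details given in the article:

```python
def vviq_score(ratings):
    """Sum of item ratings on a VVIQ-style questionnaire.
    Assumes 16 items rated 1 (no image at all) to 5 (perfectly
    vivid); both assumptions are illustrative, not from the article."""
    if len(ratings) != 16 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("expected 16 ratings between 1 and 5")
    return sum(ratings)

# All-1 responses (no imagery on any item) give the floor score
print(vviq_score([1] * 16))   # 16
# All-5 responses give the ceiling
print(vviq_score([5] * 16))   # 80
```

On this convention, someone with aphantasia would cluster near the floor of the range.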
By Elizabeth Kolbert C57BL/6J mice are black, with pink ears and long pink tails. Inbred for the purposes of experimentation, they exhibit a number of infelicitous traits, including a susceptibility to obesity, a taste for morphine, and a tendency to nibble off their cage mates’ hair. They’re also tipplers. Given access to ethanol, C57BL/6J mice routinely suck away until the point that, were they to get behind the wheel of a Stuart Little-size roadster, they’d get pulled over for D.U.I. Not long ago, a team of researchers at Temple University decided to take advantage of C57BL/6Js’ bad habits to test a hunch. They gathered eighty-six mice and placed them in Plexiglas cages, either singly or in groups of three. Then they spiked the water with ethanol and videotaped the results. Half of the test mice were four weeks old, which, in murine terms, qualifies them as adolescents. The other half were twelve-week-old adults. When the researchers watched the videos, they found that the youngsters had, on average, outdrunk their elders. More striking still was the pattern of consumption. Young male C57BL/6Js who were alone drank roughly the same amount as adult males. But adolescent males with cage mates went on a bender; they spent, on average, twice as much time drinking as solo boy mice and about thirty per cent more time than solo girls. The researchers published the results in the journal Developmental Science. In their paper, they noted that it was “not possible” to conduct a similar study on human adolescents, owing to the obvious ethical concerns. But, of course, similar experiments are performed all the time, under far less controlled circumstances. Just ask any college dean. Or ask a teen-ager.
By Simon Worrall, National Geographic How do we know we exist? What is the self? These are some of the questions science writer Anil Ananthaswamy asks in his thought-provoking new book, The Man Who Wasn’t There: Investigations Into the Strange New Science of the Self. The answers, he says, may lie in medical conditions like Cotard’s syndrome, Alzheimer’s or body integrity identity disorder, which causes some people to try to amputate their own limbs. Speaking from Berkeley, California, he explains why Antarctic explorer Ernest Shackleton fell victim to the doppelgänger effect; how neuroscience is rewriting our ideas about identity; and how a song by George Harrison of the Beatles offers a critique of the Western view of the self. You dedicate the book to “those of us who want to let go but wonder, who is letting go and of what?” Explain that statement. We always hear within popular culture that we have to “let go,” as a way of dealing with certain situations in our lives. And in some sense you have to wonder about that statement because the person or thing doing the letting go is also probably what has to be let go. In the book, I am trying to get behind the whole issue of what the self is that has to do the letting go; and what aspects of the self have to be let go of. You start your book with Alzheimer’s. Tell us about the origin of the condition and what it tells us about “the autobiographical self.” Alzheimer’s is a very severe condition, especially during the mid- to late stages, which starts robbing people of their ability to remember anything that’s happening to them. They also start forgetting the people they are close to. © 1996-2015 National Geographic Society
By NINA STROHMINGER and SHAUN NICHOLS WHEN does the deterioration of your brain rob you of your identity, and when does it not? Alzheimer’s, the neurodegenerative disease that erodes old memories and the ability to form new ones, has a reputation as a ruthless plunderer of selfhood. People with the disease may no longer seem like themselves. Neurodegenerative diseases that target the motor system, like amyotrophic lateral sclerosis, can lead to equally devastating consequences: difficulty moving, walking, speaking and eventually, swallowing and breathing. Yet they do not seem to threaten the fabric of selfhood in quite the same way. Memory, it seems, is central to identity. And indeed, many philosophers and psychologists have supposed as much. This idea is intuitive enough, for what captures our personal trajectory through life better than the vault of our recollections? But maybe this conventional wisdom is wrong. After all, the array of cognitive faculties affected by neurodegenerative diseases is vast: language, emotion, visual processing, personality, intelligence, moral behavior. Perhaps some of these play a role in securing a person’s identity. The challenge in trying to determine what parts of the mind contribute to personal identity is that each neurodegenerative disease can affect many cognitive systems, with the exact constellation of symptoms manifesting differently from one patient to the next. For instance, some Alzheimer’s patients experience only memory loss, whereas others also experience personality change or impaired visual recognition. The only way to tease apart which changes render someone unrecognizable is to compare all such symptoms, across multiple diseases. And that’s just what we did, in a study published this month in Psychological Science. © 2015 The New York Times Company
Helen Thomson Modafinil is the world’s first safe “smart drug”, researchers at Harvard and Oxford universities have said, after performing a comprehensive review of the drug. They concluded that the drug, which is prescribed for narcolepsy but is increasingly taken without prescription by healthy people, can improve decision-making, problem-solving and possibly even make people think more creatively. While acknowledging that there was limited information available on the effects of long-term use, the reviewers said that the drug appeared safe to take in the short term, with few side effects and no addictive qualities. Modafinil has become increasingly common in universities across Britain and the US. Prescribed in the UK as Provigil, it was licensed in 2002 for use as a treatment for narcolepsy, a brain disorder that can cause a person to suddenly fall asleep at inappropriate times or to experience chronic pervasive sleepiness and fatigue. Used without prescription, and bought through easy-to-find websites, modafinil is what is known as a smart drug, used primarily by people wanting to improve their focus before an exam. A poll of Nature journal readers suggested that one in five have used drugs to improve focus, with 44% stating modafinil as their drug of choice. But despite its increasing popularity, there has been little consensus on the extent of modafinil’s effects in healthy, non-sleep-disordered humans. A new review of 24 of the most recent modafinil studies suggests that the drug has many positive effects in healthy people, including enhancing attention, improving learning and memory and increasing something called “fluid intelligence”: essentially our capacity to solve problems and think creatively. © 2015 Guardian News and Media Limited
By PAUL GLIMCHER and MICHAEL A. LIVERMORE THE United States government recently announced an $18.7 billion settlement of claims against the oil giant BP in connection with the Deepwater Horizon oil rig explosion in April 2010, which dumped millions of barrels of oil into the Gulf of Mexico. Though some of the settlement funds are to compensate the region for economic harm, most will go to environmental restoration in affected states. Is BP getting off easy, or being unfairly penalized? This is not easy to answer. Assigning a monetary value to environmental harm is notoriously tricky. There is, after all, no market for intact ecosystems or endangered species. We don’t reveal how much we value these things in a consumer context, as goods or services for which we will or won’t pay a certain amount. Instead, we value them for their mere existence. And it is not obvious how to put a price tag on that. In an attempt to do so, economists and policy makers often rely on a technique called “contingent valuation,” which amounts to asking individuals survey questions about their willingness to pay to protect natural resources. The values generated by contingent valuation studies are frequently used to inform public policy and litigation. (If the government had gone to trial with BP, it most likely would have relied on such studies to argue for a large judgment against the company.) Contingent valuation has always aroused skepticism. Oil companies, unsurprisingly, have criticized the technique. But many economists have also been skeptical, worrying that hypothetical questions posed to ordinary citizens may not really capture their genuine sense of environmental value. Even the Obama administration seems to discount contingent valuation, choosing to exclude data from this technique in 2014 when issuing a new rule to reduce the number of fish killed by power plants. © 2015 The New York Times Company
By John Danaher Discoveries in neuroscience, and the science of behaviour more generally, pose a challenge to the existence of free will. But this all depends on what is meant by ‘free will’. The term means different things to different people. Philosophers focus on two conditions that seem to be necessary for free will: (i) the alternativism condition, according to which having free will requires the ability to do otherwise; and (ii) the sourcehood condition, according to which having free will requires that you (your ‘self’) be the source of your actions. A scientific and deterministic worldview is often said to threaten the first condition. Does it also threaten the second? That is what Christian List and Peter Menzies’ article “My brain made me do it: The exclusion argument against free will and what’s wrong with it” tries to figure out. As you might guess from the title, the authors think that the scientific worldview, in particular the advances in neuroscience, does not necessarily threaten the sourcehood condition. I discussed their main argument in the previous post. To briefly recap, they critiqued an argument from physicalism against free will. According to this argument, the mental states which constitute the self do not cause our behaviour because they are epiphenomenal: they supervene on the physical brain states that do all the causal work. List and Menzies disputed this by appealing to a difference-making account of causation. This allowed for the possibility of mental states causing behaviour (being the ‘difference makers’) even if they were supervenient upon underlying physical states.
April Dembosky Developers of a new video game for your brain say theirs is more than just another get-smarter-quick scheme. Akili, a Northern California startup, insists on taking the game through a full battery of clinical trials so it can get approval from the Food and Drug Administration — a process that will take lots of money and several years. So why would a game designer go to all that trouble when there's already a robust market of consumers ready to buy games that claim to make you smarter and improve your memory? Think about all the ads you've heard for brain games. Maybe you've even passed a store selling them. There's one at the mall in downtown San Francisco — just past the cream puff stand and across from Jamba Juice — staffed on my visit by a guy named Dominic Firpo. "I'm a brain coach here at Marbles: The Brain Store," he says. Brain coach? "Sounds better than sales person," Firpo explains. "We have to learn all 200 games in here and become great sales people so we can help enrich people's minds." He heads to the "Word and Memory" section of the store and points to one product that says it will improve your focus and reduce stress in just three minutes a day. "We sold out of it within the first month of when we got it," Firpo says. The market for these "brain fitness" games is worth about $1 billion and is expected to grow to $6 billion in the next five years. Game makers appeal to both the young and the older with the common claim that if you exercise your memory, you'll be able to think faster and be less forgetful. Maybe bump up your IQ a few points. "That's absurd," says psychology professor Randall Engle from the Georgia Institute of Technology. © 2015 NPR
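For context, the quoted forecast (a $1 billion market growing to $6 billion over five years) implies a compound annual growth rate of roughly 43 percent:

```python
# Implied compound annual growth rate for a market going from
# $1bn to $6bn over five years: (end / start) ** (1 / years) - 1
start, end, years = 1.0, 6.0, 5
cagr = (end / start) ** (1 / years) - 1
print(round(cagr * 100, 1))  # 43.1 (percent per year)
```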
By John Danaher Consider the following passage from Ian McEwan’s novel Atonement. It concerns one of the novel’s characters (Briony) as she philosophically reflects on the mystery of human action: She raised one hand and flexed its fingers and wondered, as she had sometimes done before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge. Is Briony’s quest forlorn? Will she ever find herself at the crest of the wave? The contemporary scientific understanding of human action seems to cast this into some doubt. A variety of studies in the neuroscience of action paint an increasingly mechanistic and subconscious picture of human behaviour. According to these studies, our behaviour is not the product of our intentions or desires or anything like that. It is the product of our neural networks and systems, a complex soup of electrochemical interactions, oftentimes operating beneath our conscious awareness. In other words, our brains control our actions; our selves (in the philosophically important sense of the word ‘self’) do not. This discovery — that our brains ‘make us do it’ and that ‘we’ don’t — is thought to have a number of significant social implications, particularly for our practices of blame and punishment.
By Ariana Eunjung Cha Children who suffer an injury to the brain -- even a minor one -- are more likely to experience attention issues, according to a study published Monday in the journal Pediatrics. The effects may not be immediate and could occur long after the incident. Study author Marsh Konigs, a doctoral candidate at VU University Amsterdam, described the impact as "very short lapses in focus, causing children to be slower." Researchers looked at 113 children, ages six to 13, who suffered from traumatic brain injuries (TBIs) ranging from a concussion that gave them a headache or caused them to vomit, to losing consciousness for more than 30 minutes, and compared them with a group of 53 children who experienced a trauma that was not head-related. About 18 months after the children's accidents, parents and teachers were asked to rate their attention and other indicators of their health. They found that those with TBI had more lapses in attention and other issues, such as anxiety, a tendency to internalize their problems and slower processing speed. Based on studies of adults who experienced attention issues after suffering from a brain injury, doctors have theorized for years that head injuries in children might be followed by a "secondary attention deficit hyperactivity disorder." This study appears to confirm that association.
by Anil Ananthaswamy Science journalist Anil Ananthaswamy thinks a lot about "self" — not necessarily himself, but the role the brain plays in our notions of self and existence. In his new book, The Man Who Wasn't There, Ananthaswamy examines the ways people think of themselves and how those perceptions can be distorted by brain conditions, such as Alzheimer's disease, Cotard's syndrome and body integrity identity disorder, or BIID, a psychological condition in which a patient perceives that a body part is not his own. Ananthaswamy tells Fresh Air's Terry Gross about a patient with BIID who became so convinced that a healthy leg wasn't his own that he eventually underwent an amputation of the limb. "Within 12 hours, this patient that I saw, he was sitting up and there was no regret. He really seemed fine with having given up his leg," Ananthaswamy says. Ultimately, Ananthaswamy says, our sense of self is a layered one, which pulls information from varying parts of the brain to create a sense of narrative self, bodily self and spiritual self: "What it comes down to is this sense we have of being someone or something to which things are happening. It's there when we wake up in the morning, it kind of disappears when we go to sleep, it reappears in our dreams, and it's also this sense we have of being an entity that spans time." Interview Highlights On how to define "self" When you ask someone, "Who are you?" you're most likely to get a kind of narrative answer, "I am so-and-so, I'm a father, I'm a son." They are going to tell you a kind of story they have in their heads about themselves, the story that they tell to themselves and to others, and in some sense that's what can be called the narrative self. ... © 2015 NPR
By Ariana Eunjung Cha The Defense Advanced Research Projects Agency funds a lot of weird stuff, and in recent years more and more of it has been about the brain. Its signature work in this field is in brain-computer interfaces and goes back several decades to its Biocybernetics program, which sought to enable direct communication between humans and machines. In 2013, DARPA made headlines when it announced that it intended to spend more than $70 million over five years to take its research to the next level by developing an implant that could help restore function or memory in people with neuropsychiatric issues. Less known is DARPA's Narrative Networks (or N2) project which aims to better understand how stories — or narratives — influence human behavior and to develop a set of tools that can help facilitate faster and better communication of information. "Narratives exert a powerful influence on human thoughts, emotions and behavior and can be particularly important in security contexts," DARPA researchers explained in a paper published in the Journal of Neuroscience Methods in April. They added that "in conflict resolution and counterterrorism scenarios, detecting the neural response underlying empathy induced by stories is of critical importance." This is where the work on the Hitchcock movies comes in. Researchers at the Georgia Institute of Technology recruited undergraduates to be hooked up to MRI machines and watch movie clips that were roughly three minutes long. The excerpts all featured a character facing a potential negative outcome and were taken from suspenseful movies, including three Alfred Hitchcock flicks as well as "Alien," "Misery," "Munich" and "Cliffhanger," among others.
By Roni Caryn Rabin “Fat” cartoon characters may lead children to eat more junk food, new research suggests, but there are ways to counter this effect. The findings underscore how cartoon characters, ubiquitous in children’s books, movies, television, video games, fast-food menus and graphic novels, may influence children’s behavior in unforeseen ways, especially when it comes to eating. Researchers first randomly showed 60 eighth graders a svelte jelly-bean-like cartoon character or a similar rotund character and asked them to comment on the images. Then they thanked them and gestured toward bowls of Starburst candies and Hershey’s Kisses, saying, “You can take some candy.” Children who had seen the rotund cartoon character helped themselves to more than twice as many candies as children shown the lean character, taking 3.8 candies on average, compared with 1.7 taken by children shown the lean bean character. (Children in a comparison group shown an image of a coffee mug took 1.5 candies on average.) But activating children’s existing health knowledge can counter these effects, the researchers discovered. In a separate experiment, they showed 167 elementary school children two red Gumby-like cartoon characters, one fat and one thin, and then asked them to “taste test” some cookies. But they also asked the children to “think about things that make you healthy,” such as getting enough sleep versus watching TV, or drinking soda versus milk. Some children were asked the health questions before being given the cookie taste test, while others were asked the questions after the taste test. Remarkably, the children who were asked about healthy habits before doing the taste test ate fewer cookies — even if they had first been exposed to the rotund cartoon character.
Those who were shown the rotund figure ate 4.2 cookies on average if they were asked about healthy habits after eating the cookies, compared to three cookies if they were asked about healthy habits before doing the taste test. Children who saw the normal weight character and who were asked about healthy habits after the taste test also ate about three cookies. © 2015 The New York Times Company
By Neuroskeptic According to British biochemist Donald R. Forsdyke in a new paper in Biological Theory, the existence of people who seem to be missing most of their brain tissue calls into question some of the “cherished assumptions” of neuroscience. I’m not so sure. Forsdyke discusses the disease called hydrocephalus (‘water on the brain’). Some people who suffer from this condition as children are cured thanks to prompt treatment. Remarkably, in some cases, these post-hydrocephalics turn out to have grossly abnormal brain structure: huge swathes of their brain tissue are missing, replaced by fluid. Even more remarkably, in some cases, these people have normal intelligence and display no obvious symptoms, despite their brains being mostly water. This phenomenon was first noted by a British pediatrician called John Lorber. Lorber never published his observations in a scientific journal, although a documentary was made about them. However, his work was famously discussed in Science in 1980 by Lewin in an article called “Is Your Brain Really Necessary?”. There have been a number of other more recent published cases. Forsdyke argues that such cases pose a problem for mainstream neuroscience. If a post-hydrocephalic brain can store the same amount of information as a normal brain, he says, then “brain size does not scale with information quantity”; therefore, “it would seem timely to look anew at possible ways our brains might store their information.”