Chapter 14. Attention and Consciousness
By Katherine Harmon Courage
An infant's innate sense for numbers predicts how their mathematical aptitude will develop years later, a team of US researchers has found. Babies can spot if a set of objects increases or decreases in number — for instance, if the number of dots on a screen grows, even when dot size, colour and arrangement also change. But until recently, researchers could generally only determine the number sense of groups of babies, thus ruling out the ability to correlate this with later mathematics skills in individuals. In 2010, Elizabeth Brannon, a neuroscientist at Duke University in Durham, North Carolina, and her colleagues demonstrated that they could test and track infants' number sense over time. To do this, six-month-old babies are presented with two screens. One shows a constant number of dots, such as eight, changing in appearance, and the other also shows changing dots but presents different numbers of them — eight sometimes and 16 other times, for instance. An infant who has a good primitive number sense will spend more time gazing at the screen that presents the changing number of dots. In the latest work, which is published in this week's Proceedings of the National Academy of Sciences, Brannon's team took a group of 48 children who had been tested at six months of age and retested them three years later, using the same dot test but also other standard maths tests for preschoolers — including some that assessed the ability to count, to tell which of two numbers is larger and to do basic calculations. © 2013 Nature Publishing Group
By Maggie Koerth-Baker
Between the fall of 2011 and the spring of 2012, people across the United States suddenly found themselves unable to get their hands on A.D.H.D. medication. Low-dose generics were particularly in short supply. There were several factors contributing to the shortage, but the main cause was that supply was suddenly being outpaced by demand. The number of diagnoses of Attention Deficit Hyperactivity Disorder has ballooned over the past few decades. Before the early 1990s, fewer than 5 percent of school-age kids were thought to have A.D.H.D. Earlier this year, data from the Centers for Disease Control and Prevention showed that 11 percent of children ages 4 to 17 had at some point received the diagnosis — and that doesn’t even include first-time diagnoses in adults. (Full disclosure: I’m one of them.) That amounts to millions of extra people receiving regular doses of stimulant drugs to keep neurological symptoms in check. For a lot of us, the diagnosis and subsequent treatments — both behavioral and pharmaceutical — have proved helpful. But still: Where did we all come from? Were that many Americans always pathologically hyperactive and unable to focus, and only now are getting the treatment they need? Probably not. Of the 6.4 million kids who have been given diagnoses of A.D.H.D., a large percentage are unlikely to have any kind of physiological difference that would make them more distractible than the average non-A.D.H.D. kid. It’s also doubtful that biological or environmental changes are making physiological differences more prevalent. Instead, the rapid increase in people with A.D.H.D. probably has more to do with sociological factors — changes in the way we school our children, in the way we interact with doctors and in what we expect from our kids. © 2013 The New York Times Company
By Helen Thomson
One moment you are alive. The next you are dead. A few hours later and you are alive again. Pharmacologists have discovered a mechanism that triggers Cotard's syndrome – the mysterious condition that leaves people feeling like they, or parts of their body, no longer exist. With the ability to switch the so-called walking corpse syndrome on and off comes the prospect of new insights into how conscious experiences are constructed. Acyclovir – also known by the brand name Zovirax – is a common drug used to treat cold sores and other herpes infections. It usually has no harmful side effects. However, about 1 per cent of people who take the drug orally or intravenously experience some psychiatric side effects, including Cotard's. These occur mainly in people who have renal failure. To investigate the potential link between acyclovir and Cotard's, Anders Helldén at Karolinska University Hospital in Stockholm and Thomas Lindén at the Sahlgrenska Academy in Gothenburg pooled data from Swedish drug databases along with hospital admissions. They identified eight people with acyclovir-induced Cotard's. One woman with renal failure began using acyclovir to treat shingles. She ran into a hospital screaming, says Helldén. After an hour of dialysis, she started to talk: she said the reason she was so anxious was that she had a strong feeling she was dead. After a few more hours of dialysis she said, "I'm not quite sure whether I'm dead any more but I'm still feeling very strange." Four hours later: "I'm pretty sure I'm not dead any more but my left arm is definitely not mine." Within 24 hours, the symptoms had disappeared. © Copyright Reed Business Information Ltd.
Link ID: 18805 - Posted: 10.17.2013
By Colin Barras
A part of all of us loves sums. Eavesdropping on the brain while people go about their daily activity has revealed the first brain cells specialised for numbers. Josef Parvizi and his colleagues at Stanford University in California enlisted the help of three people with epilepsy whose therapy involved placing a grid of activity-recording electrodes on the surface of their brains. Neurons fired in a region called the intraparietal sulcus when the three volunteers performed arithmetic tests, suggesting that these cells deal with numbers. The team continued to monitor brain activity while the volunteers went about their normal activity in hospital. Comparing video footage of their stay with their brain activity revealed that the neurons remained virtually silent for most of the time, bursting into life only when the volunteers talked about numbers or numerical concepts such as "more than" or "less than". There is debate over whether some neural populations perform many functions or are involved in very precise tasks. "We show here that there is specialisation for numeracy," says Parvizi. Journal reference: Nature Communications, DOI: 10.1038/ncomms3528 © Copyright Reed Business Information Ltd.
Link ID: 18796 - Posted: 10.16.2013
By Nora Schultz
A simple bedside scan could reveal an active mind hidden inside an unresponsive body. The method provides another tool for recognising consciousness in people who have been wrongly diagnosed as being in a vegetative state. Tests are also under way to use it to monitor people under general anaesthetic, to make sure they do not regain consciousness during an operation. The technique builds on recent research into the nature of consciousness. "Information that is processed consciously typically recruits several brain regions at once," says Jean-Rémi King at the Brain and Spine Institute (ICM) in Paris, France. Other information that enters the brain triggers unconscious activity – for instance, the righting reflex that helps us retain balance when we are pushed – and it tends to activate only one brain area. King and his colleague Jacobo Sitt, also at the ICM, reasoned that they could spot consciousness in people simply by playing them a series of beeps and then searching electroencephalogram (EEG) brain scan data for evidence that signals from different brain regions fluctuated in the same way as each other, suggesting that they were sharing information. They performed their tests on 75 people in a vegetative state, 67 minimally conscious people, 24 people who had recently regained consciousness after a coma, and 14 healthy controls. By running the EEG data through statistics software, the researchers found differences between the patterns from people who were fully conscious, those in a vegetative state, and those who were minimally conscious (Current Biology, doi.org/n42). © Copyright Reed Business Information Ltd.
Link ID: 18772 - Posted: 10.10.2013
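The intuition behind King and Sitt's approach can be illustrated with a toy sketch: channels that share a common underlying signal show higher pairwise correlation than channels carrying independent noise. Everything below (the signal shapes, channel counts, and the use of plain Pearson correlation) is an illustrative assumption, not the authors' actual EEG pipeline, which relied on more sophisticated information-sharing measures.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_pairwise_sync(channels):
    """Average absolute correlation over all channel pairs --
    a crude stand-in for an information-sharing measure."""
    scores = []
    for i in range(len(channels)):
        for j in range(i + 1, len(channels)):
            scores.append(abs(pearson(channels[i], channels[j])))
    return sum(scores) / len(scores)

rng = random.Random(0)

# "Conscious-like" recording: each channel mixes a common signal with local noise.
common = [math.sin(t / 5.0) for t in range(200)]
shared = [[c + rng.gauss(0, 0.3) for c in common] for _ in range(4)]

# "Unconscious-like" recording: independent noise, no shared component.
independent = [[rng.gauss(0, 1.0) for _ in range(200)] for _ in range(4)]

# The shared-signal channels score far higher on the synchrony measure.
print(mean_pairwise_sync(shared) > mean_pairwise_sync(independent))
```

In practice the real analysis must also rule out trivial sources of synchrony, such as a shared reference electrode or muscle artefacts, which is part of why the published method is considerably more elaborate than this correlation sketch.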
Many people, I've heard talk, wonder what's going on inside Republican Speaker John Boehner's brain. For cognitive neuroscientists, Boehner's brain is a case study. At the same time, others are frustrated with Democrat Harry Reid. The Senate Majority Leader needs to take a tip from our founding fathers. Many of the intellectual giants who founded our democracy were both statesmen and scientists, and they applied the latest in scientific knowledge of their day to advantage in governing. The acoustics of the House of Representatives, now Statuary Hall, allowed John Quincy Adams and his comrades to eavesdrop on other members of congress conversing in whispers on the opposite side of the parabolic-shaped room. Senator Reid, in stark contrast, is still applying ancient techniques used when senators wore togas -- reason and argument -- and we all know how badly that turned out. The search for a path to compromise can be found in the latest research on the neurobiological basis of social behavior. Consider this new finding just published in the journal Brain Research. Oxytocin, a peptide produced in the hypothalamus of the brain and known to cement the strong bond between mother and child at birth, has been found to promote compromise in rival groups! This new research suggests that Congresswoman Nancy Pelosi could single-handedly end the Washington deadlock by spritzing a bit of oxytocin in her perfume and wafting it throughout the halls of congress. One can only imagine the loving effect this hormone would have on Senate Republican Ted Cruz, suddenly overwhelmed with an irresistible urge to bond with his colleagues, fawning for a cozy embrace like a babe cuddling in its mother's arms. And it is so simple! No stealthy spiking of the opponent's coffee (or third martini at lunch) would be required; oxytocin works when it is inhaled through the nasal passages as an odorless vapor. © 2013 TheHuffingtonPost.com, Inc.
Link ID: 18761 - Posted: 10.08.2013
By Rebecca Lanning
Everywhere I went, people asked me about my son Will. They knew he’d graduated from high school, and they wanted to know what he was doing. Smiling politely, I told them that Will had been accepted to his first-choice college. But, I always added — in case someone saw him around town — that he had deferred enrollment. He was taking a gap year, I’d say. “So what’s your son doing with his windfall of free time? Traveling abroad? Doing research?” My cheeks burned as I played along, offering sound bites. A start-up venture. A film project. Independent study. Anything to avoid the truth: that my handsome, broad-shouldered son was, probably, at that very moment, home in bed with the shutters drawn, covers pulled over his head. Officially, Will was taking a gap year. But after 13 years of school, what he needed, what he’d earned, was a nap year. Will has long suffered from learning difficulties. It took years to pinpoint a diagnosis — and even when we did, figuring out how to manage it wasn’t easy. He needed a break. So did I. Will’s problems began to surface when he was in kindergarten. “He’s not where the other children are,” his teacher whispered to me one morning. I knew what she meant. Clumsy and slow to read, Will rested his head on his desk a lot. His written work, smudgy from excessive erasing, looked like bits of crumpled trash. School was torture for Will. He couldn’t take notes, failed to turn in homework, forgot when tests were coming up. Yet on standardized tests, his verbal scores consistently exceeded the 99th percentile. I wondered why he struggled, when clearly he was bright. © 1996-2013 The Washington Post
Figuring out the next 99,999,999,900 neurons
“We have a hundred billion neurons in each human brain,” said Nicholas Spitzer, a neurobiologist and co-director of the Kavli Institute for Brain and Mind at the University of California-San Diego (which is partnering with The Atlantic on this event). “Right now, the best we can do is to record the electrical activity of maybe a few hundred of those neurons. Gee, that’s not very impressive.” Spitzer and his team are trying to figure out what’s going on in the rest of those neurons, or brain cells – specifically, what "jobs" they have in the body. But first, a bit of Neuroscience 101: “As your readers may know, the nerve cells or neurons in the brain communicate with each other through the release of chemicals, called neurotransmitters,” Spitzer said. “This allows a motor neuron that makes a muscle contract to signal to the muscle to say, ‘time to contract.’ It seems like kind of a clumsy way to organize a signaling system.” But sometimes, those neurons change "jobs" – a motor neuron might start signaling another function in the body, for example. “We thought for a long time that the wiring of the brain was a little bit like the wiring of some sort of electronic device in that the connection of the wires in the ‘device,’ the brain, are fairly fixed. What we’re finding is that the wires can remain in place, but the function of the circuit and the connection of the wires can change,” Spitzer said. “This is something of a heresy.” © 2013 by The Atlantic Monthly Group
By Roy F. Baumeister It has become fashionable to say that people have no free will. Many scientists cannot imagine how the idea of free will could be reconciled with the laws of physics and chemistry. Brain researchers say that the brain is just a bunch of nerve cells that fire as a direct result of chemical and electrical events, with no room for free will. Others note that people are unaware of some causes of their behavior, such as unconscious cues or genetic predispositions, and extrapolate to suggest that all behavior may be caused that way, so that conscious choosing is an illusion. Scientists take delight in (and advance their careers by) claiming to have disproved conventional wisdom, and so bashing free will is appealing. But their statements against free will can be misleading and are sometimes downright mistaken, as several thoughtful critics have pointed out. Arguments about free will are mostly semantic arguments about definitions. Most experts who deny free will are arguing against peculiar, unscientific versions of the idea, such as that “free will” means that causality is not involved. As my longtime friend and colleague John Bargh put it once in a debate, “Free will means freedom from causation.” Other scientists who argue against free will say that it means that a soul or other supernatural entity causes behavior, and not surprisingly they consider such explanations unscientific. These arguments leave untouched the meaning of free will that most people understand, which is consciously making choices about what to do in the absence of external coercion, and accepting responsibility for one’s actions. Hardly anyone denies that people engage in logical reasoning and self-control to make choices. There is a genuine psychological reality behind the idea of free will. The debate is merely about whether this reality deserves to be called free will. Setting aside the semantic debate, let’s try to understand what that underlying reality is. 
© 2013 The Slate Group, LLC.
Link ID: 18713 - Posted: 09.28.2013
By Bruce Bower
Cartoon ghosts have scared up evidence that the ability to visualize objects in one’s mind materializes between ages 3 and 5. When asked to pick which of two mirror-image ghost cutouts or drawings fit in a ghost-shaped hole, few 3-year-olds, a substantial minority of 4-year-olds and most 5-year-olds regularly succeeded, say psychologist Andrea Frick of the University of Bern, Switzerland, and her colleagues. Girls performed as well as boys on the task, suggesting that men’s much-studied advantage over women in mental rotation doesn’t emerge until after age 5, the researchers report Sept. 17 in Cognitive Development. Mental rotation is a spatial skill regarded as essential for science and math achievement. Most tasks that researchers use to assess mental rotation skills involve pressing keys to indicate whether block patterns oriented at different angles are the same or different. That challenge overwhelms most preschoolers. Babies apparently distinguish block patterns from mirror images of those patterns (SN: 12/20/08, p. 8), but it’s unclear whether that ability enables mental rotation later in life. Frick’s team studied 20 children at each of three ages, with equal numbers of girls and boys. Youngsters saw two ghosts cut out of foam, each a mirror image of the other. Kids were asked to turn the ghosts in their heads and choose the one that would fit like a puzzle piece into a ghost’s outline on a board. Over seven trials, the ghosts were tilted at angles varying from the position of the outline. The researchers used three pairs of ghost cutouts, for a total of 21 trials. © Society for Science & the Public 2000 - 2013
By Keith Payne
It’s tough to be the boss. A recent Wall Street Journal article described the plight of one CEO who had to drag himself out of bed each morning and muster his game face. It would be a long day of telling other people what to do. It got so bad, we are told, that he had no choice but to take a year off work to sail across the Atlantic Ocean with his family. Forbes agrees: “many CEOs have personal assistants who run their schedules for them, and they go from one meeting straight to another with barely a moment to go to the bathroom.” The indignity! And even worse than the bladder strain is having to fire people: “You may think a CEO can be detached when deciding who to lay off, but generally that couldn’t be farther from the truth. Having to make tough decisions about the people all around you can hit very hard.” Take heart, those of you who have lost your job in these turbulent economic times. At least you didn’t have to fire somebody. This type of silliness usually cites research from the 1950s on “executive stress syndrome.” The research was not on executives, but on rhesus monkeys. In a famous experiment, neuroscientist Joseph Brady subjected one group of monkeys to regular electric shocks every 20 seconds for six-hour shifts. Another group of “executive monkeys” had the same schedule, except that they could prevent the shocks by pressing a lever in each 20-second period. The “executive monkeys” quickly learned to prevent the shocks by pressing the levers. This situation sounds awful for both monkeys, but decidedly worse for the monkeys with no escape. And yet, it was the “executive monkeys” with greater responsibility and control who started dropping dead from stomach ulcers. These results seemed to suggest that being responsible for making important decisions was so stressful that it posed a serious health risk. Executive stress syndrome was born. © 2013 Scientific American
By Neuroskeptic
Neuroscientists are interested in how brains interact socially. One of the main topics of study is ‘mentalizing’ aka ‘theory of mind’, the ability to accurately attribute mental states – such as beliefs and emotions – to other people. It is widely believed that the brain has specific areas for this – i.e. social “modules” (although today most neuroscientists are shy about using that word, it’s basically what’s at issue.) But two new papers out this week suggest that people can still mentalize successfully after damage to “key parts of the theory of mind network”. Herbet et al., writing in Cortex, showed few effects of surgical removal of the right frontal lobe in 10 brain tumour patients. On two different mentalizing tasks, they showed that removal caused either no decline in performance, or only a transient one. Meanwhile Michel et al. report that the left temporal pole is dispensable for mentalizing as well, in a single case report in the Journal of Cognitive Neuroscience. They describe a patient suffering from frontotemporal dementia (FTD), whose left temporal lobe was severely atrophied. He’d lost the use of language, but he performed normally on theory of mind tests adapted to be non-linguistic. In both papers, these patients don’t have those parts of the brain that are most activated in fMRI studies of mentalizing. Where the blobs on the brain normally go, they have no brain.
Link ID: 18687 - Posted: 09.23.2013
By Joseph Brean
U.S. President Barack Obama’s much-hyped BRAIN initiative to crack the mysteries of consciousness via a finely detailed map of the brain in action took its first big step this week, with the release of a strategy report that foresees “revolutionary advances” in the $100-million effort to “crack the brain’s code,” perhaps in as little as “a few years.” “We stand on the verge of a great journey into the unknown,” the report says, explicitly comparing BRAIN to the Apollo moon shot, and predicting it will “change human society forever.” As a grand challenge, Apollo was an unambiguous success, despite the vast expense and human costs, but there is a growing sense among scientists, if not legacy-minded politicians, that the road ahead for modern neuroscience will be pocked with disappointment, with more impenetrable mysteries than solvable problems. As the world approaches what some are calling “peak neuro,” after three decades of over-hyped “brain porn,” the optimistic hope is that Mr. Obama’s BRAIN project will lead to a detailed and dynamic map of the brain, and thus reveal both how it works and how it fails in such diseases as Alzheimer’s or autism. The pessimistic fear, however, is that the “speed of thought,” as Mr. Obama described it, is just too quick for our current brain imaging technologies, primarily functional magnetic resonance imaging (fMRI). As the anonymous blogger Neuroskeptic, a British brain scientist who tracks the misinterpretation of brain scan studies by both scientists and media, put it in an email, “there’s just as much hype and misrepresentation as ever.” The more we learn about the brain, the less we seem to know. With its potential overstated and its aspirations presented as foregone conclusions, the relatively new field of neuroscience is in a period of self-reflection, said Jackie Sullivan, a philosopher of neuroscience at Western University in London, Ont.
“The vast majority of neuroscientists are well aware that the goals going forward need to be more modest,” she said. © 2013 National Post
By Andy Coghlan
Parts of the brain may still be alive after a person's brain activity is said to have flatlined. When someone is in a deep coma, their brain activity can go silent. An electroencephalogram measuring this activity may eventually show a flatline, usually taken as a sign of brain death. However, while monitoring a patient who had been placed in a deep coma to prevent seizures following a cardiac arrest, Bogdan Florea, a physician at the Regina Maria Medical Centre in Cluj-Napoca, Romania, noticed a strange thing – some tiny intermittent bursts of activity were interrupting an otherwise flatline signal, each lasting a few seconds. He asked Florin Amzica of the University of Montreal in Canada and his colleagues to investigate what might be happening. To imitate what happened in the patient, Amzica's team put cats into a deep coma using a high dose of anaesthesia. While EEG recordings taken from the surface of the brain – the cortex – showed a flatline, recordings from deep-brain electrodes revealed tiny bursts of activity originating in the hippocampus, responsible for memory and learning, which spread within minutes to the cortex. "These ripples build up a synchrony that rises in a crescendo to reach a threshold where they can spread beyond the hippocampus and trigger activity in the cortex," says Amzica. © Copyright Reed Business Information Ltd.
Link ID: 18683 - Posted: 09.21.2013
By Philip Yam
The harvest moon is almost upon us—specifically, September 19. It’s the full moon closest to the autumnal equinox, and it has deep significance in our cultural histories. Namely, it enabled our ancestral farmers to toil longer in the fields. (Today, electricity enables us to toil longer in the office—thanks, Tom Edison.) One enduring belief is that the harvest moon is bigger and brighter than any other full moon. That myth is probably the result of the well-known illusion in which the moon looks bigger on the horizon than it does overhead. Back when I was taking psych 101, my professor explained that the moon illusion was simply a function of having reference objects on the horizon. But then I saw this TED-Ed video by Andrew Vanden Heuvel. It turns out that the explanation from my college days really isn’t sufficient to explain the illusion. In fact, scientists really aren’t sure, and there is much debate. Check it out and see what you think. © 2013 Scientific American
By Jay Van Bavel and Dominic Packer
On the heels of the decade of the brain and the development of neuroimaging, it is nearly impossible to open a science magazine or walk through a bookstore without encountering images of the human brain. As the prominent neuroscientist Martha Farah remarked, “Brain images are the scientific icon of our age, replacing Bohr’s planetary atom as the symbol of science”. The rapid rise to prominence of cognitive neuroscience has been accompanied by an equally swift rise in practitioners and snake oil salesmen who make promises that neuroimaging cannot yet deliver. Critics inside and outside the discipline have been swift to condemn sloppy claims that MRI can tell us who we plan to vote for, if we love our iPhones, and why we believe in God. Yet, the constant parade of overtrumped results has led to the rise of “the new neuro-skeptics,” who argue that neuroscience is either unable to answer the interesting questions, or worse, that scientists have simply been seduced by the flickering lights of the brain. The notion that MRI images have attained an undue influence over scientists, granting agencies, and the public gained traction in 2008 when psychologists David McCabe and Alan Castel published a paper showing that brain images could be used to deceive. In a series of experiments, they found that Colorado State University undergraduates rated descriptions of scientific studies higher in scientific reasoning if they were accompanied by a 3-D image of the brain (see Figure), rather than a mere bar graph or a topographic map of brain activity on the scalp (presumably from electroencephalography). © 2013 Scientific American
By Melissa Hogenboom, Science reporter, BBC News
Smaller animals tend to perceive time in slow motion, a new study has shown. This means that they can observe movement on a finer timescale than bigger creatures, allowing them to escape from larger predators. Insects and small birds, for example, can see more information in one second than a larger animal such as an elephant. The work is published in the journal Animal Behaviour. "The ability to perceive time on very small scales may be the difference between life and death for fast-moving organisms such as predators and their prey," said lead author Kevin Healy, at Trinity College Dublin (TCD), Ireland. The reverse was found in bigger animals, which may miss things that smaller creatures can rapidly spot. In humans, too, there is variation among individuals. Athletes, for example, can often process visual information more quickly. An experienced goalkeeper would therefore be quicker than others in observing where a ball comes from. The speed at which humans absorb visual information is also age-related, said Andrew Jackson, a co-author of the work at TCD. "Younger people can react more quickly than older people, and this ability falls off further with increasing age." The team looked at the variation of time perception across a variety of animals. They gathered datasets from other teams who had used a technique called critical flicker fusion frequency, which measures the speed at which the eye can process light. BBC © 2013
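The cross-species comparison described above boils down to an allometric fit: plot each species' critical flicker fusion frequency (CFF) against its body mass on log-log axes and estimate the slope. The sketch below illustrates that idea with rough, purely hypothetical mass and CFF values; they are not the study's dataset, and the species list and numbers are assumptions chosen only to make the trend visible.

```python
import math

# Hypothetical (illustrative only): body mass in kg and critical flicker
# fusion frequency (CFF) in Hz for a handful of animals.
animals = {
    "fly":      (0.0001, 250.0),
    "starling": (0.08,   100.0),
    "dog":      (20.0,    80.0),
    "human":    (70.0,    60.0),
    "elephant": (4000.0,  30.0),
}

def log_log_slope(points):
    """Least-squares slope of log(CFF) against log(mass) -- a simple allometric fit."""
    points = list(points)
    xs = [math.log(mass) for mass, _ in points]
    ys = [math.log(cff) for _, cff in points]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

slope = log_log_slope(animals.values())
# Smaller animals have higher CFF, so the fitted slope comes out negative.
print(slope < 0)
```

A negative slope is the quantitative form of the article's claim: temporal resolution of vision falls as body size rises.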
By Josh Shaffer
DURHAM – It’s not often that the high-minded world of neuroscience collides with the corny, old-fashioned art of ventriloquism. One depends on dummies; the other excludes them. But a Duke University study uses puppet-based comedy to demonstrate the complicated inner workings of the brain and shows what every ventriloquist knows: The eye is more convincing than the ear. The study, which appears in the journal PLOS ONE, seeks to explain how the brain combines information coming from two different senses. How, asks Duke psychology and neuroscience professor Jennifer Groh, does the brain determine where a sound is coming from? In your eyes, the retina takes a snapshot, she said. It makes a topographic image of what’s in front of you. But the ears have nothing concrete to go on. They have to rely on how loud the sound is, how far away and from what direction. That’s where a ventriloquist comes in, providing a model for this problem. With a puppet, the noise and the movement are coming from different places. So how does the brain fix this and choose where to look? Duke researchers tested their hypotheses on 11 people and two monkeys, placing them in a soundproof booth.
Inner-ear problems could be a cause of hyperactive behaviour, research suggests. A study on mice, published in Science, said such problems caused changes in the brain that led to hyperactivity. It could lead to the development of new targets for behaviour disorder treatments, the US team says. A UK expert said the study's findings were "intriguing" and should be investigated further. Behavioural problems such as ADHD are usually thought to originate in the brain. But scientists have observed that children and teenagers with inner-ear disorders - especially those that affect hearing and balance - often have behavioural problems. However, no causal link has been found. The researchers in this study suggest inner-ear disorders lead to problems in the brain which then also affect behaviour. The team from the Albert Einstein College of Medicine of Yeshiva University in New York noticed some mice in the lab were particularly active - constantly chasing their tails. They were found to be profoundly deaf and have disorders of the inner ear - of both the cochlea, which is responsible for hearing, and the vestibular system, which is responsible for balance. The researchers found a mutation in the Slc12a2 gene, also found in humans. Blocking the gene's activity in the inner ears of healthy mice caused them to become increasingly active. BBC © 2013
By Adam Gopnik
Good myths turn on simple pairs—God and Lucifer, Sun and Moon, Jerry and George—and so an author who makes a vital duo is rewarded with a long-lived audience. No one in 1900 would have thought it possible that a century later more people would read Conan Doyle’s Holmes and Watson stories than anything of George Meredith’s, but we do. And so Gene Roddenberry’s “Star Trek,” despite the silly plots and the cardboard-seeming sets, persists in its many versions because it captures a deep and abiding divide. Mr. Spock speaks for the rational, analytic self who assumes that the mind is a mechanism and that everything it does is logical, Captain Kirk for the belief that what governs our life is not only irrational but inexplicable, and the better for being so. The division has had new energy in our time: we care most about a person who is like a thinking machine at a moment when we have begun to have machines that think. Captain Kirk, meanwhile, is not only a Romantic, like so many other heroes, but a Romantic on a starship in a vacuum in deep space. When your entire body is every day dissolved, reënergized, and sent down to a new planet, and you still believe in the ineffable human spirit, you have really earned the right to be a soul man. Writers on the brain and the mind tend to divide into Spocks and Kirks, either embracing the idea that consciousness can be located in a web of brain tissue or debunking it. For the past decade, at least, the Spocks have been running the Enterprise: there are books on your brain and music, books on your brain and storytelling, books that tell you why your brain makes you want to join the Army, and books that explain why you wish that Bar Refaeli were in the barracks with you. The neurological turn has become what the “cultural” turn was a few decades ago: the all-purpose non-explanation explanation of everything.
Thirty years ago, you could feel loftily significant by attaching the word “culture” to anything you wanted to inspect: we didn’t live in a violent country, we lived in a “culture of violence”; we didn’t have sharp political differences, we lived in a “culture of complaint”; and so on. In those days, Time, taking up the American pursuit of pleasure, praised Christopher Lasch’s “The Culture of Narcissism”; now Time has a cover story on happiness and asks whether we are “hardwired” to pursue it. © 2013 Condé Nast.