Links for Keyword: Consciousness



Links 1 - 20 of 163

Tim Adams Henry Marsh made the decision to become a neurosurgeon after he had witnessed his three-month-old son survive the complex removal of a brain tumour. For two decades he was the senior consultant in the Atkinson Morley wing at St George’s hospital in London, one of the country’s largest specialist brain surgery units. He pioneered techniques in operating on the brain under local anaesthetic and was the subject of the BBC documentary Your Life in Their Hands. His first book, Do No Harm: Stories of Life, Death, and Brain Surgery, was published in 2014 to great acclaim, and became a bestseller across the world. Marsh retired from full-time work at St George’s in 2015, though he continues with long-standing surgical roles at hospitals in Ukraine and Nepal. He is also an avid carpenter. Earlier this year he published a second volume of memoir, Admissions: A Life in Brain Surgery, in which he looks back on his career as he takes up a “retirement project” of renovating a decrepit lock-keeper’s cottage near where he grew up in Oxfordshire. He lives with his second wife, the social anthropologist and author Kate Fox. They have homes in Oxford, and in south London, which is where the following conversation took place. Have you officially retired now? Well, I still do one day a week for the NHS, though apparently they want a “business case” for it, so I’m not getting paid at present. Yes, well, people talk about the mind-matter problem – it’s not a problem for me: mind is matter. That’s not being reductionist. It is actually elevating matter. We don’t even begin to understand how electrochemistry and nerve cells generate thought and feeling. We have not the first idea. The relation of neurosurgery to neuroscience is a bit like the relationship between plumbing and quantum mechanics.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23842 - Posted: 07.17.2017

By Anil Ananthaswamy To understand human consciousness, we need to know why it exists in the first place. New experimental evidence suggests it may have evolved to help us learn and adapt to changing circumstances far more rapidly and effectively. We used to think consciousness was a uniquely human trait, but neuroscientists now believe we share it with many other animals, including mammals, birds and octopuses. While plants and arguably some animals like jellyfish seem able to respond to the world around them without any conscious awareness, many other animals consciously experience and perceive their environment. In the 19th century, Thomas Henry Huxley and others argued that such consciousness is an “epiphenomenon” – a side effect of the workings of the brain that has no causal influence, the way a steam whistle has no effect on the way a steam engine works. More recently, neuroscientists have suggested that consciousness enables us to integrate information from different senses or keep such information active for long enough in the brain that we can experience the sight and sound of a car passing by, for example, as one unified perception, even though sound and light travel at different speeds. © Copyright New Scientist Ltd.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 23785 - Posted: 06.28.2017

Kerin Higa After surgery to treat her epilepsy severed the connection between the two halves of her brain, Karen's left hand took on a mind of its own, acting against her will to undress or even to slap her. Amazing, to be sure. But what may be even more amazing is that most people who have split-brain surgery don't notice anything different at all. But there's more to the story than that. In the 1960s, a young neuroscientist named Michael Gazzaniga began a series of experiments with split-brain patients that would change our understanding of the human brain forever. Working in the lab of Roger Sperry, who later won a Nobel Prize for his work, Gazzaniga discovered that the two halves of the brain experience the world quite differently. When Gazzaniga and his colleagues flashed a picture in a patient's right visual field, the information was processed in the left side of the brain and the split-brain patient could easily describe the scene verbally. But when a picture was flashed in the left visual field, which is processed by the right side of the brain, the patient would report seeing nothing. If allowed to respond nonverbally, however, the right brain could adeptly point at or draw what was seen in the left visual field. So the right brain knew what it was seeing; it just couldn't talk about it. These experiments showed for the first time that each brain hemisphere has specialized tasks. In this third episode of Invisibilia, hosts Alix Spiegel and Hanna Rosin talk to several people who are trying to change their other self, including a man who confronts his own biases and a woman who has a rare condition that causes one of her hands to take on a personality of its own. © 2017 npr

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23749 - Posted: 06.17.2017

By Helen Thomson People in a minimally conscious state have been “woken” for a whole week after a brief period of brain stimulation. The breakthrough suggests we may be on the verge of creating a device that can be used at home to help people with disorders of consciousness communicate with friends and family. People with severe brain trauma can fall into a coma. If they begin to show signs of arousal but not awareness, they are said to be in a vegetative state. If they then show fluctuating signs of awareness but cannot communicate, they are described as being minimally conscious. In 2014, Steven Laureys at the University of Liège in Belgium and his colleagues discovered that 13 people with minimal consciousness and two people in a vegetative state could temporarily show new signs of awareness when given mild electrical stimulation. The people in the trial received transcranial direct current stimulation (tDCS), which uses low-level electrical stimulation to make neurons more or less likely to fire. This was applied once over an area of the brain called the prefrontal cortex, which is involved in “higher” cognitive functions such as consciousness. Soon after, they showed signs of consciousness, including moving their hands or following instructions using their eyes. Two people were even able to answer questions for 2 hours by moving their body, before drifting back into their previous state. © Copyright New Scientist Ltd.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23610 - Posted: 05.13.2017

By Christof Koch | Imagine you are an astronaut, untethered from your safety line, adrift in space. Your damaged radio lets you hear mission control's repeated attempts to contact you, but your increasingly desperate cries of “I'm here, I'm here” go unacknowledged—you are unable to signal that you're alive but injured. After days and weeks of fruitless pleas from your loved ones, their messages cease. You become lost to the world. How long do you keep your sanity when you are locked in your own echo chamber? Days? Months? Years? This nightmarish scenario is vividly described by British neuroscientist Adrian Owen in his upcoming book Into the Gray Zone (Scribner). Taking my evening bath while dipping into its opening pages, I only put the book down after finishing hours later, with the water cold. The story of communicating with the most impaired neurological patients at a greater distance from us than an astronaut lost in space is told by Owen in a most captivating manner. A professor at Western University in Ontario, Canada, Owen pioneered brain-imaging technology to establish what islands of awareness persist in patients with severe disorders of consciousness. These people are bedridden and seriously disabled, unable to speak or otherwise articulate their mental state following traumatic brain injury, encephalitis, meningitis, stroke, or drug or alcohol intoxication. Two broad groups can be distinguished among those who do not quickly succumb to their injuries. Vegetative state patients, in the first group, cycle in and out of sleep. When they are awake, their eyes are open, but attempts to establish bedside communications with them—“if you hear me, squeeze my hand or look down”—meet only with failure. These patients can move their eyes or head, swallow and yawn but never in an intentional manner. Nothing is left but surviving brain stem reflexes. With proper nursing care to avoid bedsores and infections, these individuals can live for years. 
© 2017 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23563 - Posted: 05.02.2017

By Neuroskeptic In a thought-provoking new paper called What are neural correlates neural correlates of?, NYU sociologist Gabriel Abend argues that neuroscientists need to pay more attention to philosophy, social science, and the humanities. Abend’s main argument is that if we are to study the neural correlates or neural basis of a certain phenomenon, we must first define that phenomenon and know how to identify instances of it. Sometimes, this identification is straightforward: in a study of brain responses to the taste of sugar, say, there is little room for confusion because we all agree what sugar is. However, if a neuroscientist wants to study the neural correlates of, say, love, they will need to decide what love is, and this is something that philosophers and others have been debating for a long time. Abend argues that cognitive neuroscientists “cannot avoid taking sides in philosophical and social science controversies” in studying phenomena, such as love or morality, which have no neutral, universally accepted definition. In choosing a particular set of stimuli in order to experimentally evoke something, neuroscientists are aligning themselves with a certain theory of what that thing is. For example, the field of “moral neuroscience” makes heavy use of a family of hypothetical dilemmas called trolley problems. The classic trolley problem asks us to choose between allowing a runaway trolley to hit and kill five people, or throwing one person in front of the trolley, killing them but saving the other five.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23501 - Posted: 04.18.2017

By John Horgan I’m writing a book on the mind-body problem, and one theme is that mind-theorists’ views are shaped by emotionally traumatic experiences, like mental illness, the death of a child and the breakup of a marriage. David Chalmers is a striking counter-example. He seems remarkably well adjusted and rational, especially for a philosopher. I’ve tracked his career since I heard him call consciousness “the hard problem” in 1994. Although I often disagree with him—about, for example, whether information theory can help solve consciousness—I’ve always found him an admirably clear thinker, who doesn’t oversell his ideas (unlike Daniel Dennett when he insists that consciousness is an “illusion”). Just in the last couple of years, Chalmers's writings, talks and meetings have helped me understand integrated information theory, Bayesian brains, ethical implications of artificial intelligence and philosophy’s lack of progress, among other topics. Last year I interviewed Chalmers at his home in a woody suburb of New York City. My major takeaway: Although he has faith that consciousness can be scientifically solved, Chalmers doesn’t think we’re close to a final theory, and if we find such a theory, consciousness might remain as philosophically confusing as, say, quantum mechanics. In other words, Chalmers is a philosophical hybrid, who fuses optimism with mysterianism, the position that consciousness is intractable. Below are edited excerpts from our conversation. Chalmers, now 50, was born and raised in Australia. His parents split up when he was five. “My father is a medical researcher, a pretty successful scientist and administrator in medicine in Australia… My mother is I would say a spiritual thinker.” “So if you want an historical story, I guess I end up halfway between my father and mother… My father is a reductionist, and my mother is very much a non-reductionist. 
I’m a non-reductionist with a tolerance for ideas that might look a bit crazy to some people, like the idea that there’s consciousness everywhere, consciousness is not reducible to something physical. That said, the tradition I’m working in is very much in the western scientific and analytic tradition.” © 2017 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23499 - Posted: 04.17.2017

Marcelo Gleiser The idea that neuroscience is rediscovering the soul is, to most scientists and philosophers, nothing short of outrageous. Of course it is not. But the widespread, adverse, knee-jerk attitude presupposes the old-fashioned definition of the soul — the ethereal, immaterial entity that somehow encapsulates your essence. Surely, this kind of supernatural mumbo-jumbo has no place in modern science. And I agree. The Cartesian separation of body and soul, the res extensa (matter stuff) vs. res cogitans (mind stuff) has long been discarded as untenable in a strictly materialistic description of natural phenomena. After all, how would something immaterial interact with something material without any exchange of energy? And how would something immaterial — whatever that means — somehow maintain the essence of who you are beyond your bodily existence? So, this kind of immaterial soul really presents problems for science, although, as pointed out here recently by Adam Frank, the scientific understanding of matter is not without its challenges. But what if we revisit the definition of soul, abandoning its canonical meaning as the "spiritual or immaterial part of a human being or animal, regarded as immortal" for something more modern? What if we consider your soul as the sum total of your neurocognitive essence, your very specific brain signature, the unique neuronal connections, synapses, and flow of neurotransmitters that makes you you? © 2017 npr

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23454 - Posted: 04.06.2017

Bruce Bower Kids can have virtual out-of-body experiences as early as age 6. Oddly enough, the ability to inhabit a virtual avatar signals a budding sense that one’s self is located in one’s own body, researchers say. Grade-schoolers were stroked on their backs with a stick while viewing virtual versions of themselves undergoing the same touch. Just after the session ended, the children often reported that they had felt like the virtual body was their actual body, says psychologist Dorothy Cowie of Durham University in England. This sense of being a self in a body, which can be virtually manipulated via sight and touch, gets stronger and more nuanced throughout childhood, the scientists report March 22 in Developmental Science. By around age 10, individuals start to report feeling the touch of a stick stroking a virtual body, denoting a growing integration of sensations with the experience of body ownership, Cowie’s team finds. A year after that, youngsters still don’t display all the elements of identifying self with body observed in adults. During virtual reality trials, only adults perceived their actual bodies as physically moving through space toward virtual bodies receiving synchronized touches. This first-of-its-kind study opens the way to studying how a sense of self develops from childhood on, says cognitive neuroscientist Olaf Blanke of the Swiss Federal Institute of Technology in Lausanne. “The new data clearly show that kids at age 6 have brain mechanisms that generate an experience of being a self located inside one’s own body.” He suspects that a beginner’s version of “my body is me” emerges by age 4. |© Society for Science & the Public 2000 - 2017.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 23446 - Posted: 04.04.2017

By Anna Buckley BBC Science Radio Unit In an infamous memo written in 1965, the philosopher Hubert Dreyfus stated that humans would always beat computers at chess because machines lacked intuition. Daniel Dennett disagreed. A few years later, Dreyfus rather embarrassingly found himself in checkmate against a computer. And in May 1997 the IBM computer Deep Blue defeated the world chess champion Garry Kasparov. Many who were unhappy with this result then claimed that chess was a boringly logical game. Computers didn't need intuition to win. The goalposts shifted. Daniel Dennett has always believed our minds are machines. For him the question is not "can computers be human?" but "are humans really that clever?" In an interview with BBC Radio 4's The Life Scientific, Dennett says there's nothing special about intuition. "Intuition is simply knowing something without knowing how you got there". Dennett blames the philosopher Rene Descartes for permanently polluting our thinking about how we think about the human mind. Descartes couldn't imagine how a machine could be capable of thinking, feeling and imagining. Such talents must be God-given. He was writing in the 17th century, when machines were made of levers and pulleys, not CPUs and RAM, so perhaps we can forgive him. Our brains are made of a hundred billion neurons. If you were to count all the neurons in your brain at a rate of one a second, it would take more than 3,000 years. Our minds are made of molecular machines, otherwise known as brain cells. And if you find this depressing then you lack imagination, says Dennett. "Do you know the power of a machine made of a trillion moving parts?", he asks. "We're not just robots", he says. "We're robots, made of robots, made of robots". Our brain cells are robots that respond to chemical signals. The motor proteins they create are robots. And so it goes on. © 2017 BBC.
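Dennett's back-of-the-envelope figure checks out. A minimal sketch of the arithmetic, assuming the round figure of 100 billion neurons quoted above:

```python
# Counting ~100 billion neurons at one per second: how long would it take?
NEURONS = 100_000_000_000            # ~10^11, the figure quoted above
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds in a year

years = NEURONS / SECONDS_PER_YEAR
print(f"{years:,.0f} years")         # a little over 3,000 years
```

So "more than 3,000 years" is right: the count comes out to roughly 3,200 years of uninterrupted counting.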

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23445 - Posted: 04.04.2017

By George Johnson Who knows what Arturo the polar bear was thinking as he paced back and forth in the dark, air-conditioned chamber behind his artificial grotto? Just down the pathway Cecilia sat quietly in her cage, contemplating whatever chimpanzees contemplate. The idea that something resembling a subjective, contemplative mind exists in other animals has become mainstream — and not just for apes. In recent years, both creatures, inhabitants of the Mendoza Zoological Park in Argentina, have been targets of an international campaign challenging the morality of holding animals captive as living museum exhibits. The issue is not so much physical abuse as mental abuse — the effect confinement has on the inhabitants’ minds. Last July, a few months after I visited the zoo, Arturo, promoted by animal rights activists as “the world’s saddest polar bear,” died of what his keepers said were complications of old age. (His mantle has now been bestowed on Pizza, a polar bear on display at a Chinese shopping mall.) But Cecilia (the “loneliest chimp,” some sympathizers have called her) has been luckier, if luck is a concept a chimpanzee can understand. In November, Judge María Alejandra Mauricio of the Third Court of Guarantees in Mendoza decreed that Cecilia is a “nonhuman person” — one that was being denied “the fundamental right” of all sentient beings “to be born, to live, grow, and die in the proper environment for their species.” Copyright 2017 Undark

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 1: Biological Psychology: Scope and Outlook
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 1: An Introduction to Brain and Behavior
Link ID: 23437 - Posted: 04.01.2017

By Christof Koch We moderns take it for granted that consciousness is intimately tied up with the brain. But this assumption did not always hold. For much of recorded history, the heart was considered the seat of reason, emotion, valor and mind. Indeed, the first step in mummification in ancient Egypt was to scoop out the brain through the nostrils and discard it, whereas the heart, the liver and other internal organs were carefully extracted and preserved. The pharaoh would then have access to everything he needed in his afterlife. Everything except for his brain! Several millennia later Aristotle, one of the greatest of all biologists, taxonomists, embryologists and the first evolutionist, had this to say: “And of course, the brain is not responsible for any of the sensations at all. The correct view [is] that the seat and source of sensation is the region of the heart.” He argued consistently that the primary function of the wet and cold brain is to cool the warm blood coming from the heart. Another set of historical texts is no more insightful on this question. The Old and the New Testaments are filled with references to the heart but entirely devoid of any mentions of the brain. Debate about what the brain does grew ever more intense over ensuing millennia. The modern embodiment of these arguments seeks to identify the precise areas within the three-pound cranial mass where consciousness arises. What follows is an attempt to size up the past and present of this transmillennial journey. The field has scored successes in delineating a brain region that keeps the neural engine humming. Switched on, you are awake and conscious. In another setting, your body is asleep, yet you still have experiences—you dream. In a third position, you are deeply asleep, effectively off-line. © 2017 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23361 - Posted: 03.16.2017

Is there life after death for our brains? It depends. Loretta Norton, a doctoral student at Western University in Canada, was curious, so she and her collaborators asked critically ill patients and their families if they could record brain activity in the half hours before and after life support was removed. They ended up recording four patients with electroencephalography, better known as EEG, which uses small electrodes attached to a person’s head to measure electrical activity in the brain. In three patients, the EEG showed brain activity stopping up to 10 minutes before the person’s heart stopped beating. But in a fourth, the EEG picked up so-called delta wave bursts up to 10 minutes after the person’s heart stopped. Delta waves are associated with deep sleep, also known as slow-wave sleep. In living people, neuroscientists consider slow-wave sleep to be a key process in consolidating memories. The study also raises questions about the exact moment when death occurs. Here’s Neuroskeptic: Another interesting finding was that the actual moment at which the heart stopped was not associated with any abrupt change in the EEG. The authors found no evidence of the large “delta blip” (the so-called “death wave”), an electrical phenomenon which has been observed in rats following decapitation. With only four patients, it’s difficult to draw any sort of broad conclusion from this study. But it does suggest that death may be a gradual process as opposed to a distinct moment in time. © 1996-2017 WGBH Educational Foundation
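For readers curious what a "delta wave" is in measurable terms: delta activity is EEG power in roughly the 0.5–4 Hz band. The sketch below, on a synthetic signal rather than the study's data or actual pipeline, shows one common way to estimate how much of a recording's spectral power falls in that band; the sampling rate and signal parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: fraction of EEG spectral power in the delta band
# (0.5-4 Hz). The signal is synthetic: a 2 Hz "delta" oscillation plus noise.
fs = 250                                  # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)              # 30 seconds of samples
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 2 * t) + 0.3 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2  # one-sided power spectrum
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)

delta = (freqs >= 0.5) & (freqs <= 4.0)   # delta-band frequency bins
delta_fraction = spectrum[delta].sum() / spectrum.sum()
print(f"fraction of spectral power in delta band: {delta_fraction:.2f}")
```

Because the synthetic oscillation sits at 2 Hz, most of the power lands in the delta band; on real recordings, clinicians typically use windowed estimates (e.g. Welch's method) rather than a single FFT over the whole trace.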

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23348 - Posted: 03.13.2017

Sara Reardon Like ivy plants that send runners out searching for something to cling to, the brain’s neurons send out shoots that connect with other neurons throughout the organ. A new digital reconstruction method shows three neurons that branch extensively throughout the brain, including one that wraps around its entire outer layer. The finding may help to explain how the brain creates consciousness. Christof Koch, president of the Allen Institute for Brain Science in Seattle, Washington, explained his group’s new technique at a 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland. He showed how the team traced three neurons from a small, thin sheet of cells called the claustrum — an area that Koch believes acts as the seat of consciousness in mice and humans. Tracing all the branches of a neuron using conventional methods is a massive task. Researchers inject individual cells with a dye, slice the brain into thin sections and then trace the dyed neuron’s path by hand. Very few have been able to trace a neuron through the entire organ. This new method is less invasive and scalable, saving time and effort. Koch and his colleagues engineered a line of mice so that a certain drug activated specific genes in claustrum neurons. When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes. That resulted in production of a green fluorescent protein that spread throughout the entire neuron. The team then took 10,000 cross-sectional images of the mouse brain and used a computer program to create a 3D reconstruction of just three glowing cells. © 2017 Macmillan Publishers Limited

Related chapters from BP7e: Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 2: Cells and Structures: The Anatomy of the Nervous System; Chapter 14: Attention and Consciousness
Link ID: 23283 - Posted: 02.25.2017

By Virginia Morell Strange as it might seem, not all animals can immediately recognize themselves in a mirror. Great apes, dolphins, Asian elephants, and Eurasian magpies can do this—as can human kids around age 2. Now, some scientists are welcoming another creature to this exclusive club: carefully trained rhesus monkeys. The findings suggest that with time and teaching, other animals can learn how mirrors work, and thus learn to recognize themselves—a key test of cognition. “It’s a really interesting paper because it shows not only what the monkeys can’t do, but what it takes for them to succeed,” says Diana Reiss, a cognitive psychologist at Hunter College in New York City, who has given the test to dolphins and Asian elephants in other experiments. The mirror self-recognition test (MSR) is revered as a means of testing self-awareness. A scientist places a colored, odorless mark on an animal where it can’t see it, usually the head or shoulder. If the animal looks in the mirror and spontaneously rubs the mark, it passes the exam. Successful species are said to understand the concept of “self” versus “other.” But some researchers wonder whether failure is simply a sign that the exam itself is inadequate, perhaps because some animals can’t understand how mirrors work. Some animals—like rhesus monkeys, dogs, and pigs—don’t recognize themselves in mirrors, but can use them to find food. That discrepancy puzzled Mu-ming Poo, a neurobiologist at the Shanghai Institutes for Biological Sciences in China, and one of the study’s authors. “There must be some transition between that simple mirror use and recognizing yourself,” he says. © 2017 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23224 - Posted: 02.14.2017

Ian Sample Science editor Doctors have used a brain-reading device to hold simple conversations with “locked-in” patients in work that promises to transform the lives of people who are too disabled to communicate. The groundbreaking technology allows the paralysed patients – who have not been able to speak for years – to answer “yes” or “no” to questions by detecting telltale patterns in their brain activity. Three women and one man, aged 24 to 76, were trained to use the system more than a year after they were diagnosed with completely locked-in syndrome, or CLIS. The condition was brought on by amyotrophic lateral sclerosis, or ALS, a progressive neurodegenerative disease which leaves people totally paralysed but still aware and able to think. “It’s the first sign that completely locked-in syndrome may be abolished forever, because with all of these patients, we can now ask them the most critical questions in life,” said Niels Birbaumer, a neuroscientist who led the research at the University of Tübingen. “This is the first time we’ve been able to establish reliable communication with these patients and I think that is important for them and their families,” he added. “I can say that after 30 years of trying to achieve this, it was one of the most satisfying moments of my life when it worked.” © 2017 Guardian News and Media Limited

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 23176 - Posted: 02.01.2017

By Drake Baer Philosophers have been arguing about the nature of will for at least 2,000 years. It’s at the core of blockbuster social-psychology findings, from delayed gratification to ego depletion to grit. But it’s only recently, thanks to the tools of brain imaging, that the act of willing is starting to be captured at a mechanistic level. A primary example is “cognitive control,” or how the brain selects goal-serving behavior from competing processes like so many unruly third-graders with their hands in the air. It’s the rare neuroscience finding that’s immediately applicable to everyday life: By knowing the way the brain is disposed to behaving or misbehaving in accordance to your goals, it’s easier to get the results you’re looking for, whether it’s avoiding the temptation of chocolate cookies or the pull of darkly ruminative thoughts. Jonathan Cohen, who runs a neuroscience lab dedicated to cognitive control at Princeton, says that it underlies just about every other flavor of cognition that’s thought to “make us human,” whether it’s language, problem solving, planning, or reasoning. “If I ask you not to scratch the mosquito bite that you have, you could comply with my request, and that’s remarkable,” he says. Every other species — ape, dog, cat, lizard — will automatically indulge in the scratching of the itch. (Why else would a pup need a post-surgery cone?) It’s plausible that a rat or monkey could be taught not to scratch an itch, he says, but that would probably take thousands of trials. But any psychologically and physically able human has the capacity to do so. “It’s a hardwired reflex that is almost certainly coded genetically,” he says. “But with three words — don’t scratch it — you can override those millions of years of evolution. That’s cognitive control.” © 2017, New York Media LLC.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23067 - Posted: 01.07.2017

Perry Link People who study other cultures sometimes note that they benefit twice: first by learning about the other culture and second by realizing that certain assumptions of their own are arbitrary. In reading Colin McGinn’s fine recent piece, “Groping Toward the Mind,” in The New York Review, I was reminded of a question I had pondered in my 2013 book Anatomy of Chinese: whether some of the struggles in Western philosophy over the concept of mind—especially over what kind of “thing” it is—might be rooted in Western language. The puzzles are less puzzling in Chinese. Indo-European languages tend to prefer nouns, even when talking about things for which verbs might seem more appropriate. The English noun inflation, for example, refers to complex processes that were not a “thing” until language made them so. Things like inflation can even become animate, as when we say “we need to combat inflation” or “inflation is killing us at the check-out counter.” Modern cognitive linguists like George Lakoff at Berkeley call inflation an “ontological metaphor.” (The inflation example is Lakoff’s.) When I studied Chinese, though, I began to notice a preference for verbs. Modern Chinese does use ontological metaphors, such as fāzhǎn (literally “emit and unfold”) to mean “development” or xìnxīn (“believe mind”) for “confidence.” But these are modern words that derive from Western languages (mostly via Japanese) and carry a Western flavor with them. “I firmly believe that…” is a natural phrase in Chinese; you can also say “I have a lot of confidence that…” but the use of a noun in such a phrase is a borrowing from the West. © 1963-2016 NYREV, Inc

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23031 - Posted: 12.28.2016

By Susana Martinez-Conde, Stephen L. Macknik We think we know what we want—but do we, really? In 2005 Lars Hall and Petter Johansson, both at Lund University in Sweden, ran an experiment that transformed how cognitive scientists think about choice. The experimental setup looked deceptively simple. A study participant and researcher faced each other across a table. The scientist offered two photographs of young women deemed equally attractive by an independent focus group. The subject then had to choose which portrait he or she found more appealing. Next, the experimenter turned both pictures over, moved them toward the subject and asked him or her to pick up the photo just chosen. Subjects complied, unaware that the researcher had just performed a swap using a sleight-of-hand technique known to conjurers as black art. Because your visual neurons are built to detect and enhance contrast, it is very hard to see black on black: a magician dressed in black against a black velvet backdrop can look like a floating head. Hall and Johansson deliberately used a black tabletop in their experiment. The first photos their subjects saw all had black backs. Behind those, however, they hid a second picture of the opposite face with a red back. When the experimenter placed the first portrait face down on the table, he pushed the second photo toward the subject. When participants picked up the red-backed photos, the black-backed ones stayed hidden against the table's black surface—that is, until the experimenter could surreptitiously sweep them into his lap. © 2016 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23021 - Posted: 12.26.2016

Amanda Gefter As we go about our daily lives, we tend to assume that our perceptions—sights, sounds, textures, tastes—are an accurate portrayal of the real world. Sure, when we stop and think about it—or when we find ourselves fooled by a perceptual illusion—we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it weren’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like. Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction. Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22937 - Posted: 12.01.2016