Chapter 14. Attention and Consciousness



Matthew Schafer and Daniela Schiller How do animals, from rats to humans, intuit shortcuts when moving from one place to another? Scientists have discovered mental maps in the brain that help animals picture the best routes from an internalized model of their environments. Physical space is not all that is tracked by the brain's mapmaking capacities. Cognitive models of the environment may be vital to mental processes, including memory, imagination, making inferences and engaging in abstract reasoning. Most intriguing is the emerging evidence that maps may be involved in tracking the dynamics of social relationships: how distant or close individuals are to one another and where they reside within group hierarchies.

We are often told that there are no shortcuts in life. But the brain—even the brain of a rat—is wired in a way that completely ignores this kind of advice. The organ, in fact, epitomizes a shortcut-finding machine. The first indication that the brain has a knack for finding alternative routes was described in 1948 by Edward Tolman of the University of California, Berkeley. Tolman performed a curious experiment in which a hungry rat ran across an unpainted circular table into a dark, narrow corridor. The rat turned left, then right, and then took another right and scurried to the far end of a well-lit narrow strip, where, finally, a cup of food awaited. There were no choices to be made. The rat had to follow the one available winding path, and so it did, time and time again, for four days.

On the fifth day, as the rat once again ran straight across the table into the corridor, it hit a wall—the path was blocked. The animal went back to the table and started looking for alternatives. Overnight, the circular table had turned into a sunburst arena. Instead of one track, there were now 18 radial paths to explore, all branching off from the sides of the table.
After venturing out a few inches on a few different paths, the rat finally chose to run all the way down path number six, the one leading directly to the food. © 2020 Scientific American

Keyword: Attention
Link ID: 26961 - Posted: 01.15.2020

By Gareth Cook One of science’s most challenging problems is a question that can be stated easily: Where does consciousness come from? In his new book Galileo’s Error: Foundations for a New Science of Consciousness, philosopher Philip Goff considers a radical perspective: What if consciousness is not something special that the brain does but is instead a quality inherent to all matter? It is a theory known as “panpsychism,” and Goff guides readers through the history of the idea, answers common objections (such as “That’s just crazy!”) and explains why he believes panpsychism represents the best path forward. He answered questions from Mind Matters editor Gareth Cook.

Can you explain, in simple terms, what you mean by panpsychism?

In our standard view of things, consciousness exists only in the brains of highly evolved organisms, and hence consciousness exists only in a tiny part of the universe and only in very recent history. According to panpsychism, in contrast, consciousness pervades the universe and is a fundamental feature of it. This doesn’t mean that literally everything is conscious. The basic commitment is that the fundamental constituents of reality—perhaps electrons and quarks—have incredibly simple forms of experience. And the very complex experience of the human or animal brain is somehow derived from the experience of the brain’s most basic parts. It might be important to clarify what I mean by “consciousness,” as that word is actually quite ambiguous. Some people use it to mean something quite sophisticated, such as self-awareness or the capacity to reflect on one’s own existence. This is something we might be reluctant to ascribe to many nonhuman animals, never mind fundamental particles. But when I use the word consciousness, I simply mean experience: pleasure, pain, visual or auditory experience, et cetera. © 2020 Scientific American

Keyword: Consciousness
Link ID: 26959 - Posted: 01.15.2020

By Joseph Stern, M.D. The bullet hole in the teenager’s forehead was so small, it belied the damage already done to his brain. The injury was fatal. We knew this the moment he arrived in the emergency room. Days later, his body was being kept alive in the intensive care unit despite an exam showing that he was brain-dead and no blood was flowing to his brain. Eventually, all his organs failed and his heart stopped beating. But the nurses continued to care for the boy and his family, knowing he was already dead but trying to help the family members with the agonizing process of accepting his death. This scenario occurs all too frequently in the neurosurgical I.C.U. Doctors often delay the withdrawal of life-sustaining supports such as ventilators and IV drips, and nurses continue these treatments — adhering to protocols, yet feeling internal conflict. A lack of consensus or communication among doctors, nurses and families often makes these situations more difficult for all involved. Brain death is stark and final. When the patient’s brain function has ceased, bodily death inevitably follows, no matter what we do. Continued interventions, painful as they may be, are necessarily of limited duration. We can keep a brain-dead patient’s body alive for a few days at the most before his heart stops for good. Trickier and much more common is the middle ground of a neurologically devastating injury without brain death. Here, decisions can be more difficult, and electing to continue or to withdraw treatment much more problematic. Inconsistent communication and support between medical staff members and families plays a role. A new field, neuropalliative care, seeks to focus “on outcomes important to patients and families” and “to guide and support patients and families through complex choices involving immense uncertainty and intensely important outcomes of mind and body.” © 2020 The New York Times Company

Keyword: Consciousness
Link ID: 26958 - Posted: 01.14.2020

By John Horgan Last month I participated in a symposium hosted by the Center for Theory & Research at Esalen, a retreat center in Big Sur, California. Fifteen men and women representing physics, psychology and other fields attempted to make sense of mystical and paranormal experiences, which are generally ignored by conventional, materialist science. The organizers invited me because of my criticism of hard-core materialism and interest in mysticism, but in a recent column I pushed back against ideas advanced at the meeting. Below other attendees push back against me. My fellow speaker Bjorn Ekeberg, whose response is below, took the photos of Esalen, including the one of me beside a stream (I'm the guy on the right). -- John Horgan

Jeffrey Kripal, philosopher of religion at Rice University and author, most recently, of The Flip: Epiphanies of Mind and the Future of Knowledge: Thank you, John, for reporting on your week with us all. As one of the moderators of “Physics, Experience and Metaphysics,” let me try to reply, briefly (and too simplistically), to your various points. First, let me begin with something that was left out of your generous summary: the key role of the imagination in so many exceptional or anomalous experiences. As you yourself pointed out with respect to your own psychedelic opening, this is no ordinary or banal “imagination.” This is a kind of “super-imagination” that projects fantastic visionary displays that none of us could possibly come up with in ordinary states: this is a flying caped Superman to our bespectacled Clark Kent.
None of this, of course, implies that anything seen in these super-imagined states is literally true (like astral travel or ghosts) or non-human, but it does tell us something important about why the near-death or psychedelic experiencers commonly report that these visionary events are “more real” than ordinary reality (which is also, please note, partially imagined, if our contemporary neuroscience of perception is correct). Put in terms of a common metaphor that goes back to Plato, the fictional movies on the screen can ALL be different and, yes, of course, humanly and historically constructed, but the Light projecting them can be quite Real and the Same. Fiction and reality are in no way exclusive of one another in these paradoxical states. © 2020 Scientific American

Keyword: Consciousness
Link ID: 26953 - Posted: 01.13.2020

By John Horgan I just spent a week at a symposium on the mind-body problem, the deepest of all mysteries. The mind-body problem--which encompasses consciousness, free will and the meaning of life--concerns who we really are. Are we matter, which just happens to give rise to mind? Or could mind be the basis of reality, as many sages have insisted? The week-long powwow, called “Physics, Experience and Metaphysics,” took place at Esalen Institute, the legendary retreat center in Big Sur, California. Fifteen men and women representing physics, psychology, philosophy, religious studies and other fields sat in a room overlooking the Pacific and swapped mind-body ideas. What made the conference unusual, at least for me, was the emphasis on what were called “exceptional experiences,” involving telepathy, telekinesis, astral projection, past-life recall and mysticism. I’ve been obsessed with mysticism since I was a kid. As defined by William James in The Varieties of Religious Experience, mystical experiences are breaches in your ordinary life, during which you encounter absolute reality--or, if you prefer, God. You believe, you know, you are seeing things the way they really are. These experiences are usually brief, lasting only minutes or hours. They can be triggered by trauma, prayer, meditation or drugs, or they may strike you out of the blue. I’ve had mild mystical intuitions while sober, for example, during a Buddhist retreat last year. But my most intense experience, by far, happened in 1981 while I was under the influence of a potent hallucinogen. I tried to come to terms with my experiences in my book Rational Mysticism, but my obsession endures. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26924 - Posted: 12.30.2019

By Sarah Bate Alice is six years old. She struggles to make friends at school and often sits alone in the playground. She loses her parents in the supermarket and approaches strangers at pickup. Once she became separated from her family on a trip to the zoo, and she now has an intense fear of crowded places. Alice has a condition called face blindness, also known as prosopagnosia. This difficulty in recognising facial identity affects 2 percent of the population. Like Alice, most of these people are born with the condition, although a small number acquire face-recognition difficulties after brain injury or illness. Unfortunately, face blindness seems largely resilient to improvement. Yet a very recent study offers more promising findings: children’s face-recognition skills substantially improved after they played a modified version of the game Guess Who? over a two-week period. In the traditional version of Guess Who?, two players see an array of 24 cartoon faces, and each selects a target. Both then take turns asking yes/no questions about the appearance of their opponent’s chosen face, typically inquiring about eye color, hairstyle and accessories such as hats or spectacles. The players use the answers to eliminate faces in the array; when only one remains, they can guess the identity of their opponent’s character. The experimental version of the game preserved this basic setup but used lifelike faces that differed only in the size or spacing of the eyes, nose or mouth. That is, the hairstyle and outer face shape were identical, and children had to read the faces solely on the basis of small differences between the inner features. This manipulation is thought to reflect a key processing strategy that underlies human face recognition: the ability to account not only for the size and shape of features but also the spacing between them. Evidence suggests this ability to process faces “holistically” is impaired in face blindness. The Guess Who? 
training program aimed to capitalize on this link. Children progressed through 10 levels of the game, with differences between the inner features becoming progressively less obvious. Children played for half an hour per day on any 10 days over a two-week period, advancing to the next level when they won the game on two consecutive rounds. © 2019 Scientific American
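The elimination logic at the heart of Guess Who? can be sketched in a few lines. This is a toy illustration of the game mechanic described above, not the study's training software; the faces and attributes below are made-up stand-ins.

```python
# Toy sketch of the Guess Who? elimination mechanic: each yes/no question
# about an attribute prunes the set of remaining candidate faces.
# Faces and attributes are hypothetical examples, not the study's stimuli.

def eliminate(candidates, attribute, answer):
    """Keep only the faces consistent with the yes/no answer about attribute."""
    return [face for face in candidates if face[attribute] == answer]

faces = [
    {"name": "A", "glasses": True,  "hat": False},
    {"name": "B", "glasses": True,  "hat": True},
    {"name": "C", "glasses": False, "hat": False},
    {"name": "D", "glasses": False, "hat": True},
]

# Suppose the opponent's secret face is B.
# "Does your face wear glasses?" -> yes: A and B survive.
remaining = eliminate(faces, "glasses", True)
# "Does your face wear a hat?" -> yes: only B survives.
remaining = eliminate(remaining, "hat", True)
print([f["name"] for f in remaining])  # ['B']
```

A well-chosen question halves the candidate set, which is why a 24-face array can be resolved in roughly five questions; the experimental version kept this structure but made the discriminating cues ever subtler.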

Keyword: Attention
Link ID: 26921 - Posted: 12.27.2019

By John Horgan Philosophy has taken a beating lately, even, or especially, from philosophers, who are compulsive critics, even, especially, of their own calling. But bright young women and men still aspire to be full-time truth-seekers in our corrupt, capitalist world. Over the past five years, I have met a bunch of impressive young philosophers while doing research on the mind-body problem. Hedda Hassel Mørch, for example. I first heard Mørch (pronounced murk) speak in 2015 at a New York University workshop on integrated information theory, and I ran into her at subsequent events at NYU and elsewhere. She makes a couple of appearances—one anonymous—in my book Mind-Body Problems. We recently crossed tracks in online chitchat about panpsychism, which proposes that consciousness is a property of all matter, not just brains. I’m a panpsychism critic, she’s a proponent. Below Mørch answers some questions.—John Horgan Horgan: Why philosophy? And especially philosophy of mind? Mørch: I remember thinking at some point that if I didn’t study philosophy I would always be curious about what philosophers know. And even if it turned out that they know nothing then at least I would know I wasn’t missing anything. One reason I was attracted to philosophy of mind in particular was that it seemed like an area where philosophy clearly has some real and useful work to do. In other areas of philosophy, it might seem that many central questions can either be deflated or taken over by science. For example, in ethics, one might think there are no moral facts and so all we can do is figure out what we mean by the words “right” and “wrong”. And in metaphysics, questions such as “is the universe infinite?” can now, at least arguably, be understood as scientific questions. 
But consciousness is a phenomenon which is obviously real, and the question of how it arises from the brain is clearly a substantive, not merely verbal question, which does not seem tractable by science as we know it. As David Chalmers says, science as we know it can only tackle the so-called easy problems of consciousness, not the hard problem. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26904 - Posted: 12.19.2019

By Gretchen Reynolds Top athletes’ brains are not as noisy as yours and mine, according to a fascinating new study of elite competitors and how they process sound. The study finds that the brains of fit, young athletes dial down extraneous noise and attend to important sounds better than those of other young people, suggesting that playing sports may change brains in ways that alter how well people sense and respond to the world around them. For most of us with normal hearing, of course, listening to and processing sounds are such automatic mental activities that we take them for granted. But “making sense of sound is actually one of the most complex jobs we ask of our brains,” says Nina Kraus, a professor and director of the Auditory Neuroscience Laboratory at Northwestern University in Evanston, Ill., who oversaw the new study. Sound processing also can be a reflection of broader brain health, she says, since it involves so many interconnected areas of the brain that must coordinate to decide whether any given sound is familiar, what it means, if the body should respond and how a particular sound fits into the broader orchestration of other noises that constantly bombard us. For some time, Dr. Kraus and her collaborators have been studying whether some people’s brains perform this intricate task more effectively than others. By attaching electrodes to people’s scalps and then playing a simple sound, usually the spoken syllable “da,” at irregular intervals, they have measured and graphed electrical brain wave activity in people’s sound-processing centers. © 2019 The New York Times Company

Keyword: Attention
Link ID: 26901 - Posted: 12.18.2019

By Virginia Morell Dogs may not be able to count to 10, but even the untrained ones have a rough sense of how many treats you put in their food bowl. That’s the finding of a new study, which reveals that our canine pals innately understand quantities in much the same way we do. The study is “compelling and exciting,” says Michael Beran, a psychologist at Georgia State University in Atlanta who was not involved in the research. “It further increases our confidence that [these representations of quantity in the brain] are ancient and widespread among species.” The ability to rapidly estimate the number of sheep in a flock or ripened fruits on a tree is known as the “approximate number system.” Previous studies have suggested monkeys, fish, bees, and dogs have this talent. But much of this research has used trained animals that receive multiple tests and rewards. That leaves open the question of whether the ability is innate in these species, as it is in humans. In the new study, Gregory Berns, a neuroscientist at Emory University in Atlanta, and colleagues recruited 11 dogs from various breeds, including border collies, pit bull mixes, and Labrador golden retriever mixes, to see whether they could find brain activity associated with a sensitivity to numbers. The team, which pioneered canine brain scanning (by getting dogs to voluntarily enter a functional magnetic resonance imaging scanner and remain motionless), had their subjects enter the scanner, rest their heads on a block, and fix their eyes on a screen at the opposite end. On the screen was an array of light gray dots on a black background whose number changed every 300 milliseconds. If dogs, like humans and nonhuman primates, have a dedicated brain region for representing quantities, their brains should show more activity there when the number of dots was dissimilar (three small dots versus 10 large ones) than when they were constant (four small dots versus four large dots). 
© 2019 American Association for the Advancement of Science.
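The dissimilar-versus-constant contrast in the study above exploits a signature of the approximate number system: two quantities are easy to tell apart when their ratio is far from 1, largely independent of absolute size. Here is a minimal sketch of that ratio rule; the threshold value is an illustrative assumption, not a figure from the paper.

```python
# Toy sketch of the ratio signature of the approximate number system:
# a pair of numerosities reads as "dissimilar" when the larger exceeds
# the smaller by some ratio. The 1.5 threshold is a made-up illustration.

def dissimilar(n1, n2, ratio_threshold=1.5):
    """Return True if the two numerosities differ by at least the ratio."""
    hi, lo = max(n1, n2), min(n1, n2)
    return hi / lo >= ratio_threshold

print(dissimilar(3, 10))  # True: 10/3 is well above threshold (contrast trial)
print(dissimilar(4, 4))   # False: identical numerosities (constant trial)
```

Note that by this rule 3 versus 10 is as discriminable as 30 versus 100; that ratio dependence, rather than sensitivity to absolute differences, is what marks the system as "approximate."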

Keyword: Attention; Evolution
Link ID: 26900 - Posted: 12.18.2019

Christof Koch A future where the thinking capabilities of computers approach our own is quickly coming into view. We feel ever more powerful machine-learning (ML) algorithms breathing down our necks. Rapid progress in coming decades will bring about machines with human-level intelligence capable of speech and reasoning, with a myriad of contributions to economics, politics and, inevitably, warcraft. The birth of true artificial intelligence will profoundly affect humankind’s future, including whether it has one. The following quotes provide a case in point: “From the time the last great artificial intelligence breakthrough was reached in the late 1940s, scientists around the world have looked for ways of harnessing this ‘artificial intelligence’ to improve technology beyond what even the most sophisticated of today’s artificial intelligence programs can achieve.” “Even now, research is ongoing to better understand what the new AI programs will be able to do, while remaining within the bounds of today’s intelligence. Most AI programs currently programmed have been limited primarily to making simple decisions or performing simple operations on relatively small amounts of data.” These two paragraphs were written by GPT-2, a language bot I tried last summer. Developed by OpenAI, a San Francisco–based institute that promotes beneficial AI, GPT-2 is an ML algorithm with a seemingly idiotic task: presented with some arbitrary starter text, it must predict the next word. The network isn’t taught to “understand” prose in any human sense. Instead, during its training phase, it adjusts the internal connections in its simulated neural networks to best anticipate the next word, the word after that, and so on. Trained on eight million Web pages, its innards contain more than a billion connections that emulate synapses, the connecting points between neurons. 
When I entered the first few sentences of the article you are reading, the algorithm spewed out two paragraphs that sounded like a freshman’s effort to recall the gist of an introductory lecture on machine learning during which she was daydreaming. The output contains all the right words and phrases—not bad, really! Primed with the same text a second time, the algorithm comes up with something different. © 2019 Scientific American
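The training objective Koch describes, predicting the next word from the words so far, can be caricatured with a toy bigram model: count which word follows which in a corpus, then predict the most frequent follower. This is a sketch of the objective only; GPT-2 itself is a large neural network, and the corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally word bigrams in a tiny corpus, then
# predict the most common follower. A caricature of GPT-2's training
# objective (predict the next word), not of its architecture.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (follows 'the' twice, vs 'mat' once)
```

Sampling from the follower counts instead of always taking the most frequent word is what lets a model produce different continuations from the same prompt, as Koch observes GPT-2 doing.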

Keyword: Consciousness; Robotics
Link ID: 26894 - Posted: 12.12.2019

By Steve Taylor In the second half of the 19th century, scientific discoveries—in particular, Darwin’s theory of evolution—meant that Christian beliefs were no longer feasible as a way of explaining the world. The authority of the Bible as an explanatory text was fatally damaged. The new findings of science could be utilized to provide an alternative conceptual system to make sense of the world—a system that insisted that nothing existed apart from basic particles of matter, and that all phenomena could be explained in terms of the organization and the interaction of these particles. One of the most fervent of late 19th century materialists, T.H. Huxley, described human beings as “conscious automata” with no free will. As he explained in 1874, “Volitions do not enter into the chain of causation…. The feeling that we call volition is not the cause of a voluntary act, but the symbol of that state of the brain which is the immediate cause." This was a very early formulation of an idea that has become commonplace amongst modern scientists and philosophers who hold similar materialist views: that free will is an illusion. According to Daniel Wegner, for instance, “The experience of willing an act arises from interpreting one’s thought as the cause of the act.” In other words, our sense of making choices or decisions is just an awareness of what the brain has already decided for us. When we become aware of the brain’s actions, we think about them and falsely conclude that our intentions have caused them. You could compare it to a king who believes he is making all his own decisions, but is constantly being manipulated by his advisors and officials, who whisper in his ear and plant ideas in his head. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26881 - Posted: 12.07.2019

By Viorica Marian Psycholinguistics is a field at the intersection of psychology and linguistics, and one of its recent discoveries is that the languages we speak influence our eye movements. For example, English speakers who hear candle often look at a candy because the two words share their first syllable. Research with speakers of different languages revealed that bilingual speakers not only look at words that share sounds in one language but also at words that share sounds across their two languages. When Russian-English bilinguals hear the English word marker, they also look at a stamp, because the Russian word for stamp is marka. Even more stunning, speakers of different languages differ in their patterns of eye movements when no language is used at all. In a simple visual search task in which people had to find a previously seen object among other objects, their eyes moved differently depending on what languages they knew. For example, when looking for a clock, English speakers also looked at a cloud. Spanish speakers, on the other hand, when looking for the same clock, looked at a present, because the Spanish names for clock and present—reloj and regalo—overlap at their onset. The story doesn’t end there. Not only do the words we hear activate other, similar-sounding words—and not only do we look at objects whose names share sounds or letters even when no language is heard—but the translations of those names in other languages become activated as well in speakers of more than one language. For example, when Spanish-English bilinguals hear the word duck in English, they also look at a shovel, because the translations of duck and shovel—pato and pala, respectively—overlap in Spanish. © 2019 Scientific American
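The onset-overlap competition described above can be sketched as a simple prefix match over two lexicons: given a heard word, any entry in either language that shares its first sounds becomes a competitor. The word lists and the three-letter onset are illustrative assumptions, not the study's stimuli or a model of real speech perception.

```python
# Toy sketch of cross-language onset competition: when a word is heard,
# find lexicon entries (in either language) whose names share its onset.
# Tiny invented lexicons; real studies match spoken onsets, not spellings.

lexicon = {
    "english": ["marker", "stamp", "candle", "candy", "duck", "shovel"],
    "russian": ["marka", "svecha", "utka"],   # marka = stamp
}

def onset_competitors(heard, n=3):
    """Return (language, word) pairs sharing the first n letters with `heard`."""
    onset = heard[:n]
    return [(lang, w) for lang, words in lexicon.items()
            for w in words if w.startswith(onset) and w != heard]

print(onset_competitors("marker"))  # [('russian', 'marka')]
print(onset_competitors("candle"))  # [('english', 'candy')]
```

For a monolingual English listener only the English column exists, so hearing marker raises no competitor; for the bilingual, marka intrudes, which is the pattern the eye-tracking data reveal.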

Keyword: Language; Attention
Link ID: 26875 - Posted: 12.06.2019

By Gaby Maimon What is the biological basis of thought? How do brains store memories? Questions like these have intrigued humanity for millennia, but the answers still remain largely elusive. You might think that the humble fruit fly, Drosophila melanogaster, has little to add here, but since the 1970s, scientists have actually been studying the neural basis of higher brain functions, like memory, in these insects. Classic work––performed by several labs, including those of Martin Heisenberg and Seymour Benzer––focused on studying the behavior of wild-type and genetically mutated Drosophila in simple learning and memory tasks, ultimately leading to the discovery of several key molecules and other underlying mechanisms. However, because one could not peer into the brain of behaving flies to eavesdrop on neurons in action, this field, in its original form, could only go so far in helping to explain the mechanisms of cognition. In 2010, when I was a postdoctoral researcher in the lab of Michael Dickinson, we developed the first method for measuring electrical activity of neurons in behaving Drosophila. A similar method was developed in parallel by Johannes Seelig and Vivek Jayaraman. In these approaches, one glues a fly to a custom plate that allows one to carefully remove the cuticle over the brain and measure neural activity via electrodes or fluorescence microscopy. Even though the fly is glued in place, the animal can still flap her wings in tethered flight or walk on an air-cushioned ball, which acts like a spherical treadmill beneath her legs. These technical achievements attracted the attention of the Drosophila neurobiology community, but should anyone really care about seeing a fly brain in action beyond this small, venerable, group of arthropod-loving nerds (of which I'm honored to be a member)? In other words, will these methods help to reveal anything of general relevance beyond flies? Increasingly, the answer looks to be yes. © 2019 Scientific American

Keyword: Learning & Memory
Link ID: 26871 - Posted: 12.04.2019

By John Williams Let’s get right to the overwhelming question: What does it mean to experience something? Chances are that when you recount a great meal to a friend, you don’t say that it really lit up your nucleus of the solitary tract. But chances are equally good, if you adhere to conventional scientific and philosophical wisdom, that you believe the electrical activity in that part of your brain is what actually accounts for the sensation when you dine. In “Out of My Head,” the prolific British writer Tim Parks adds to the very long shelf of books about what he calls the “deep puzzle of minute-by-minute perception.” The vast majority of us — and this is undoubtedly for the best, life being hard enough — don’t get tripped up by the ontological mysteries of our minute-to-minute perceiving. We just perceive. But partly because it remains a stubborn philosophical problem, rather than a neatly explained scientific one, consciousness — our awareness, our self-awareness, our self — makes for an endlessly fascinating subject. And Parks, though not a scientist or professional philosopher, proves to be a companionable guide, even if his book is more an appetizer than a main course. He wants “simply” to ask, he writes, whether “we ordinary folks” can say anything useful about consciousness. But he also wants to poke skeptically at the “now standard view of conscious experience as something locked away in the head,” the “dominant internalist model which assumes the brain is some kind of supercomputer.” “Out of My Head” was inspired, in large part, by the theories of Riccardo Manzotti, a philosopher, roboticist and friend of Parks with whom the author has had “one of the most intense and extended conversations of my life.” (A good chunk of that conversation appeared as a 15-part dialogue on the website of The New York Review of Books, whose publishing arm has released “Out of My Head.”) © 2019 The New York Times Company

Keyword: Consciousness
Link ID: 26842 - Posted: 11.22.2019

By Gabriel Finkelstein Unlike Charles Darwin and Claude Bernard, who endure as heroes in England and France, Emil du Bois-Reymond is generally forgotten in Germany — no streets bear his name, no stamps portray his image, no celebrations are held in his honor, and no collections of his essays remain in print. Most Germans have never heard of him, and if they have, they generally assume that he was Swiss. But it wasn’t always this way. Du Bois-Reymond was once lauded as “the foremost naturalist of Europe,” “the last of the encyclopedists,” and “one of the greatest scientists Germany ever produced.” Contemporaries celebrated him for his research in neuroscience and his addresses on science and culture; in fact, the poet Jules Laforgue reported seeing his picture hanging for sale in German shop windows alongside those of the Prussian royal family. Those familiar with du Bois-Reymond generally recall his advocacy of understanding biology in terms of chemistry and physics, but during his lifetime he earned recognition for a host of other achievements. He pioneered the use of instruments in neuroscience, discovered the electrical transmission of nerve signals, linked structure to function in neural tissue, and posited the improvement of neural connections with use. He served as a professor, as dean, and as rector at the University of Berlin, directed the first institute of physiology in Prussia, was secretary of the Prussian Academy of Sciences, established the first society of physics in Germany, helped found the Berlin Society of Anthropology, oversaw the Berlin Physiological Society, edited the leading German journal of physiology, supervised dozens of researchers, and trained an army of physicians. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26811 - Posted: 11.11.2019

By Dan Falk At the moment, you’re reading these words and, presumably, thinking about what the words and sentences mean. Or perhaps your mind has wandered, and you’re thinking about dinner, or looking forward to bingeing the latest season of “The Good Place.” But you’re definitely experiencing something. How is that possible? Every part of you, including your brain, is made of atoms, and each atom is as lifeless as the next. Your atoms certainly don’t know or feel or experience anything, and yet you — a conglomeration of such atoms — have a rich mental life in which a parade of experiences unfolds one after another. The puzzle of consciousness has, of course, occupied the greatest minds for millennia. The philosopher David Chalmers has called the central mystery the “hard problem” of consciousness. Why, he asks, does looking at a red apple produce the experience of seeing red? And more generally: Why do certain arrangements of matter experience anything? Anyone who has followed the recent debates over the nature of consciousness will have been struck by the sheer variety of explanations on offer. Many prominent neuroscientists, cognitive scientists, philosophers, and physicists have put forward “solutions” to the puzzle — all of them wildly different from, and frequently contradicting, each other. “‘You,’ your joys and your sorrows, your memories and ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.”

Keyword: Consciousness
Link ID: 26802 - Posted: 11.08.2019

By Erica Tennenhouse

Live in the urban jungle long enough, and you might start to see things—in particular, humanmade objects like cars and furniture. That’s what researchers found when they melded photos of artificial items with images of animals and asked 20 volunteers what they saw. The people, all of whom lived in cities, overwhelmingly noticed the manufactured objects, whereas the animals faded into the background.

To find out whether built environments can alter people’s perception, the researchers gathered hundreds of photos of animals and of artificial objects such as bicycles, laptops, and benches. Then they superimposed them to create hybrid images—like a horse combined with a table (above, top left) or a rhinoceros combined with a car (above, bottom right). As volunteers watched the hybrids flash by on a screen, they categorized each as a small animal, a big animal, a small humanmade object, or a big humanmade object.

Overall, volunteers showed a clear bias toward the humanmade objects, especially when they were big, the researchers report today in the Proceedings of the Royal Society B. The bias itself was a measure of how much the researchers had to visually “amp up” an image before participants saw it instead of its partner image. That bias suggests people’s perceptions are fundamentally altered by their environments, the researchers say. Humans often rely on past experiences to process new information—the classic example is mistaking a garden hose for a snake. But in this case, living in industrialized nations—where you are exposed to fewer “natural” objects—could change the way you view the world. © 2019 American Association for the Advancement of Science
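The “amp up” measure can be pictured as alpha-blending the two source photos and asking how far the blend weight must move from an even split before a viewer reports one category instead of the other. A minimal NumPy sketch under that assumption (the blending scheme and the bias definition here are illustrative, not the authors’ actual psychophysical procedure):

```python
import numpy as np

def hybrid(animal: np.ndarray, obj: np.ndarray, w: float) -> np.ndarray:
    """Alpha-blend an animal image with a humanmade-object image.

    w is the weight on the animal image; 1 - w goes to the object.
    """
    return w * animal + (1.0 - w) * obj

def object_bias(threshold_w: float) -> float:
    """Toy bias score: how far above an even 50/50 split the animal's
    weight must be pushed before the viewer reports seeing the animal."""
    return threshold_w - 0.5

# Two toy 2x2 grayscale "images" (pixel values on a 0-255 scale).
animal = np.full((2, 2), 200.0)
obj = np.full((2, 2), 100.0)

even_mix = hybrid(animal, obj, 0.5)  # 50/50 blend: every pixel is 150.0
bias = object_bias(0.7)              # animal needed 70% of the blend -> bias 0.2
```

On this toy definition, a positive bias means the animal had to be over-weighted to be seen at all, i.e., the humanmade object dominated perception at an even mix.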

Keyword: Vision; Attention
Link ID: 26793 - Posted: 11.06.2019

By Christof Koch

“And death shall have no dominion” —Dylan Thomas, 1933

You will die, sooner or later. We all will. For everything that has a beginning has an end, an ineluctable consequence of the second law of thermodynamics. Few of us like to think about this troubling fact. But once birthed, the thought of oblivion can’t be completely erased. It lurks in the unconscious shadows, ready to burst forth.

In my case, it was only as a mature man that I became fully mortal. I had wasted an entire evening playing an addictive, first-person shooter video game—running through subterranean halls, flooded corridors, nightmarishly turning tunnels, and empty plazas under a foreign sun, firing my weapons at hordes of aliens relentlessly pursuing me. I went to bed, easily falling asleep, but awoke abruptly a few hours later. Abstract knowledge had turned to felt reality—I was going to die! Not right there and then, but eventually.

Evolution equipped our species with powerful defense mechanisms to deal with this foreknowledge—in particular, psychological suppression and religion. The former prevents us from consciously acknowledging or dwelling on such uncomfortable truths, while the latter reassures us by promising never-ending life in a Christian heaven, an eternal cycle of Buddhist reincarnations or an uploading of our mind to the Cloud, the 21st-century equivalent of rapture for nerds.

Death has no such dominion over nonhuman animals. Although they can grieve for dead offspring and companions, there is no credible evidence that apes, dogs, crows and bees have minds sufficiently self-aware to be troubled by the insight that one day they will be no more. Thus, these defense mechanisms must have arisen in recent hominin evolution, in less than 10 million years. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26780 - Posted: 11.01.2019

By Zeynep Tufekci

More than a billion people around the world have smartphones, almost all of which come with some kind of navigation app such as Google or Apple Maps or Waze. This raises the age-old question we encounter with any technology: What skills are we losing? But also, crucially: What capabilities are we gaining?

Talking with people who are good at finding their way around or adept at using paper maps, I often hear a lot of frustration with digital maps. North/south orientation gets messed up, and you can see only a small section at a time. And unlike with paper maps, one loses a lot of detail after zooming out. I can see all that and sympathize that it may be quite frustrating for the already skilled to be confined to a small phone screen. (Although map apps aren’t really meant to be replacements for paper maps, which appeal to our eyes, but are actually designed to be heard: “Turn left in 200 feet. Your destination will be on the right.”)

But consider what digital navigation aids have meant for someone like me. Despite being a frequent traveler, I’m so terrible at finding my way that I still use Google Maps almost every day in the small town where I have lived for many years. What looks like an inferior product to some has been a significant expansion of my own capabilities. I’d even call it life-changing.

Part of the problem is that reading paper maps requires a specific skill set. There is nothing natural about them. In many developed nations, including the U.S., one expects street names and house numbers to be meaningful referents, and instructions such as “go north for three blocks and then west” make sense to those familiar with these conventions. In Istanbul, in contrast, where I grew up, none of those hold true. For one thing, the locals rarely use street names. Why bother when a government or a military coup might change them—again? House and apartment numbers often aren’t sequential either, because after buildings 1, 2 and 3 were built, someone squeezed in another house between 1 and 2, and now that’s 4. But then 5 will maybe get built after 3, and 6 will be between 2 and 3. Good luck with 1, 4, 2, 6, 5, and so on, sometimes into the hundreds, in jumbled order. Besides, the city is full of winding, ancient alleys that intersect with newer avenues at many angles. © 2019 Scientific American

Keyword: Attention; Learning & Memory
Link ID: 26768 - Posted: 10.30.2019

By Emily Anthes

The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, launched by the U.S. National Institutes of Health (NIH) in 2013, has a lofty goal: to unravel the cellular basis of cognition and behavior. Since the initiative’s launch, the NIH has doled out about $1 billion to researchers who are developing tools and technologies to map, measure and observe the brain’s neural circuits. Along the way, the agency has also tried to explore the ethical implications of this research. Khara Ramos, who directs the neuroethics program at the NIH’s National Institute of Neurological Disorders and Stroke, described the emerging field of neuroethics today at the 2019 Society for Neuroscience annual meeting in Chicago, Illinois.

Spectrum: Was discussion about ethics part of the BRAIN Initiative from the beginning?

Khara Ramos: We knew that we needed to do something with neuroethics, but it took time for us to figure out what exactly, in part because neuroethics is a relatively new field. Bioethics is a broad field that covers all aspects of biomedicine, but there isn’t specialization of bioethics in kidney research or pulmonary research the way there is in neuroscience research, and that’s really because the brain is so intimately connected with who we are. Neuroscience research raises these unique ethical questions, such as: How might new neurotechnologies alter fundamental notions of agency or autonomy or identity?

We’re starting to focus on data sharing and privacy from a philosophical, conceptual perspective: Is there something unique about brain data that is different from, for instance, genetic data? How do researchers themselves feel about data sharing and privacy? And how does the public view it? For instance, is my social security number more or less sensitive than the kinds of neural data that somebody might be able to get if I were participating in a clinical trial? © 2019 Simons Foundation

Keyword: Autism; Attention
Link ID: 26725 - Posted: 10.21.2019