Chapter 18. Attention and Higher Cognition


By Joshua Tan Recently, a blog post by Tam Hunt, published at Scientific American, provocatively declared that “The Hippies Were Right: It’s All About Vibrations, Man.” Hunt’s claim is that consciousness emerges from resonant effects found in nature at a wide range of scales. This is reminiscent of arguments that have been made since the development of the science of thermodynamics more than two hundred years ago. In brief, very intriguing and surprising characteristics of complex systems have been discovered and rigorously defined with such tantalizing terms as “emergence,” “resonance” and “self-organization.” These kinds of features of the natural world are so amazing—even uncanny—that they have inspired wild speculation as to their possible implications. Are there deep connections between these phenomena and the more mysterious aspects of our existence such as life, consciousness, and intelligence? Might they even provide us with insight into possible answers to expansively fundamental questions like why there is something rather than nothing? Speculating on such mysteries is an understandable pastime. Diverse thinkers from physicists to philosophers, psychologists to theologians have written libraries’ worth of treatises attempting to shed light on the possible answers to these deep questions. Along the way, ideas inspired by scientific results have had varying degrees of success. Concepts such as animal magnetism, vitalism, synchronicity, and quantum mysticism all had their day in the sun, only to end up debunked or dismissed by skeptics and scientists who either pointed out a lack of empirical data supporting the claims or showed that the ideas were incompatible with what we have discovered about the natural world. © 2018 Scientific American

Keyword: Consciousness
Link ID: 25779 - Posted: 12.12.2018

Jef Akst Alan McElligott, an animal behavior researcher at the University of Roehampton in the UK, continues to be impressed by goats. Since he started studying the charismatic ungulates a decade ago, he’s found that mothers remember the calls of their kids several months after they’ve been separated, and that goats can solve a two-step puzzle box akin to those typically used in primate research—and remember how to do it a year later. Now his team has found that goats at the Buttercups Sanctuary in Kent, UK, can distinguish between happy and angry human expressions. “Given some of the other things that we’ve found out about goats, I guess we shouldn’t really be that surprised,” says McElligott, who’s hoping to improve welfare guidelines for the animals by revealing their smart and social nature. McElligott’s experiment was simple. Working with 20 goats at the sanctuary, he and his colleagues presented each with two black-and-white images—one of a person smiling, and the other of the same person making an angry expression—then sat back and watched what the animal did. “If the goats ignored the photographs, for example, or walked up to the photographs and ripped them off metal panels and chewed on them, would I have been shocked? Possibly not,” says McElligott. “But . . . the goats did seem to take the time to have a look at these photographs and actually study them, believe it or not.” And based on the time they spent interacting with each image, the goats seemed to prefer the happy snapshot (R Soc Open Sci, 5:180491, 2018). © 1986 - 2018 The Scientist

Keyword: Emotions; Attention
Link ID: 25777 - Posted: 12.12.2018

Alison Abbott Doris Tsao launched her career deciphering faces — but for a few weeks in September, she struggled to control the expression on her own. Tsao had just won a MacArthur Foundation ‘genius’ award, an honour that comes with more than half a million dollars to use however the recipient wants. But she was sworn to secrecy — even when the foundation sent a film crew to her laboratory at the California Institute of Technology (Caltech) in Pasadena. Thrilled and embarrassed at the same time, she had to invent an explanation, all while keeping her face in check. It was her work on faces that won Tsao awards and acclaim. Last year, she cracked the code that the brain uses to recognize faces from a multitude of minuscule differences in shapes, distances between features, tones and textures. The simplicity of the coding surprised and impressed the neuroscience community. “Her work has been transformative,” says Tom Mrsic-Flogel, director of the Sainsbury Wellcome Centre for Neural Circuits and Behaviour at University College London. But Tsao doesn’t want to be remembered just as the scientist who discovered the face code. It is a means to an end, she says, a good tool for approaching the question that really interests her: how does the brain build up a complete, coherent model of the world by filling in gaps in perception? “This idea has an elegant mathematical formulation,” she says, but it has been notoriously hard to put to the test. Tsao now has an idea of how to begin. © 2018 Springer Nature Publishing AG

Keyword: Attention
Link ID: 25773 - Posted: 12.11.2018

By Tam Hunt Why are some things conscious and others apparently not? Is a rat conscious? A bat? A cockroach? A bacterium? An electron? These questions are all aspects of the ancient “mind-body problem,” which has resisted a generally satisfying conclusion for thousands of years. The mind-body problem enjoyed a major rebranding over the last two decades and is generally known now as the “hard problem” of consciousness (usually capitalized nowadays), after the New York University philosopher David Chalmers coined this term in a now classic 1995 paper and his 1996 book The Conscious Mind: In Search of a Fundamental Theory. Fast forward to the present era and we can ask ourselves now: Did the hippies actually solve this problem? My colleague Jonathan Schooler (University of California, Santa Barbara) and I think they effectively did, with the radical intuition that it’s all about vibrations … man. Over the past decade, we have developed a “resonance theory of consciousness” that suggests that resonance—another word for synchronized vibrations—is at the heart of not only human consciousness but of physical reality more generally. So how were the hippies right? Well, we agree that vibrations, resonance, are the key mechanism behind human consciousness, as well as animal consciousness more generally. And, as I’ll discuss below, that they are the basic mechanism for all physical interactions to occur. © 2018 Scientific American

Keyword: Consciousness
Link ID: 25751 - Posted: 12.06.2018

By Stephen L. Macknik Sensory information flowing into our brains is inherently ambiguous. We perceive 3D despite having only 2D images on our retinas. It’s an illusion. A sunburn on our face can feel weirdly cool. Illusion. A little perfume smells good but too much is obnoxious. Also an illusion. The brain expends a great deal of effort to disambiguate the meaning of each incoming signal—often using context as a clue—but the neural mechanisms of these abilities remain mysterious. Neuroscientists are a little closer to understanding how to study these mechanisms, thanks to a new study by Kevin Ortego, Michael Pitts, & Enriqueta Canseco-Gonzalez from Pitts's lab at Reed College, presented at the 2018 Society for Neuroscience meeting, on the brain's responses to both visual and language illusions. Illusions are experiences in which the physical reality is different from our perception or expectation. Ambiguous stimuli are important tools to science because the physical reality can legitimately be interpreted in more than one way. Take the classic rabbit-duck illusion, published by the Fliegende Blätter magazine, in Munich, at the end of the 19th century, in which the image can be seen as either a duck or a rabbit. Bistable illusions like these can flip back and forth between competing interpretations, but one cannot see both percepts at the same time. Recent examples of ambiguous illusions show that numerous interpretations are possible. The first-place winner of this year's Best Illusion of the Year Contest, created by Kokichi Sugihara, shows three different ways of perceiving the same object, depending on your specific vantage point. © 2018 Scientific American

Keyword: Language; Attention
Link ID: 25744 - Posted: 12.03.2018

Ashley P. Taylor Electrically stimulating the lateral orbitofrontal cortex, a brain area behind the eyes, improves the moods of people with depression, according to a study published yesterday (November 29) in Current Biology. The technique used by the researchers, led by Edward Chang of the University of California, San Francisco, is called deep brain stimulation (DBS), in which surgically implanted electrodes send electrical pulses to particular areas of the brain. The approach is already in use as a treatment for movement disorders such as Parkinson’s disease and tremors. But results on its ability to treat depression have been mixed, as NPR reports. The researchers worked with 25 epilepsy patients who already had electrodes implanted into their brains as part of their treatments. Many of the study participants also had signs of depression as evaluated by mood tests the researchers administered, Science News reports. The investigators tried stimulating many areas of the brain, and they found that jolts to the lateral orbitofrontal cortex made patients with signs of depression—but not others who didn’t have symptoms—feel better right away. “Wow, I feel a lot better. . . . What did you guys do?” study coauthor Kristin Sellers recalls a patient exclaiming after receiving the stimulation, she tells NPR. “Only the people who had symptoms [of depression] to start with improved their mood, which suggests that perhaps the effect of what we’re doing is to normalize activity that starts off abnormal,” adds another coauthor, Vikram Rao.

Keyword: Depression
Link ID: 25742 - Posted: 12.03.2018

Aimee Cunningham Children who turn 5 just before starting kindergarten are much more likely to be diagnosed with attention-deficit/hyperactivity disorder than their oldest classmates. The finding bolsters concerns that the common neurodevelopmental disorder may be overdiagnosed. “We think ... it’s the relative age and the relative immaturity of the August-born children in any given class that increases the likelihood that they’re diagnosed as having ADHD,” says Anupam Jena, a physician and economist at Harvard Medical School. Jena and his colleagues analyzed insurance claims data for more than 407,000 children born from 2007 through 2009. In states that require that kids be 5 years old by September 1 to begin kindergarten, children born in August were 34 percent more likely to be diagnosed with ADHD than those born nearly a year earlier in September — just after the cutoff date. For August kids, 85.1 per 10,000 children were diagnosed with ADHD, compared with 63.6 per 10,000 for the September kids, the researchers report in the Nov. 29 New England Journal of Medicine. People with ADHD typically have symptoms of inattention, hyperactivity and impulsiveness that are severe or frequent enough to interfere with their daily lives. In 2011, 11 percent of U.S. children aged 4 to 17 were reported to have an ADHD diagnosis, a rate higher than in most other countries. Differences between states also suggest overdiagnosis, says Jena, “unless there’s something so different about kids across different states.” For example, while nearly 19 percent of 4- to 17-year-olds reportedly were diagnosed in Kentucky, the rate was about 12 percent in neighboring West Virginia. © Society for Science & the Public 2000 - 2018

Keyword: ADHD
Link ID: 25728 - Posted: 11.29.2018

Abby Olena In 2005, a 23-year-old woman in the UK was involved in a traffic accident that left her with a severe brain injury. Five months after the event, she slept and woke and could open her eyes, but she didn’t always respond to smells or touch or track things visually. In other words, she fit the clinical criteria for being in a vegetative state. In a study published in Science in 2006, a team of researchers tested her ability to imagine herself playing tennis or walking through her house while they observed activity in her brain using functional magnetic resonance imaging (fMRI). Remarkably, her brain responded with activity in the same areas as the brains of healthy people asked to do the same, indicating that she was capable of complex cognition, despite her apparent unresponsiveness at the bedside. The findings indicated that this patient and others like her may have hidden cognitive abilities that, if found, could potentially help them communicate or improve their prognosis. Since then, researchers and clinicians around the world have used task-based neuroimaging to determine that other patients who appear unresponsive or minimally conscious can do challenging cognitive tasks. The problem is that the tests to uncover hidden consciousness can be complex to analyze, expensive to perform, and hard for all patients to access. “You would like to know if people who look like they’re unconscious are actually following what’s going on and able to carry out cognitive work, and we don’t have an efficient way of sorting those patients,” says Nicholas Schiff, a neuroscientist at Weill Cornell Medical College in New York City. © 1986 - 2018 The Scientist

Keyword: Consciousness; Brain imaging
Link ID: 25725 - Posted: 11.27.2018

By Bahar Gholipour When Ryan Darby was a neurology resident, he was familiar with something called alien limb syndrome, but that did not make his patients’ behavior any less puzzling. Individuals with this condition report that one of their extremities—often a hand—seems to act of its own volition. It might touch and grab things or even unbutton a shirt the other hand is buttoning up. Patients are unable to control the rebellious hand short of grabbing or even sitting on it. They seem to have lost agency—that unmistakable feeling of ownership of one’s actions and an important component of free will. “It was one of those symptoms that really questioned the mind and how it brings about some of those bigger concepts,” says Darby, now an assistant professor of neurology at Vanderbilt University. Alien limb syndrome can arise after a stroke causes a lesion in the brain. But even though patients who have it report the same eccentric symptoms, their lesions do not occur in the same place. “Could the reason be that the lesions were just in different parts of the same brain network?” Darby wondered. To find out, he and his colleagues compiled findings from brain-imaging studies of people with the syndrome. They also looked into akinetic mutism—a condition that leaves patients with no desire to move or speak, despite having no physical impediment. Using a new technique, the researchers compared lesion locations against a template of brain networks—that is, groups of regions that often activate in tandem. © 2018 Scientific American

Keyword: Consciousness
Link ID: 25724 - Posted: 11.27.2018

By Scott Barry Kaufman "We experience ourselves, our thoughts and feelings as something separate from the rest. A kind of optical delusion of consciousness." -- Albert Einstein "In our quest for happiness and the avoidance of suffering, we are all fundamentally the same, and therefore equal. Despite the characteristics that differentiate us - race, language, religion, gender, wealth and many others - we are all equal in terms of our basic humanity." -- Dalai Lama (on Twitter) The belief that everything in the universe is part of the same fundamental whole exists throughout many cultures and philosophical, religious, spiritual, and scientific traditions, as captured by the phrase 'all that is.' The Nobel winner Erwin Schrödinger once observed that quantum physics is compatible with the notion that there is indeed a basic oneness of the universe. Therefore, despite it seeming as though the world is full of many divisions, many people throughout the course of human history and even today truly believe that individual things are part of some fundamental entity. Despite the prevalence of this belief, there has been a lack of a well-validated measure in psychology that captures it. While certain measures of spirituality do exist, the belief-in-oneness questions are typically combined with other questions that assess other aspects of spirituality, such as meaning, purpose, sacredness, or having a relationship with God. What happens when we secularize the belief in oneness? © 2018 Scientific American

Keyword: Consciousness
Link ID: 25701 - Posted: 11.19.2018

By John Horgan Don't Make Me One with Everything The mystical doctrine of oneness is metaphysically disturbing, and it can foster authoritarian behavior and encourage an unhealthy detachment. A recurring claim of sages east and west is that reality, which seems to consist of many things that keep changing, is actually one thing that never changes. This is the mystical doctrine of oneness. Enlightenment supposedly consists of realizing your oneness with reality, hence the old joke: What did the Buddhist say to the hotdog vendor? Make me one with everything. A column by my fellow Scientific American blogger, psychologist Scott Barry Kaufman, touts the oneness doctrine. “The belief that everything in the universe is part of the same fundamental whole exists throughout many cultures and philosophical, religious, spiritual, and scientific traditions,” Kaufman writes. His column considers, as his headline puts it, “What Would Happen If Everyone Truly Believed Everything Is One?” Kaufman notes that psychologists Kate Diebels and Mark Leary have explored this question. They define oneness, among other ways, as the idea that “beneath surface appearances, everything is one,” and “the separation among individual things is an illusion.” Diebels and Leary found that 20 percent of their respondents have thought about oneness “often or many times,” and many report having spiritual experiences related to oneness. Diebels and Leary state that “a belief in oneness was related to values indicating a universal concern for the welfare of other people, as well as greater compassion for other people.” Believers “have a more inclusive identity that reflects their sense of connection with other people, nonhuman animals, and aspects of nature.” © 2018 Scientific American

Keyword: Consciousness
Link ID: 25700 - Posted: 11.19.2018

By Sam Roberts Herbert Fingarette, a contrarian philosopher who, while plumbing the perplexities of personal responsibility, defined heavy drinking as willful behavior rather than as a potential disease, died on Nov. 2 at his home in Berkeley, Calif. He was 97. His daughter, Ann Fingarette Hasse, said the cause was heart failure. Professor Fingarette challenged the theory that alcoholism is a progressive disease that can be dealt with only by abstinence, and he concluded that treatment could include moderated drinking. Many academics and medical professionals denounced those views as heresy. But they were invoked by the United States Supreme Court in the 1988 decision Traynor v. Turnage. In that ruling, the court affirmed the government’s denial of education benefits to two veterans who had argued that they missed filing deadlines for those benefits because of their addiction as recovering alcoholics. Their claim that alcoholism is a disease beyond a drinker’s control was endorsed by the American Medical Association and the American Psychiatric Association. But it was rejected by the court, which ruled that certain types of alcohol abuse resulted from deliberate misconduct. Much of Professor Fingarette’s research and writing concerned accountability. That included what he called the self-deception, validated by science, that alcoholics cannot help themselves. In “Heavy Drinking: The Myth of Alcoholism as a Disease” (1988), Professor Fingarette all but accused the treatment industry of conspiring to profit from the conventional theory that alcoholism is a disease. He maintained that heavy use of alcohol is a “way of life,” that many heavy drinkers can choose to reduce their drinking to moderate levels, and that most definitions of the word “alcoholic” are phony. © 2018 The New York Times Company

Keyword: Drug Abuse; Attention
Link ID: 25688 - Posted: 11.16.2018

By Alice Robb One muggy Saturday last summer, I went on a date with a man who seemed entirely fine. We drank two beers and went for a walk, and he explained why he liked certain buildings that we passed. We kissed, and his breath tasted like cigarettes. We parted ways, and I couldn’t muster the energy to answer his emoji-laden follow-up texts about my weekend activities. The date was mediocre at best — but in the days that followed, I second-guessed my decision not to see him again. Maybe I had written him off too soon; maybe I should have given things a chance to develop. After all, he had some good qualities. He was handsome, tall, employed — and not, refreshingly, as a writer. It was only after a painfully on-the-nose dream a few weeks later that I stopped doubting my intuition. In the dream, I had agreed to a second date, and I had brought along two friends to observe our interactions and help me assess him. At the end of the group outing, my friends pulled me away and offered a unanimous decision: He wasn’t for me. I had made the right call. By the time we reach adulthood, most of us have accepted the conventional wisdom: We shouldn’t dwell on our dreams. Even though research suggests that REM sleep — when most dreaming takes place — is crucial for mental and physical health, we think of dreams as silly little stories, the dandruff of the brain. We’re taught that talking about our dreams is juvenile, self-indulgent, and that we should shake off their traces and get on with our day. It doesn’t have to be that way. For the past two years, a group of my friends has been gathering every month to talk about dreams; we do it for fun. Even if we resist, dreams have a way of sneaking into conscious territory and influencing our daytime mood. In three years of reporting on the science behind dreams, I’ve heard strangers describe flying, tooth loss, reunions with the dead — all the classics. I’ve seen that a dream can be a fascinating window into another person’s private life, and I’ve learned that paying attention to dreams can help us understand ourselves. © 2018 The New York Times Company

Keyword: Sleep; Attention
Link ID: 25672 - Posted: 11.12.2018

Tam Hunt Why is my awareness here, while yours is over there? Why is the universe split in two for each of us, into a subject and an infinity of objects? How is each of us our own center of experience, receiving information about the rest of the world out there? Why are some things conscious and others apparently not? Is a rat conscious? A gnat? A bacterium? These questions are all aspects of the ancient “mind-body problem,” which asks, essentially: What is the relationship between mind and matter? It’s resisted a generally satisfying conclusion for thousands of years. The mind-body problem enjoyed a major rebranding over the last two decades. Now it’s generally known as the “hard problem” of consciousness, after philosopher David Chalmers coined this term in a now classic paper and further explored it in his 1996 book, “The Conscious Mind: In Search of a Fundamental Theory.” Chalmers thought the mind-body problem should be called “hard” in comparison to what, with tongue in cheek, he called the “easy” problems of neuroscience: How do neurons and the brain work at the physical level? Of course they’re not actually easy at all. But his point was that they’re relatively easy compared to the truly difficult problem of explaining how consciousness relates to matter. Over the last decade, my colleague, University of California, Santa Barbara psychology professor Jonathan Schooler and I have developed what we call a “resonance theory of consciousness.” We suggest that resonance – another word for synchronized vibrations – is at the heart of not only human consciousness but also animal consciousness and of physical reality more generally. It sounds like something the hippies might have dreamed up – it’s all vibrations, man! – but stick with me. How do things in nature – like flashing fireflies – spontaneously synchronize? © 2010–2018, The Conversation US, Inc.

Keyword: Consciousness
Link ID: 25665 - Posted: 11.10.2018

By Sam Rose One of neuroscience’s foundational experiments wasn’t performed in a Nobel laureate’s lab, but occurred in a railyard in 1848 when an accidental explosion sent a tamping iron through 25-year-old Phineas Gage’s forehead. Gage survived, but those studying his history detailed distinct personality changes resulting from the accident. He went from even-tempered to impulsive and profane. The case is likely the earliest—and most famous—instance of using a “lesion” to link a damaged brain region to its function. In the ensuing decades, to study the brain was to study lesions. Lesion cases fed most of the era’s knowledge of the brain. One might think that modern neuroscience, with its immense toolkit of experimental techniques, no longer needs lesions like Gage’s to parse the brain’s inner workings. Lesion studies, though, seem to be having a revival. A new method called lesion network mapping is clearing the cobwebs off the lesion study and uniting it with modern brain connectivity data. The results are revealing surprising associations between brain regions and disorders. Thankfully, most lesions aren’t a tamping iron through the forehead. Strokes, hemorrhages, or tumors make up most lesion cases. 19th-century neurologists like Paul Broca made foundational discoveries by studying patients with peculiar symptoms resulting from these common neurological insults. Broca and his contemporaries synthesized a theory of the brain from lesions: that the brain is segmented. Different regions control different functions. Lesion studies lend a lawyerly logic to the brain: if region X is destroyed and function Y no longer occurs, then region X must control function Y. © 2018 Scientific American

Keyword: Stroke
Link ID: 25626 - Posted: 10.31.2018

Jon Hamilton An ancient part of the brain long ignored by the scientific world appears to play a critical role in everything from language and emotions to daily planning. It's the cerebellum, which is found in fish and lizards as well as people. But in the human brain, this structure is wired to areas involved in higher-order thinking, a team led by researchers from Washington University in St. Louis reports Thursday in the journal Neuron. "We think that the cerebellum is acting as the brain's ultimate quality control unit," says Scott Marek, a postdoctoral research scholar and the study's first author. The finding adds to the growing evidence that the cerebellum "isn't only involved in sensory-motor function, it's involved in everything we do," says Dr. Jeremy Schmahmann, a neurology professor at Harvard and director of the ataxia unit at Massachusetts General Hospital. Schmahmann, who wasn't involved in the new study, has been arguing for decades that the cerebellum plays a key role in many aspects of human behavior, as well as mental disorders such as schizophrenia. But only a handful of scientists have explored functions of the cerebellum beyond motor control. "It's been woefully understudied," says Dr. Nico Dosenbach, a professor of neurology at Washington University whose lab conducted the study. Even now, many scientists think of the cerebellum as the part of the brain that lets you pass a roadside sobriety test. It helps you do things like walk in a straight line or stand on one leg or track a moving object — if you're not drunk. © 2018 npr

Keyword: Attention; Language
Link ID: 25624 - Posted: 10.27.2018

By Anil Seth, Michael Schartner, Enzo Tagliazucchi, Suresh Muthukumaraswamy, Robin Carhart-Harris, Adam Barrett It’s not easy to strike the right balance when taking new scientific findings to a wider audience. In a recent opinion piece, Bernard Kastrup and Edward F. Kelly point out that media reporting can fuel misleading interpretations through oversimplification, sometimes abetted by the scientists themselves. Media misinterpretations can be particularly contagious for research areas likely to pique public interest—such as the exciting new investigations of the brain basis of altered conscious experience induced by psychedelic drugs. Unfortunately, Kastrup and Kelly fall foul of their own critique by misconstruing and oversimplifying the details of the studies they discuss. This leads them towards an anti-materialistic view of consciousness that has nothing to do with the details of the experimental studies—ours or others. Take, for example, their discussion of our recent study reporting increased neuronal “signal diversity” in the psychedelic state. In this study, we used “Lempel-Ziv” complexity—a standard algorithm used to compress data files—to measure the diversity of brain signals recorded using magnetoencephalography (MEG). Diversity in this sense is related to, though not entirely equivalent to, “randomness.” The data showed widespread increased neuronal signal diversity for three different psychedelics (LSD, psilocybin and ketamine), when compared to a placebo baseline. This was a striking result since previous studies using this measure had only reported reductions in signal diversity, in global states generally thought to mark “decreases” in consciousness, such as (non-REM) sleep and anesthesia. © 2018 Scientific American
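To make the idea of "signal diversity" concrete, here is a minimal Python sketch of the Lempel-Ziv measure the authors describe, applied to a single simulated channel. The median-threshold binarization, the normalization, and the toy signals are illustrative assumptions of this sketch; the published MEG analyses involve their own preprocessing and operate over many channels at once.

```python
import numpy as np

def lempel_ziv_complexity(binary_seq):
    """Count phrases in an LZ76-style parsing of a 0/1 sequence;
    more distinct phrases means a less compressible, more diverse signal."""
    s = "".join("1" if b else "0" for b in binary_seq)
    n, i, phrases = len(s), 0, 0
    while i < n:
        length = 1
        # Grow the candidate phrase until it has not been seen in the history
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def signal_diversity(x):
    """Binarize a 1-D signal at its median, then return normalized
    Lempel-Ziv complexity (random noise scores near 1; a regular,
    repetitive signal scores much lower)."""
    x = np.asarray(x, dtype=float)
    binary = x > np.median(x)
    n = len(binary)
    return lempel_ziv_complexity(binary) * np.log2(n) / n

# Toy comparison: a pure oscillation versus the same oscillation plus noise
t = np.linspace(0, 10, 2000)
rng = np.random.default_rng(0)
print(signal_diversity(np.sin(2 * np.pi * t)))                              # low diversity
print(signal_diversity(np.sin(2 * np.pi * t) + rng.normal(0, 1, t.size)))   # higher diversity
```

Under this measure, adding broadband noise to a regular oscillation raises the diversity score, which is the sense in which the psychedelic-state MEG recordings were reported as more "diverse" than the placebo baseline.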

Keyword: Consciousness; Drug Abuse
Link ID: 25617 - Posted: 10.26.2018

By Todd E. Feinberg, Jon Mallatt Consciousness seems mysterious. By this we mean that while life in general can be explained by physics, chemistry and biology, it seems that whenever one tries to explain the relationship between the brain and the subjective events that are experienced as feelings—what philosophers often refer to as “qualia”—something appears to be “left out” of the explanation. This apparent divide between the brain and subjective experience is what philosopher Joseph Levine famously called the “explanatory gap,” and how to bridge that gap is what philosopher David Chalmers termed the “hard problem of consciousness.” We study primary consciousness, the most basic type of sensory experience. This is the ability to have any experience or feeling at all, what philosopher Thomas Nagel called “something it is like to be” in his famous 1974 paper “What is it like to be a bat?” Over the last few years, we have tried to “demystify” primary consciousness by combining neural and philosophical aspects of the problem into a unified view of how feelings are created in a naturally biological way. Our analysis leads us to the view that the puzzle of consciousness and the explanatory gap actually have two related aspects, an ontological aspect and an epistemic aspect, and that both have a natural and scientific explanation. First, we consider the ontological aspect of the problem. This part of the puzzle entails what philosopher John Searle called the “ontological subjectivity” of consciousness. This is the idea that consciousness has a unique and fundamentally “first-person” ontology—or mode of being—in that feelings only exist when experienced by an animal subject. The implications of this view would be that no manner of objective scientific explanation, no matter how complete, would “explain away” the neurobiologically unique subjective feelings that are associated with certain brain states—in other words, how things feel. The challenge here is to explain this unique aspect of feelings in a way that is consistent with an entirely scientific world view and do so without invoking any new or fundamentally “mysterious” physical principles. © 2018 Scientific American

Keyword: Consciousness
Link ID: 25585 - Posted: 10.17.2018

By Frankie Schembri Humans are awful at estimating a person’s age based on their face alone. This can lead not only to uncomfortable social situations, but also to critical errors in criminal investigations and enforcing age-based restrictions on such things as alcohol and gambling. New research shows people are usually off by about 8 years, and their estimate might be shaped by the last face they saw. To conduct the study, researchers collected 3968 pictures of consenting participants from the Australian Passport Office—31 men and 31 women at each age from 7 through 70. Then, they showed 81 people photographs of a man and woman at each age in a random sequence, and asked them to guess their ages. The faces above are computer-generated averages of more than 100 pictures from the study of people aged 19 to 22, 50 to 53, and 63 to 66. Volunteers consistently guessed that young faces were several years older than they actually were and that older faces were several years younger than they actually were, the team reports today in Royal Society Open Science. The results also showed that people’s estimates were affected by the previous face they had viewed—if they had just seen a young face, they usually lowballed the next face’s age, and vice versa. © 2018 American Association for the Advancement of Science
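The carry-over effect the story describes can be checked with a very simple analysis: does each trial's estimation error correlate with the age shown on the previous trial? The sketch below runs that check on simulated responses; the error size and the pull toward the previous face are assumptions made for illustration, not the paper's own analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
true_ages = rng.uniform(7, 70, size=1000)        # ages shown, in random order
noise = rng.normal(0, 8, size=true_ages.size)    # roughly 8-year typical error

# Simulated guesses pulled slightly toward the previous face's age
previous_ages = np.roll(true_ages, 1)
guesses = true_ages + 0.2 * (previous_ages - true_ages) + noise
errors = guesses - true_ages

# A positive correlation means an older previous face biases the next guess upward
r = np.corrcoef(previous_ages[1:], errors[1:])[0, 1]
print(f"Correlation between previous age and current error: {r:.2f}")
```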

Keyword: Attention
Link ID: 25581 - Posted: 10.17.2018

By Frankie Schembri Think of all the faces you know. As you flick through your mental Rolodex, your friends, family, and co-workers probably come first—along with celebrities—followed by the faces of the nameless strangers you encounter during your daily routine. But how many faces can the human Rolodex store? To ballpark the size of the average person’s “facial vocabulary,” researchers gave 25 people 1 hour to list as many faces from their personal lives as possible, and then another hour to do the same with famous faces, like those of actors, politicians, and musicians. If the participants couldn’t remember a person’s name, but could imagine their face, they used a descriptive phrase like “the high school janitor,” or “the actress from Friends with the haircut.” People came up with lots of faces during the first minutes of the test, but the rate of remembrance dropped over the course of the hour. By graphing this relationship and extrapolating it to when most people would run out of faces, the researchers estimated the number of faces an average person can recall from memory. To figure out how many additional faces people recognized but were unable to recall without prompting, researchers showed the participants photographs of 3441 celebrities, including Barack Obama and Tom Cruise. To qualify as “knowing” a face, the participants had to recognize two different photos of each person. © 2018 American Association for the Advancement of Science
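The extrapolation step can be pictured with a small curve-fitting sketch. The saturating-exponential model, the synthetic one-hour recall counts, and the starting values below are illustrative assumptions rather than the paper's method; the point is only that the fitted asymptote stands in for the total number of faces a person could eventually list.

```python
import numpy as np
from scipy.optimize import curve_fit

def cumulative_recall(t, total, tau):
    """Cumulative faces recalled by minute t, levelling off at `total`."""
    return total * (1.0 - np.exp(-t / tau))

# Synthetic session: cumulative counts recorded each minute of a 60-minute task
rng = np.random.default_rng(1)
minutes = np.arange(1.0, 61.0)
observed = cumulative_recall(minutes, total=400.0, tau=25.0) + rng.normal(0, 5, minutes.size)

# Fit the model and read the asymptote as the extrapolated "facial vocabulary"
params, _ = curve_fit(cumulative_recall, minutes, observed, p0=(300.0, 20.0))
estimated_total, estimated_tau = params
print(f"Extrapolated total faces recallable: {estimated_total:.0f}")
```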

Keyword: Attention
Link ID: 25552 - Posted: 10.10.2018