Links for Keyword: Attention



Links 1 - 20 of 559

Mariah Quintanilla Emma Watson, Jake Gyllenhaal, journalist Fiona Bruce and Barack Obama all walk into a sheep pen. No, this isn’t the beginning of a baaa-d joke. By training sheep using pictures of these celebrities, researchers from the University of Cambridge discovered that the animals are able to recognize familiar faces from 2-D images. Given a choice, the sheep picked the familiar celebrity’s face over an unfamiliar face the majority of the time, the researchers report November 8 in Royal Society Open Science. Even when a celeb’s face was slightly tilted rather than face-on, the sheep still picked the image more often than not. That means the sheep were not just memorizing images, demonstrating for the first time that sheep have advanced face-recognition capabilities similar to those of humans and other primates, say neurobiologist Jennifer Morton and her colleagues. Sheep have been known to pick out pictures of individuals in their flock, and even familiar handlers (SN: 10/6/12, p. 20). But it’s been unclear whether the skill was real recognition or simple memorization. Sheep now join other animals, including horses, dogs, rhesus macaques and mockingbirds, that are able to distinguish between individuals of other species. Over a series of four training sessions, the sheep’s ability to choose a familiar face, represented by one of the four celebrities, over a completely unfamiliar face improved. © Society for Science & the Public 2000 - 2017.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 24306 - Posted: 11.08.2017

James Gorman Dogs have evolved to be friendly and tolerant of humans and one another, which might suggest they would be good at cooperative tasks. Wolves are known to cooperate in hunting and even in raising one another’s pups, but they can seem pretty intolerant of one another when they are snapping and growling around a kill. So researchers at the Wolf Science Center at the University of Vienna decided to compare the performance of wolves and dogs on a classic behavioral test. To get a food treat, two animals have to pull ropes attached to different ends of a tray. The trick is that they have to pull both ropes at the same time. Chimps, parrots, rooks and elephants have all succeeded at the task. When Sarah Marshall-Pescini, Friederike Range and colleagues put wolves and dogs to the test, wolves did very well and dogs very poorly. In recordings of the experiments, the pairs of wolves look like experts, while the dogs seem, well, adorable and confused. The researchers reported their findings in the Proceedings of the National Academy of Sciences. With no training, five of seven wolf pairs succeeded in mastering the task at least once. Only one of eight dog pairs did. © 2017 The New York Times Company

Related chapters from BN8e: Chapter 6: Evolution of the Brain and Behavior; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 11: Emotions, Aggression, and Stress
Link ID: 24304 - Posted: 11.08.2017

Molecular method reveals neuronal basis of brain states – NIH-funded animal study. NIMH-funded scientists revealed the types of neurons supporting alertness, using a molecular method called MultiMAP in transparent larval zebrafish. Multiple types of neurons communicate by secreting the same major chemical messengers: serotonin (red), dopamine and noradrenalin (yellow) and acetylcholine (cyan). Using a molecular method likely to become widely adopted by the field, researchers supported by the National Institutes of Health have discovered brain circuitry essential for alertness, or vigilance – and for brain states more generally. Strikingly, the same cell types and circuits are engaged during alertness in zebrafish and mice, species whose evolutionary forebears parted ways hundreds of millions of years ago. This suggests that the human brain is likely similarly wired for this state critical to survival. “Vigilance gone awry marks states such as mania and those seen in post-traumatic stress disorder and depression,” explained Joshua Gordon, M.D., Ph.D., director of the NIH’s National Institute of Mental Health (NIMH), which along with the National Institute on Drug Abuse, co-funded the study. “Gaining familiarity with the molecular players in a behavior – as this new tool promises – may someday lead to clinical interventions targeting dysfunctional brain states.” For the first time, MultiMAP makes it possible to see which neurons are activated in a behaving animal during a particular brain state – and subsequently molecularly analyze just those neurons to identify the subtypes and circuits involved.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 24282 - Posted: 11.03.2017

By Helen Thomson Do you find it difficult to spot a face in the crowd? Now we know why: people with face blindness seem to have a missing “hub” of brain connections. The discovery could be used to diagnose children with the condition, and teach them new ways to identify faces. People with prosopagnosia, which often runs in families, cannot easily tell faces apart. This can have a significant impact on people’s lives. People with the condition rely heavily on voice recognition, clothes, hairstyle and gait to identify people, but can still fail to recognise family and friends. It can lead to social anxiety and depression, and can often go undiagnosed for many years. Face processing isn’t a function of a single brain region, but involves the coordinated activity of several regions. To investigate what might be causing the problem, Galia Avidan at Ben-Gurion University of the Negev, Israel, and her colleagues scanned the brains of 10 adults who have reported life-long problems with face processing. They also scanned 10 adults without the condition. During the scan, participants were shown sets of images of emotional, neutral, famous and unfamiliar faces. During the task they were asked to press a button when two consecutive images were identical. Some of the images also included buildings, which people with face blindness do not have any trouble identifying – these acted as a control. © Copyright New Scientist Ltd.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 24281 - Posted: 11.03.2017

By GRETCHEN REYNOLDS Do brains trump brawn? A remarkable new study of how the human body prioritizes its inner workings found that if you think hard while exercising intensely, your performance in both thinking and moving can worsen. But your muscles’ performance will decline much more than your brain’s will, the study found. The results raise interesting questions about the roles that our body’s wide-ranging abilities may have played in the evolution of humans and also whether a hard workout is the ideal time to be cogitating. Compared to almost all other animals, we humans have disproportionately large brains for our size. Our supersized cranial contents probably provided an advantage during our evolution as a species. Smart creatures presumably could have outwitted predators and outmaneuvered prey, keeping themselves fed, uneaten and winners in the biological sweepstakes to pass on their genes. But most other species eschewed developing similarly outsized brains during evolution, because large brains carry a hefty metabolic cost. Brains are extraordinarily hungry organs, requiring, ounce for ounce, more calories to sustain their operations than almost any other tissue, and these caloric demands rise when the brain is hard at work. Thinking demands considerable bodily fuel. In order to feed and maintain these large brains, early humans’ bodies had to make certain trade-offs, most evolutionary biologists agree. Our digestive systems shrank during evolution, for one thing, since food processing is also metabolically ravenous. But whether a similar trade-off occurred with our muscles has remained in doubt. Muscles potentially provided another route to survival during our species’ early days. With sufficient brawn, animals, including people, could physically overpower prey and sprint from danger. © 2017 The New York Times Company

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 5: The Sensorimotor System
Link ID: 24243 - Posted: 10.26.2017

By Jessica Hamzelou Ever realised you have driven yourself home but haven’t really been paying attention? Brain scans have revealed that when your mind wanders, it switches into “autopilot” mode, enabling you to carry on doing tasks quickly, accurately and without conscious thought. Our autopilot mode seems to be run by a set of brain structures called the default mode network (DMN). It was discovered in the 1990s, when researchers noticed that people lying in brain scanners show patterns of brain activity even when they aren’t really doing anything. This research provided the first evidence that our brains are active even when we aren’t consciously putting our minds to work. But what does the DMN do? Several studies have found that it seems to be involved in assessing past events and planning for the future. Others suggest the network is involved in self-awareness – although this has been called into question by findings that rats and newborns appear to have a version of the DMN too. It is unlikely that rats are conscious of themselves in the same way that humans are, says Deniz Vatansever at the University of York, UK. Instead, the DMN must have a more basic function, common to all animals. Vatansever and his colleagues at the University of Cambridge wondered if the network might help us do things without paying much attention, such as tying our shoelaces, or driving along a familiar road. © Copyright New Scientist Ltd.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 24240 - Posted: 10.25.2017

Nicola Davis When it comes to understanding how another person thinks and feels, it might be best to close your eyes and listen. A study by an American psychologist suggests that people are better able to pick up on the emotions of others when simply focusing on their voice, compared with both watching and listening to them, or just watching them. “Humans are actually remarkably good at using many of their senses for conveying emotions, but emotion research historically is focused almost exclusively on the facial expressions,” said Michael Kraus, a social psychologist at Yale University and author of the study. While combining information from a person’s voice with their facial expressions and other cues might at first seem like a way to boost understanding of their thoughts and feelings, Kraus says pooling the senses divides attention. What’s more, he notes, facial expressions can mask a person’s true feelings – something that he says is harder to do with the voice – while language plays a key role in how people understand and label their emotions. The upshot, he says, is that what people say, and the way they say it, offers the clearest insights into the emotions of others. “Listening matters,” said Kraus. “Actually considering what people are saying and the ways in which they say it can, I believe, lead to improved understanding of others at work or in your personal relationships.” © 2017 Guardian News and Media Limited

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 24173 - Posted: 10.11.2017

By HEATHER MURPHY Most people will quickly spot a toothbrush placed at the front of a counter, but take longer to find, or even fail to find, a much bigger one behind it. The oversight has to do with scale. People have a tendency to miss objects when their size is inconsistent with their surroundings, according to a recent study in Current Biology. This is just the latest in a robust body of research that reveals how expectations dramatically affect our ability to notice what’s around us. The researchers were interested not only in what people saw, but also in how their performance compared with computers. Flesh-and-blood participants and a deep neural network, a computer system with advanced machine vision, were given one second to select an object in a computer-rendered scene. The object could be absent, presented at scale or featured at four times scale. Humans missed giant objects about 13 percent more often than normal-sized objects, the researchers found. Scale had no impact on machine performance. “We were surprised about how compelling of an effect it is,” said Miguel Eckstein, a psychologist at the University of California, Santa Barbara’s Vision and Image Understanding Laboratory and one of the authors. In particular, the first time a person examined a photo with a giant object, the object often seemed to be invisible. But it’s not a deficiency, he said: “This is a useful trick the brain does to rapidly process scenes and find what we are looking for.” © 2017 The New York Times Company

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 24161 - Posted: 10.07.2017

You may well be yawning just reading this - it's contagious. Now researchers have looked at what happens in our brains to trigger that response. A University of Nottingham team found it occurs in a part of the brain responsible for motor function. The primary motor cortex also plays a part in conditions such as Tourette's syndrome. So the scientists say understanding contagious yawning could also help in understanding those disorders. Contagious yawning is a common form of echophenomena - the automatic imitation of someone else's words or actions. Echophenomena is also seen in Tourette's, as well as in other conditions, including epilepsy and autism. To test what's happening in the brain during the phenomenon, scientists monitored 36 volunteers while they watched others yawning. In the study, published in the journal Current Biology, some were told it was fine to yawn while others were told to stifle the urge. The urge to yawn was down to how each person's primary motor cortex worked - its "excitability". And, using external transcranial magnetic stimulation (TMS), it was also possible to increase "excitability" in the motor cortex and therefore people's propensity for contagious yawns. Georgina Jackson, professor of cognitive neuropsychology who worked on the study, said the finding could have wider uses: "In Tourette's, if we could reduce the excitability we might reduce the tics, and that's what we are working on." Prof Stephen Jackson, who also worked on the research, added: "If we can understand how alterations in cortical excitability give rise to neural disorders we can potentially reverse them. "We are looking for potential non-drug, personalised treatments, using TMS that might be effective in modulating imbalances in the brain networks." © 2017 BBC

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 24022 - Posted: 09.01.2017

By Helen Thomson Have you ever seen the Virgin Mary in your grilled cheese? Or a screaming face inside a bell pepper? Seeing faces in inanimate objects is a common phenomenon. Now it seems that we’re not alone in experiencing it – monkeys do too. Pareidolia is the scientific term for erroneously perceiving faces where none exist. Other examples include seeing “ghosts” in blurry photos and the man in the moon. To investigate whether pareidolia was a uniquely human experience, Jessica Taubert at the US National Institute of Mental Health in Maryland and her colleagues trained five rhesus macaques to stare at pairs of photos. Each photo showed either an inanimate object that prompts pareidolia in humans, an equivalent object that doesn’t, or the face of a monkey. We already knew that both people and monkeys will look longer at images of faces than other things. So the team presented each of the photos in every possible pairing – 1980 in all – and measured the time the monkeys spent looking at each. The monkeys did indeed seem to succumb to pareidolia – they spent more time looking at illusory faces than the non-illusory photos they were paired with. Interestingly, they also spent more time looking at the illusory faces than the monkey faces, perhaps because they spent longer studying these more unusual “faces”, or because they tend to dislike holding the gaze of another monkey. © Copyright New Scientist Ltd.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23997 - Posted: 08.25.2017

By Helen Thomson Our brains seem better at predictions than we are. A part of our brain becomes active when it knows something will be successfully crowdfunded, even if we consciously decide otherwise. If this finding stands up and works in other areas of life, neuroforecasting may lead to better voting polls or even predict changes in financial markets. To see if one can predict market behaviour by sampling a small number of people, Brian Knutson at Stanford University in California and his team scanned the brains of 30 people while they decided whether to fund 36 projects from the crowdfunding website Kickstarter. The projects were all recently posted proposals for documentary films. Each participant had their brain scanned while taking in the pictures and descriptions of each campaign, and they were then asked if they would want to fund the project. When the real Kickstarter campaigns ended a few weeks later, 18 of the projects had gained enough funding to go forward. Examining the participants’ brain scans, the team discovered that activity in a region called the nucleus accumbens had been different when they considered projects that later went on to be successful. Prediction paradox The team trained an algorithm to recognise these differences in brain activity using scan data from 80 per cent of the projects, then tested the program on the remaining 20 per cent. Using neural activity alone, the algorithm was able to forecast which Kickstarter campaigns would be funded with 59.1 per cent accuracy – more than would be expected by chance. © Copyright New Scientist Ltd.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23984 - Posted: 08.22.2017
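The forecasting procedure described above is a standard split-sample test: fit a classifier to brain responses from 80 per cent of the projects, then score it on the held-out 20 per cent against chance. A toy sketch of that protocol on synthetic data; the nearest-centroid classifier and the size of the "activity boost" are illustrative assumptions, not the study's actual algorithm or effect size:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 36 projects, each with a mean "nucleus accumbens
# activity" value per scanned participant; here funded projects get an
# assumed activity boost (the real effect was far subtler).
n_projects, n_participants = 36, 30
funded = np.repeat([1, 0], n_projects // 2)
activity = rng.normal(size=(n_projects, n_participants)) + 1.0 * funded[:, None]

# 80/20 split over projects, as in the study.
order = rng.permutation(n_projects)
train, test = order[:29], order[29:]

# Nearest-centroid classifier: call a project "funded" if its mean
# activity is closer to the funded-projects centroid than the other.
c1 = activity[train][funded[train] == 1].mean()
c0 = activity[train][funded[train] == 0].mean()
mean_test = activity[test].mean(axis=1)
pred = (np.abs(mean_test - c1) < np.abs(mean_test - c0)).astype(int)
accuracy = (pred == funded[test]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

With the exaggerated synthetic effect the toy classifier scores well above chance; the paper's 59.1 per cent reflects how much weaker the real signal is.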

By Abby Olena Our brains quickly characterize everything we see as familiar or new, and scientists have been investigating this connection between vision and cognition for years. Now, research in Japanese macaques (Macaca fuscata) reveals that the activation of neurons in a part of the primate brain called the perirhinal cortex can cause monkeys to recognize new objects as familiar and vice versa. The study was published today (August 17) in Science. “There are a lot of really exciting aspects to this paper,” says neuroscientist David Sheinberg of Brown University, who did not participate in the work. “This group continues to make advances that are helping us understand how we convert visual impressions into things we know.” Primate brains process visual information through several brain structures that make up the ventral visual stream. The last stop in this stream is the perirhinal cortex, part of the medial temporal lobe. Scientists know that this brain structure plays roles in visual memory and object discrimination. But one open question is whether the perirhinal cortex represents objects’ physical traits or whether it might also communicate information about nonphysical attributes, such as whether an object has been seen before. “In the primate, the perirhinal cortex is the link between the visual pathway and the limbic memory system,” coauthor and University of Tokyo neuroscientist Yasushi Miyashita writes in an email to The Scientist. “Therefore, the perirhinal cortex is one of the most likely candidates in the brain where visual information is transformed to subjective semantic values by referring to one’s own memory.” © 1986-2017 The Scientist

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23979 - Posted: 08.19.2017

By Aylin Woodward Two newly identified brain areas in rhesus monkeys seem to help the animals recognise familiar faces. Primates, Homo sapiens included, must be able to differentiate between faces and recognise friend from foe because social hierarchies play a large role in daily life. But exactly how primate brains deal with faces is not completely clear. One idea is that the same parts of the brain are involved in recognising both familiar and unfamiliar faces, just with varying efficiency. But Sofia Landi and Winrich Freiwald at Rockefeller University in New York have now cast doubt on that thinking. Their work shows that distinct brain areas are responsible for recognising the primates you know. Many researchers have already shown that certain areas of the temporal and prefrontal cortex are involved in unfamiliar face perception in rhesus monkey brains. Using whole-brain fMRI scans of four monkeys, Landi and Freiwald have now identified two additional brain areas that play a role not only in unfamiliar face perception but also in recognising familiar faces. The two new areas are in the anterior temporal lobe – the part of our brains above and in front of our ears. One is in the perirhinal cortex and one is in the temporal pole. These regions lit up far more when the monkeys recognised a familiar face in a photograph, as opposed to when they were presented with images of a stranger. © Copyright New Scientist Ltd.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23949 - Posted: 08.11.2017

By Amanda Onion While driving and accelerating in his car, a patient in France suddenly had a bizarre sensation. He felt like he was outside his car, looking in at his physical self, which was still at the wheel. The patient was part of a new study that links problems of the inner ear with eerie "out-of-body" experiences. These experiences are curious, usually brief sensations in which a person's consciousness seems to exit the body and then view the body from the outside. The study analyzed 210 patients who had visited their doctors with so-called vestibular disorders. The vestibular system, which is made up of several structures in the inner ear, provides the body with a sense of balance and spatial orientation. Problems with this system can cause dizziness or a floating sensation, among other symptoms. Maya Elzière, an ear, nose and throat specialist at Hôpital Européen in Marseille, France, and co-author of the study, enlisted patients who had experienced a range of issues, from recurrent vertigo and tinnitus to infections in the ear. Among these patients, 14 percent reported out-of-body experiences, compared with only 5 percent of healthy people without vestibular disorders who said the same. "Out-of-body experiences were about three times more frequent" in patients with vestibular disorders, versus those without these disorders, said Christophe Lopez, lead author of the study and a neuroscientist at Aix-Marseille Université in France. © 2017 Scientific American

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Consciousness
Link ID: 23922 - Posted: 08.07.2017

By Knvul Sheikh The brain has evolved to recognize and remember many different faces. We can instantly identify a friend's countenance among dozens in a crowded restaurant or on a busy street. And a brief glance tells us whether that person is excited or angry, happy or sad. Brain-imaging studies have revealed that several blueberry-size regions in the temporal lobe—the area under the temple—specialize in responding to faces. Neuroscientists call these areas “face patches.” But neither brain scans nor clinical studies of patients with implanted electrodes explained exactly how the cells in these patches work. Now, using a combination of brain imaging and single-neuron recording in macaques, biologist Doris Tsao and her colleagues at the California Institute of Technology appear to have finally cracked the neural code for primate face recognition. The researchers found the firing rate of each face patch cell corresponds to a separate facial feature. Like a set of dials, the cells can be fine-tuned to respond to bits of information, which they can then combine in various ways to create an image of every face the animal encounters. “This was mind-blowing,” Tsao says. “The values of each dial are so predictable that we can re-create the face that a monkey sees by simply tracking the electrical activity of its face cells.” Previous studies had hinted at the specificity of these brain areas for encoding faces. In the early 2000s, when Tsao was a postdoctoral researcher at Harvard Medical School, she and electrophysiologist Winrich Freiwald showed that neurons in a monkey's face patches would fire electrical signals every time the animal saw pictures of a face. But the same brain cells showed little or no response to other objects, such as images of vegetables, radios or nonfacial body parts. Other experiments indicated that neurons in these regions could also distinguish among individual faces, even if they were cartoons. © 2017 Scientific American

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23913 - Posted: 08.03.2017

By Karl Gruber Are you good with faces? So is the Japanese rice fish – at least, it is if the faces are the right way up. Just like humans, the tiny fish has no problem recognising faces orientated the usual way, but, again like us, it struggles when they are inverted. The finding indicates that the fish may have developed a unique brain pathway for face recognition, just as humans have. We have no problem identifying most objects in our environment – say, a chair – no matter what way up they are. But faces are different. It is relatively easy for us to spot the differences between two faces, even if they are physically similar, if we see them in photographs the right way up. But if the images are upside down, telling them apart gets a bit tricky. “This is because we have a specific brain area for processing faces, and when the face is upside down, we process the image through object processing pathways, and not the face-processing pathways any more,” says Mu-Yun Wang at the University of Tokyo, Japan. Until now, this face-inversion effect was considered exclusive to mammals as it has only been observed in primates and sheep. Enter the Japanese rice fish, also known as the medaka (Oryzias latipes), a 3.5-centimetre-long shoaling fish commonly found in rice paddies, marshes, ponds and slow-moving streams in East Asia. These fish are very social, so identifying the right individuals to associate with is important. © Copyright New Scientist Ltd.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23891 - Posted: 07.28.2017

Sarah Zhang In 1958, Robert Monroe floated out of his body for the first time. It began “without any apparent cause,” he wrote. His doctor, finding no physical ailment, prescribed tranquilizers. A psychologist friend, meanwhile, told him to try leaving his body again. After all, the friend said, “some of the fellows who practice yoga and those Eastern religions claim they can do it whenever they want to.” Monroe did try it again—and again and again. He recalls these experiences in his classic 1971 book Journeys out of the Body, which launched the phrase “out-of-body experiences” into the public conversation. Monroe died in 1995, but the fascination with out-of-body experiences endures. Out-of-body experiences can vary from person to person, but they often involve the sense of floating above one’s actual body and looking down. For neuroscientists, the phenomenon is a puzzle and an opportunity: Understanding how the brain goes awry can also illuminate how it is supposed to work. Neuroscientists now think that out-of-body experiences involve the vestibular system—made up of canals in the inner ear that track a person’s location in space—and how that information gets integrated with other senses in the brain. In a recent study from France, Christophe Lopez, a neuroscientist at Aix-Marseille Université, teamed up with Maya Elzière, a doctor who sees patients with vestibular disorders. Some of these patients complained of dizziness, with physical causes that ranged from fluid leaking out of the inner ear to an infection of a nearby nerve. Of 210 patients who reported dizziness, 14 percent said they have had out-of-body experiences. In contrast, only 5 percent of healthy participants in the study reported such sensations. © 2017 by The Atlantic Monthly Group

Related chapters from BN8e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 14: Attention and Consciousness
Link ID: 23884 - Posted: 07.27.2017

By CLAY ROUTLEDGE Are Americans becoming less religious? It depends on what you mean by “religious.” Polls certainly indicate a decline in religious affiliation, practice and belief. Just a couple of decades ago, about 95 percent of Americans reported belonging to a religious group. This number is now around 75 percent. And far fewer are actively religious: The percentage of regular churchgoers may be as low as 15 to 20 percent. As for religious belief, the Pew Research Center found that from 2007 to 2014 the percentage of Americans who reported being absolutely confident God exists dropped from 71 percent to 63 percent. Nonetheless, there is reason to doubt the death of religion, or at least the death of what you might call the “religious mind” — our concern with existential questions and our search for meaning. A growing body of research suggests that the evidence for a decline in traditional religious belief, identity and practice does not reflect a decline in this underlying spiritual inclination. Ask yourself: Why are people religious to begin with? One view is that religion is an ancient way of understanding and organizing the world that persists largely because societies pass it down from generation to generation. This view is related to the idea that the rise of science entails the fall of religion. It also assumes that the strength of religion is best measured by how much doctrine people accept and how observant they are. This view, however, does not capture the fundamental nature of the religious mind — our awareness of, and need to reckon with, the transience and fragility of our existence, and how small and unimportant we seem to be in the grand scheme of things. In short: our quest for significance. © 2017 The New York Times Company

Related chapters from BN8e: Chapter 19: Language and Lateralization; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 15: Brain Asymmetry, Spatial Cognition, and Language; Chapter 11: Emotions, Aggression, and Stress
Link ID: 23868 - Posted: 07.24.2017

by Helen Thompson Paper wasps have a knack for recognizing faces, and a new study adds to our understanding of what that means in a wasp’s brain. Most wasps of a given species look the same, but some species of paper wasp (Polistes sp.) display varied colors and markings. Recognizing these patterns is at the core of the wasps’ social interactions. One species, Polistes fuscatus, is especially good at detecting differences in faces — even better than it is at detecting other patterns. To zero in on the roots of this ability, biologist Ali Berens of Georgia Tech and her colleagues set up recognition exercises of faces and basic patterns for P. fuscatus wasps and P. metricus wasps — a species that doesn’t naturally recognize faces but can be trained to do so in the lab. After the training, scientists extracted DNA from the wasps’ brains and looked at which genes were active. The researchers found 237 genes that were at play only in P. fuscatus during facial recognition tests. A few of the genes have been linked to honeybee visual learning, and some correspond to brain signaling with the neurotransmitters serotonin and tachykinin. In the brain, picking up on faces goes beyond basic pattern learning, the researchers conclude June 14 in the Journal of Experimental Biology. It’s possible that some of the same genes also play a broader role in how organisms such as humans and sheep tell one face from another. © Society for Science & the Public 2000 - 2017

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23742 - Posted: 06.15.2017

Laurel Hamers A monkey’s brain builds a picture of a human face somewhat like a Mr. Potato Head — piecing it together bit by bit. The code that a monkey’s brain uses to represent faces relies not on groups of nerve cells tuned to specific faces — as has been previously proposed — but on a population of about 200 cells that code for different sets of facial characteristics. Added together, the information contributed by each nerve cell lets the brain efficiently capture any face, researchers report June 1 in Cell. “It’s a turning point in neuroscience — a major breakthrough,” says Rodrigo Quian Quiroga, a neuroscientist at the University of Leicester in England who wasn’t part of the work. “It’s a very simple mechanism to explain something as complex as recognizing faces.” Until now, Quiroga says, the leading explanation for the way the primate brain recognizes faces proposed that individual nerve cells, or neurons, respond to certain types of faces (SN: 6/25/05, p. 406). A system like that might work for the few dozen people with whom you regularly interact. But accounting for all of the peripheral people encountered in a lifetime would require a lot of neurons. It now seems that the brain might have a more efficient strategy, says Doris Tsao, a neuroscientist at Caltech. Tsao and coauthor Le Chang used statistical analyses to identify 50 variables that accounted for the greatest differences between 200 face photos. Those variables represented somewhat complex changes in the face — for instance, the hairline rising while the face becomes wider and the eyes become farther apart. © Society for Science & the Public 2000 - 2017.
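The idea of distilling 200 face photos down to 50 axes of variation, each face then coded as a set of coordinates along those axes, resembles standard dimensionality reduction. As a rough illustration only (not the authors' actual analysis pipeline, and using synthetic data in place of face photos), a PCA-style sketch of such a linear "face code" might look like this:

```python
import numpy as np

# Synthetic stand-ins for 200 face photos, each flattened to a
# 1024-dimensional feature vector (purely illustrative data).
rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 1024))

# Center the data around the average face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# SVD yields the axes of greatest variation; keep the top 50,
# mirroring the study's 50 variables.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
axes = Vt[:50]

# Each face is then coded as just 50 coordinates -- analogous to a
# small population of cells each signaling one facial dimension.
codes = centered @ axes.T          # shape: (200, 50)

# Summing the weighted axes approximately rebuilds any face.
recon = mean_face + codes @ axes
```

The key point the article makes is captured in the last two lines: a compact set of shared dimensions, combined linearly, can represent any face, rather than dedicating a cell to each individual face.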

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23701 - Posted: 06.02.2017