Links for Keyword: Attention

Links 1 - 20 of 519

By JOANNA KLEIN The good news is, the human brain is flexible and efficient. This helps us make sense of the world. But the bad news is, the human brain is flexible and efficient. This means the brain can sometimes make mistakes. You can watch this tension play out when the brain tries to connect auditory and visual speech. It’s why we may find a poorly dubbed kung fu movie hard to believe, and why we love believing the gibberish in those Bad Lip Reading videos on YouTube. “By dubbing speech that is reasonably consistent with the available mouth movements, we can utterly change the meaning of what the original talker was saying,” said John Magnotti, a neuroscientist at Baylor College of Medicine in Texas. “Sometimes we can detect that something is a little off, but the illusion is usually quite compelling.” In a study published Thursday in PLOS Computational Biology, Dr. Magnotti and Michael Beauchamp, also a neuroscientist at Baylor College of Medicine, tried to pin down why our brains are susceptible to these kinds of perceptual mistakes by looking at a well-known speech illusion called the McGurk effect. By comparing mathematical models for how the brain integrates senses important in detecting speech, they found that the brain uses vision, hearing and experience when making sense of speech. If the mouth and voice are likely to come from the same person, the brain combines them; otherwise, they are kept separate. “You may think that when you’re talking to someone you’re just listening to their voice,” said Dr. Beauchamp, who led the study. “But it turns out that what their face is doing is actually profoundly influencing what you are perceiving.” © 2017 The New York Times Company
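The combine-or-separate rule described here is a form of Bayesian causal inference. As a rough, hypothetical sketch (not the model the authors fitted to perceptual data), an observer can weigh the likelihood that two noisy cues came from one source against the likelihood that they came from two, fusing the cues only when a common cause is probable:

```python
import numpy as np

def norm_pdf(x, mean, var):
    """Gaussian density, used for the cue likelihoods below."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def causal_inference(x_aud, x_vis, var_a=1.0, var_v=1.0, var_p=100.0,
                     p_common=0.5):
    """Toy causal-inference observer (in the spirit of Kording et al., 2007).

    All variances and the prior p_common are illustrative, not values
    from the study. Cues are positions on an arbitrary scale with a
    zero-mean Gaussian prior of variance var_p.
    """
    # Likelihood that a single shared source produced both measurements
    denom = var_a * var_v + var_a * var_p + var_v * var_p
    like_common = (np.exp(-0.5 * ((x_aud - x_vis) ** 2 * var_p
                                  + x_aud ** 2 * var_v
                                  + x_vis ** 2 * var_a) / denom)
                   / (2 * np.pi * np.sqrt(denom)))

    # Likelihood that two independent sources produced them
    like_separate = (norm_pdf(x_aud, 0.0, var_a + var_p)
                     * norm_pdf(x_vis, 0.0, var_v + var_p))

    post_common = (p_common * like_common
                   / (p_common * like_common
                      + (1 - p_common) * like_separate))

    # Fused estimate: precision-weighted average of the cues and prior
    fused = ((x_aud / var_a + x_vis / var_v)
             / (1 / var_a + 1 / var_v + 1 / var_p))

    if post_common > 0.5:
        return post_common, fused          # integrate: one talker
    return post_common, (x_aud, x_vis)     # segregate: two sources

print(causal_inference(1.0, 1.3))  # consistent cues -> fused percept
print(causal_inference(1.0, 6.0))  # discrepant cues -> kept separate
```

When voice and mouth roughly agree, the common-cause posterior is high and the cues fuse, which is why plausible dubbing is so compelling; when they are grossly mismatched, the observer keeps them separate and the dub becomes detectable.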

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23261 - Posted: 02.21.2017

By Sam Wong Can a mouse be mindful? Researchers believe they have created the world’s first mouse model of meditation by using light to trigger brain activity similar to what meditation induces. The mice involved appeared less anxious, too. Human experiments show that meditation reduces anxiety, lowers levels of stress hormones and improves attention and cognition. In one study of the effects of two to four weeks of meditation training, Michael Posner of the University of Oregon and colleagues discovered changes in the white matter in volunteers’ brains, related to the efficiency of communication between different brain regions. The changes, picked up in scans, were particularly noticeable between the anterior cingulate cortex (ACC) and other areas. Since the ACC regulates activity in the amygdala, a region that controls fearful responses, Posner’s team concluded that the changes in white matter could be responsible for meditation’s effects on anxiety. The mystery was how meditation could alter the white matter in this way. Posner’s team figured that it was related to changes in theta brainwaves, measured using electrodes on the scalp. Meditation increases theta wave activity, even when people are no longer meditating. To test the theory, the team used optogenetics – they genetically engineered certain cells to be switched on by light. In this way, they were able to use pulses of light on mice to stimulate theta brainwave-like activity in the ACC. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23260 - Posted: 02.21.2017

by Bethany Brookshire Gender bias works in subtle ways, even in the scientific process. The latest illustration of that: Scientists recommend women less often than men as reviewers for scientific papers, a new analysis shows. That seemingly minor oversight is yet another missed opportunity for women that might end up having an impact on hiring, promotions and more. Peer review is one of the bricks in the foundation supporting science. A researcher’s results don’t get published in a journal until they successfully pass through a gauntlet of scientific peers, who scrutinize the paper for faulty findings, gaps in logic or less-than-meticulous methods. The scientist submitting the paper gets to suggest names for those potential reviewers. Scientific journal editors may contact some of the recommended scientists, and then reach out to a few more. But peer review isn’t just about the paper (and scientist) being examined. Being the one doing the reviewing “has a number of really positive benefits,” says Brooks Hanson, an earth scientist and director of publications at the American Geophysical Union in Washington, D.C. “You read papers differently as a reviewer than you do as a reader or author. You look at issues differently. It’s a learning experience in how to write papers and how to present research.” Serving as a peer reviewer can also be a networking tool for scientific collaborations, as reviewers seek out authors whose work they admired. And of course, scientists put the journals they review for on their resumes when they apply for faculty positions, research grants and awards. © Society for Science & the Public 2000 - 2017.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 8: Hormones and Sex
Link ID: 23185 - Posted: 02.04.2017

Noah Charney The Chinese government just arrested a group of people associated with a sham tourist attraction that had lured hundreds of sight-seers to a fake Terracotta Warriors exhibit, composed entirely of modern replicas. Sotheby’s recently hired Jamie Martin of Orion Analytical, a forensic specialist at testing art, who then discovered that a Parmigianino painting recently sold is actually a modern forgery (Sotheby’s returned the buyer’s money and then sued the person for whom they sold it). And the Ringling Museum in Sarasota, Florida, is hoping that a painting of Philip IV of Spain in their collection will be definitively determined to be by Velazquez, and not a copy in the style of Velazquez. And that’s just in the last week or so. Art forgery and authenticity seem to be in the news just about every week (to my publicist’s delight). But I’m on a bit of a brainstorm. After my interview with Nobel Prize winner Dr. Eric Kandel on the neuroscience behind how we humans understand art, I’ve developed a keen interest in art and the mind. I tackled selfies, self-portraits and facial recognition recently, as well as what happens when the brain fails to function properly and neglects to recognize the value of art. Since my last book was a history of forgery, it was perhaps inevitable that I would wonder about the neurology of the recognition of originals versus copies. But while I looked into forgery from a wide variety of angles for the book, neuroscience was not one of them. © 2017 Salon Media Group, Inc.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 23166 - Posted: 01.30.2017

By ADAM BEAR and JOSHUA KNOBE What’s normal? Perhaps the answer seems obvious: What’s normal is what’s typical — what is average. But in a recent paper in the journal Cognition, we argue that the situation is more complicated than that. After conducting a series of experiments that examined how people decide whether something is normal or not, we found that when people think about what is normal, they combine their sense of what is typical with their sense of what is ideal. Normal, in other words, turns out to be a blend of statistical and moral notions. Our key finding can be illustrated with a simple example. Ask yourself, “What is the average number of hours of TV that people watch in a day?” Then ask yourself a question that might seem very similar: “What is the normal number of hours of TV for a person to watch in a day?” If you are like most of our experimental participants, you will not give the same answer to the second question that you give to the first. Our participants said the “average” number was about four hours and the “normal” number was about three hours. In addition, they said that the “ideal” number was about 2.5 hours. This has an interesting implication. It suggests that people’s conception of the normal deviates from the average in the direction of what they think ought to be so. Our studies found this same pattern in numerous other cases: the normal grandmother, the normal salad, the normal number of students to be bullied in a middle school. Again and again, our participants did not take the normal to be the same as the average. Instead, what people picked out as the “normal thing to do” or a “normal such-and-such” tended to be intermediate between what they thought was typical and what they thought was ideal. © 2017 The New York Times Company
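Read as a model, the article’s own numbers pin down how far “normal” tilts toward the ideal. A minimal sketch, assuming judged normality is a simple weighted blend of the average and the ideal (the weight below is back-solved from this single TV example, not a parameter fitted by the authors):

```python
def blended_normal(average, ideal, w=1/3):
    """Hypothetical model: judged 'normal' as a weighted blend of the
    statistical average and the ideal. w = 1/3 is back-solved from the
    article's TV example, not a value from the paper."""
    return w * average + (1 - w) * ideal

# Average 4 h of TV, ideal 2.5 h -> participants' "normal" of 3 h
print(blended_normal(4.0, 2.5))  # 3.0
```

On these numbers the ideal gets twice the weight of the average, though a single data point cannot distinguish this from other blending rules.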

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23165 - Posted: 01.30.2017

By Carl Bialik A woman has never come closer to the presidency than Hillary Clinton did in winning the popular vote in November. Yet as women march in Washington on Saturday, many of them to protest the presidency of Donald Trump, an important obstacle to the first woman president remains: the hidden, internalized bias many people hold against career advancement by women. And perhaps surprisingly, there is evidence that women hold more of this bias, on average, than men do. There has been lots of discussion of the role that overt sexism played in both Trump’s campaign and at the ballot box. A YouGov survey conducted two weeks before the election, for example, found that Trump voters had much higher levels of sexism, on average, than Clinton voters, as measured by their level of agreement with statements such as “women seek to gain power by getting control over men.” An analysis of the survey found that sexism played a big role in explaining people’s votes, after controlling for other factors, including gender and political ideology. Other research has reached similar conclusions. Two recent studies of voters, however, suggest that another, subtler form of bias may also have been a factor in the election. These studies looked at what’s known as “implicit bias,” the unconscious tendency to associate certain qualities with certain groups — in this case, the tendency to associate men with careers and women with family. Researchers have found that this kind of bias is stronger on average in women than in men, and, among women, it is particularly strong among political conservatives. And at least according to one study, this unconscious bias was especially strong among one group in 2016: women who supported Trump.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23134 - Posted: 01.23.2017

By Jordan Axt Imagine playing a game where you’re seated in front of four decks of cards. On the back of two decks are pictures of puppies; on the other two are pictures of spiders. Each deck has some cards that win points and others that lose points. In general, the puppy decks are “good” in that they win you more points than they lose, while the spider decks are “bad” in that they lose you more points than they win. You repeatedly select cards in hopes of winning as many points as possible. This game seems pretty easy—and it is. Most players favor the puppy decks from the start and quickly learn to continue favoring them because they produce more points. However, if the pictures on the decks are reversed, the game becomes a little harder. People may have a tougher time initially favoring spider decks because it’s difficult to learn that something people fear like spiders brings positive outcomes and something people enjoy like puppies brings negative outcomes. Performance on this learning task is best when one’s attitudes and motivations are aligned. For instance, when puppies earn you more points than spiders, people’s preference for puppies can lead them to select more puppies initially, and a motivation to earn as many points as possible leads them to select more and more puppies over time. But when spiders earn you more points than puppies, people have to overcome their initial aversion to spiders in order to perform well. © 2017 Scientific American
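Set up as code, the task’s logic is easy to see. Here is a minimal sketch of a value-learning player on such a four-deck task, with made-up payoffs (the article does not give the study’s actual point values); whatever the pictures show, running payoff estimates eventually steer choice toward the “good” decks:

```python
import random

def make_deck(net_positive, size=100):
    """Illustrative payoffs only; the study's real values aren't given."""
    n_wins = 60 if net_positive else 40
    return [10] * n_wins + [-10] * (size - n_wins)

# Two "good" decks and two "bad" decks, as in the task described above.
decks = {"puppy_1": make_deck(True), "puppy_2": make_deck(True),
         "spider_1": make_deck(False), "spider_2": make_deck(False)}

values = {name: 0.0 for name in decks}  # running payoff estimate per deck
alpha = 0.1                             # learning rate

for trial in range(500):
    if random.random() < 0.1:           # occasionally explore at random
        choice = random.choice(list(decks))
    else:                               # otherwise exploit the best estimate
        choice = max(values, key=values.get)
    payoff = random.choice(decks[choice])   # draw a random card
    values[choice] += alpha * (payoff - values[choice])

print(values)  # the "good" puppy decks should end with higher estimates
```

The study’s point concerns the starting conditions this sketch leaves out: an initial attitude (liking puppies, fearing spiders) biases the early choices, so learning is fastest when that bias happens to point at the good decks.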

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23130 - Posted: 01.21.2017

By Rachael Lallensack A video game is helping researchers learn more about how tiny European starlings keep predators at bay. Their massive flocks, consisting of hundreds to thousands of birds, fly together in a mesmerizing, pulsating pattern called a murmuration. For a long time, researchers have suspected that the bigger the flock, the harder it is for predators like falcons and hawks to take down any one member, something known as “confusion effect.” Now, researchers have analyzed that effect—in human hunters. Using the first 3D computer program to simulate a murmuration, scientists tested how well 25 players, acting as flying predators, could target and pursue virtual starlings, whose movements were simulated based on data from real starling flocks (see video above). The team’s findings reaffirmed the confusion effect: The larger the simulated flocks, the harder it was for the “predators” to single out and catch individual prey, the researchers report this week in Royal Society Open Science. So maybe sometimes, it’s not so bad to get lost in a crowd. © 2017 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 7: Vision: From Eye to Brain; Chapter 14: Attention and Consciousness
Link ID: 23115 - Posted: 01.18.2017

By Alan Burdick Some nights—more than I like, lately—I wake to the sound of the bedside clock. The room is dark, without detail, and it expands in such a way that it seems as if I’m outdoors, under an empty sky, or underground, in a cavern. I might be falling through space. I might be dreaming. I could be dead. Only the clock moves, its tick steady, unhurried. At these moments I have the most chilling understanding that time moves in only one direction. I’m tempted to look at the clock, but I already know that it’s the same time it always is: 4 A.M., or 4:10 A.M., or once, for a disconcerting stretch of days, 4:27 A.M. Even without looking, I could deduce the time from the ping of the bedroom radiator gathering steam in winter or the infrequency of the cars passing by on the street outside. In 1917, the psychologist Edwin G. Boring and his wife, Lucy, described an experiment in which they woke people at intervals to see if they knew what time it was; the average estimate was accurate to within fifty minutes, although almost everyone thought it was later than it actually was. They found that subjects were relying on internal or external signals: their degree of sleepiness or indigestion (“The dark brown taste in your mouth is never bad when you have been asleep only a short time”), the moonlight, “bladder cues,” the sounds of cars or roosters. “When a man is asleep, he has in a circle round him the chain of the hours, the sequence of the years, the order of the heavenly bodies,” Proust wrote. “Instinctively he consults them when he awakes, and in an instant reads off his own position on the earth’s surface and the time that has elapsed during his slumbers.” © 2017 Condé Nast.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 23109 - Posted: 01.16.2017

By Maggie Koerth-Baker “The president can’t have a conflict of interest,” Donald Trump told The New York Times in November. He appears to have meant that in the legal sense — the president isn’t bound by the same conflict-of-interest laws that loom over other executive branch officials and employees. But that doesn’t mean the president’s interests can’t be in conflict. When he takes office Jan. 20, Trump will be tangled in a wide array of situations in which his personal connections and business coffers are pulling him in one direction while the interests of the American presidency and people pull him in another. For example, Trump is the president of a vineyard in Virginia that’s requesting foreign worker visas from the government he’ll soon lead. He’s also involved in an ongoing business partnership with the Philippines’ diplomatic trade envoy — a relationship that could predispose Trump to accepting deals that are more favorable to that country than he otherwise might. Once he’s in office, he will appoint some members of the labor board that could hear disputes related to his hotels. Neither Trump nor his transition team replied to interview requests for this article, but his comments to the Times suggest that he genuinely believes he can be objective and put the country first, despite financial and social pressures to do otherwise. Unfortunately, science says he’s probably wrong.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23108 - Posted: 01.16.2017

By Victoria Gill Science reporter, BBC News Direct recordings have revealed what is happening in our brains as we make sense of speech in a noisy room. Focusing on one conversation in a loud, distracting environment is called "the cocktail party effect". It is a common festive phenomenon and of interest to researchers seeking to improve speech recognition technology. Neuroscientists recorded from people's brains during a test that recreated the moment when unintelligible speech suddenly makes sense. A team measured people's brain activity as the words of a previously unintelligible sentence suddenly became clear when a subject was told the meaning of the "garbled speech". The findings are published in the journal Nature Communications. Lead researcher Christopher Holdgraf from the University of California, Berkeley, and his colleagues were able to work with epilepsy patients, who had had a portion of their skull removed and electrodes placed on the brain surface to track their seizures. First, the researchers played a very distorted, garbled sentence to each subject, which almost no one was able to understand. They then played a normal, easy-to-understand version of the same sentence and immediately repeated the garbled version. "After hearing the intact sentence," the researchers explained in their paper, all the subjects understood the subsequent "noisy version". The recordings captured this moment of recognition as patterns of activity in brain areas known to be associated with processing sound and understanding speech. When the subjects heard the very garbled sentence, the scientists reported that they saw little activity in those parts of the brain. Hearing the clearly understandable sentence then triggered patterns of activity in those brain areas. © 2016 BBC.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23004 - Posted: 12.22.2016

Answer by Paul King, Director of Data Science, on Quora: There are hundreds of surprising, perspective-shifting insights about the nature of reality that come from neuroscience. Every bizarre neurological syndrome, every visual illusion, and every clever psychological experiment reveals something entirely unexpected about our experience of the world that we take for granted. Here are a few to give a flavor:
1. Perceptual reality is entirely generated by our brain. We hear voices and meaning from air pressure waves. We see colors and objects, yet our brain only receives signals about reflected photons. The objects we perceive are a construct of the brain, which is why optical illusions can fool the brain.
2. We see the world in narrow disjoint fragments. We think we see the whole world, but we are looking through a narrow visual portal onto a small region of space. You have to move your eyes when you read because most of the page is blurry. We don't see this, because as soon as we become curious about part of the world, our eyes move there to fill in the detail before we see it was missing. While our eyes are in motion, we should see a blank blur, but our brain edits this out.
3. Body image is dynamic and flexible. Our brain can be fooled into thinking a rubber arm or a virtual reality hand is actually a part of our body. In one syndrome, people believe one of their limbs does not belong to them. One man thought a cadaver limb had been sewn onto his body as a practical joke by doctors.
4. Our behavior is mostly automatic, even though we think we are controlling it.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22980 - Posted: 12.13.2016

By DANIEL A. YUDKIN and JAY VAN BAVEL During the first presidential debate, Hillary Clinton argued that “implicit bias is a problem for everyone, not just police.” Her comment moved to the forefront of public conversation an issue that scientists have been studying for decades: namely, that even well-meaning people frequently harbor hidden prejudices against members of other racial groups. Studies have shown that these subtle biases are widespread and associated with discrimination in legal, economic and organizational settings. Critics of this notion, however, protest what they see as a character smear — a suggestion that everybody, deep down, is racist. Vice President-elect Mike Pence has said that an “accusation of implicit bias” in cases where a white police officer shoots a black civilian serves to “demean law enforcement.” Writing in National Review, David French claimed that the concept of implicit bias lets people “indict entire communities as bigoted.” But implicit bias is not about bigotry per se. As new research from our laboratory suggests, implicit bias is grounded in a basic human tendency to divide the social world into groups. In other words, what may appear as an example of tacit racism may actually be a manifestation of a broader propensity to think in terms of “us versus them” — a prejudice that can apply, say, to fans of a different sports team. This doesn’t make the effects of implicit bias any less worrisome, but it does mean people should be less defensive about it. Furthermore, our research gives cause for optimism: Implicit bias can be overcome with rational deliberation. In a series of experiments whose results were published in The Journal of Experimental Psychology: General, we set out to determine how severely people would punish someone for stealing. Our interest was in whether a perpetrator’s membership in a particular group would influence the severity of the punishment he or she received. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 22979 - Posted: 12.12.2016

Rosie Mestel The 2016 US election was a powerful reminder that beliefs tend to come in packages: socialized medicine is bad, gun ownership is a fundamental right, and climate change is a myth — or the other way around. Stances that may seem unrelated can cluster because they have become powerful symbols of membership of a group, says Dan Kahan, who teaches law and psychology at Yale Law School in New Haven, Connecticut. And the need to keep believing can further distort people’s perceptions and their evaluation of evidence. Here, Kahan tells Nature about the real-world consequences of group affinity and cognitive bias, and about research that may point to remedies. This interview has been edited for length and clarity.
One measure is how individualistic or communitarian people are, and how egalitarian or hierarchical. Hierarchical and individualistic people tend to have confidence in markets and industry: those represent human ingenuity and power. People who are egalitarian and communitarian are suspicious of markets and industry. They see them as responsible for social disparity. It’s natural to see things you consider honourable as good for society, and things that are base, as bad. Such associations will motivate people’s assessment of evidence.
Can you give an example?
In a study, we showed people data from gun-control experiments and varied the results. People who were high in numeracy always saw when a study supported their view. If it didn’t support their view, they didn’t notice — or argued their way out of it. © 2016 Macmillan Publishers Limited

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 22946 - Posted: 12.03.2016

By Alison Howell What could once only be imagined in science fiction is now increasingly coming to fruition: Drones can be flown by thought alone. Pharmaceuticals can help soldiers forget traumatic experiences or produce feelings of trust to encourage confession in interrogation. DARPA-funded research is working on everything from implanting brain chips to "neural dust" in an effort to alleviate the effects of traumatic experience in war. Invisible microwave beams produced by military contractors and tested on U.S. prisoners can produce the sensation of burning at a distance. What all these techniques and technologies have in common is that they're recent neuroscientific breakthroughs propelled by military research within a broader context of rapid neuroscientific development, driven by massive government-funded projects in both America and the European Union. Even while much about the brain remains mysterious, this research has contributed to the rapid and startling development of neuroscientific technology. And while we might marvel at these developments, it is also undeniably true that this state of affairs raises significant ethical questions. What is the proper role – if any – of neuroscience in national defense or war efforts? My research addresses these questions in the broader context of looking at how international relations, and specifically warfare, are shaped by scientific and medical expertise and technology. 2016 © U.S. News & World Report L.P.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 22944 - Posted: 12.03.2016

By Melissa Dahl Considering its origin story, it’s not so surprising that hypnosis and serious medical science have often seemed at odds. The man typically credited with creating hypnosis, albeit in a rather primitive form, is Franz Mesmer, a doctor in 18th-century Vienna. (Mesmer, mesmerize. Get it?) Mesmer developed a general theory of disease he called “animal magnetism,” which held that every living thing carries within it an internal magnetic force, in liquid form. Illness arises when this fluid becomes blocked, and can be cured if it can be coaxed to flow again, or so Mesmer’s thinking went. To get that fluid flowing, as science journalist Jo Marchant describes in her recent book, Cure, Mesmer “simply waved his hands to direct it through his patients’ bodies” — the origin of those melodramatic hand motions that stage hypnotists use today. After developing a substantial following — “mesmerism” became “the height of fashion” in late 1780s Paris, writes Marchant — Mesmer became the subject of what was essentially the world’s first clinical trial. King Louis XVI pulled together a team of the world’s top scientists, including Benjamin Franklin, who tested mesmerism and found its capacity to “cure” was, essentially, a placebo effect. “Not a shred of evidence exists for any fluid,” Franklin wrote. “The practice … is the art of increasing the imagination by degrees.” Maybe so. But that doesn’t mean it doesn’t work. © 2016, New York Media LLC.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 5: The Sensorimotor System
Link ID: 22931 - Posted: 11.30.2016

By Yasemin Saplakoglu Even if you don’t have rhythm, your pupils do. In a new study, neuroscientists played drumming patterns from Western music, including beats typical in pop and rock, while asking volunteers to focus on computer screens for an unrelated fast-paced task that involved pressing the space bar as quickly as possible in response to a signal on the screen. Unbeknownst to the participants, the music omitted strong and weak beats at random times. (In one example clip the researchers used, careful listening reveals bass and hi-hat beats omitted throughout.) Eye scanners tracked the dilations of the subjects’ pupils as the music played. Their pupils enlarged when the rhythms dropped certain beats, even though the participants weren’t paying attention to the music. The biggest dilations matched the omissions of the beats in the most prominent locations in the music, usually the important first beat in a repeated set of notes. The results suggest that we may have an automatic sense of “hierarchical meter”—a pattern of strong and weak beats—that governs our expectations of music, the researchers write in the February 2017 issue of Brain and Cognition. Perhaps, the authors say, our eyes reveal clues to the importance that music and rhythm play in our lives. © 2016 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22920 - Posted: 11.29.2016

Hannah Devlin The human brain is predisposed to learn negative stereotypes, according to research that offers clues as to how prejudice emerges and spreads through society. The study found that the brain responds more strongly to information about groups who are portrayed unfavourably, adding weight to the view that the negative depiction of ethnic or religious minorities in the media can fuel racial bias. Hugo Spiers, a neuroscientist at University College London, who led the research, said: “The newspapers are filled with ghastly things people do ... You’re getting all these news stories and the negative ones stand out. When you look at Islam, for example, there’s so many more negative stories than positive ones and that will build up over time.” The scientists also uncovered a characteristic brain signature seen when participants were told a member of a “bad” group had done something positive - an observation that is likely to tally with the subjective experience of minorities. “Whenever someone from a really bad group did something nice they were like, ‘Oh, weird,’” said Spiers. Previous studies have identified brain areas involved in gender or racial stereotyping, but this is the first attempt to investigate how the brain learns to link undesirable traits to specific groups and how this is converted into prejudice over time. © 2016 Guardian News and Media Limited

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22821 - Posted: 11.02.2016

Laura Sanders The eyes may reveal whether the brain’s internal stopwatch runs fast or slow. Pupil size predicted whether a monkey would over- or underestimate a second, scientists report in the Nov. 2 Journal of Neuroscience. Scientists knew that pupils get bigger when a person is paying attention. They also knew that paying attention can influence how people perceive the passage of time. Using monkeys, the new study links pupil size and timing directly. “What they’ve done here is connect those dots,” says neuroscientist Thalia Wheatley of Dartmouth College. More generally, the study shows how the eyes are windows into how the brain operates. “There’s so much information coming out of the eyes,” Wheatley says. Neuroscientist Masaki Tanaka of Hokkaido University School of Medicine in Japan and colleagues trained three Japanese macaques to look at a spot on a computer screen after precisely one second had elapsed. The study measured the monkeys’ subjective timing abilities: The monkeys had to rely on themselves to count the milliseconds. Just before each trial, the researchers measured pupil diameters. When the monkeys underestimated a second by looking too soon, their pupil sizes were slightly larger than in trials in which the monkeys overestimated a second, the researchers found. That means that when pupils were large, the monkeys felt time zoom by. But when pupils were small, time felt slower. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22818 - Posted: 11.02.2016

Nicola Davis The proficiency of elite football referees could be down to their eagle eyes, say researchers. A study of elite and sub-elite referees has found that a greater tendency to predict and watch contact zones between players contributes to the greater accuracy of top-level referees. “Over the years they develop so much experience that they now can anticipate, very well, future events so that they can already direct their attention to those pieces of information where they expect something to happen,” said lead author Werner Helsen from the University of Leuven. Keith Hackett, a former football referee and former general manager of the Professional Game Match Officials Limited, said the research chimed with his own experiences. “In working with elite referees for a number of years I have recognised their ability to see, recognise, think and then act in a seamless manner,” he said. “They develop skill sets that enable them to see and this means good game-reading and cognitive skills to be in the right place at the right time.” Mistakes, he believes, often come down to poor visual perception. “Last week, we saw an elite referee fail to detect the violent act of [Moussa] Sissoko using his arm/elbow, putting his opponent’s safety at risk,” he said. “The review panel, having received confirmation from the referee that he failed to see the incident despite looking in the direction of the foul challenge, were able to act.” Writing in the journal Cognitive Research, researchers from the University of Leuven in Belgium and Brunel University in west London say they recruited 39 referees, 20 of whom were elite referees and 19 of whom were experienced but had never refereed at a professional level. © 2016 Guardian News and Media Limited

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22817 - Posted: 11.01.2016