Links for Keyword: Attention

Links 1 - 20 of 388

Forget cellphones; rambunctious friends may be the riskiest driver distraction for teens, according to a new study. Researchers installed video and G-force recorders in the vehicles of 52 newly licensed high school students for 6 months. They found that certain distractions, such as fiddling with the car’s controls and eating, were not strongly related to serious incidents, which included collisions and evasive maneuvers. However, when passengers in the car were engaged in loud conversation, teen drivers were six times more likely to have a serious incident. What’s more, horseplay increased risk by a factor of three whereas cellphone use only doubled it, the team reported online this week in the Journal of Adolescent Health. Forty-three states prohibit newly licensed drivers from carrying more than one other teen in the car, and the study authors say their data suggest that's good policy. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 19514 - Posted: 04.22.2014

Anne Trafton | MIT News Office Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in place while scanning the crowd, paying special attention to finding a match. A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science. Scientists know much less about this type of attention, known as object-based attention, than about spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper. “The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.” In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19478 - Posted: 04.12.2014

In an op-ed in the Sunday edition of this newspaper, Barbara Ehrenreich, card-carrying liberal rationalist, writes about her own mystical experiences (the subject of her new book), and argues that the numinous deserves more cutting-edge scientific study: I appreciate the spirit (if you will) of this argument, but I am very doubtful as to its application. The trouble is that in its current state, cognitive science has a great deal of difficulty explaining “what happens” when “those wires connect” for non-numinous experience, which is why mysterian views of consciousness remain so potent even among thinkers whose fundamental commitments are atheistic and materialistic. (I’m going to link to the internet’s sharpest far-left scold for a good recent polemic on this front.) That is to say, even in contexts where it’s very easy to identify the physical correlative to a given mental state, and to get the kind of basic repeatability that the scientific method requires — show someone an apple, ask them to describe it; tell them to bite into it, ask them to describe the taste; etc. — there is no kind of scientific or philosophical agreement on what is actually happening to produce the conscious experience of the color “red,” the conscious experience of the crisp McIntosh taste, etc. So if we can’t say how this “normal” conscious experience works, even when we can easily identify the physical stimuli that produce it, it seems exponentially harder to scientifically investigate the invisible, maybe-they-exist and maybe-they-don’t stimuli — be they divine, alien, or panpsychic — that Ehrenreich hypothesizes might produce more exotic forms of conscious experience. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19468 - Posted: 04.10.2014

If you know only one thing about violins, it is probably this: A 300-year-old Stradivarius supposedly possesses mysterious tonal qualities unmatched by modern instruments. However, even elite violinists cannot tell a Stradivarius from a top-quality modern violin, a new double-blind study suggests. Like the sound of coughing during the delicate second movement of Beethoven's violin concerto, the finding seems sure to annoy some people, especially dealers who broker the million-dollar sales of rare old Italian fiddles. But it may come as a relief to the many violinists who cannot afford such prices. "There is nothing magical [about old Italian violins], there is nothing that is impossible to reproduce," says Olivier Charlier, a soloist who participated in the study and who plays a fiddle made by Carlo Bergonzi (1683 to 1747). However, Yi-Jia Susanne Hou, a soloist who participated in the study and who until recently played a violin by Bartolomeo Giuseppe Antonio Guarneri "del Gesù" (1698 to 1744), questions whether the test was fair. "Whereas I believe that [the researchers] assembled some of the finest contemporary instruments, I am quite certain that they didn't have some of the finest old instruments that exist," she says. The study marks the latest round in debate over the "secret of Stradivarius." Some violinists, violinmakers, and scientists have thought that Antonio Stradivari (1644 to 1737) and his contemporaries in Cremona, Italy, possessed some secret—perhaps in the varnish or the wood they used—that enabled them to make instruments of unparalleled quality. Yet, for decades researchers have failed to identify a single physical characteristic that distinguishes the old Italians from other top-notch violins. The varnish is varnish; the wood (spruce and maple) isn't unusual. Moreover, for decades tests have shown that listeners cannot tell an old Italian from a modern violin. © 2014 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19460 - Posted: 04.08.2014

By ANA GANTMAN and JAY VAN BAVEL TAKE a close look at your breakfast. Is that Jesus staring out at you from your toast? Such apparitions can be as lucrative as they are seemingly miraculous. In 2004, a Florida woman named Diane Duyser sold a decade-old grilled cheese sandwich that bore a striking resemblance to the Virgin Mary. She got $28,000 for it on eBay. The psychological phenomenon of seeing something significant in an ambiguous stimulus is called pareidolia. Virgin Mary grilled cheese sandwiches and other pareidolia remind us that almost any object is open to multiple interpretations. Less understood, however, is what drives some interpretations over others. In a forthcoming paper in the journal Cognition, we hope to shed some light on that question. In a series of experiments, we examined whether awareness of perceptually ambiguous stimuli was enhanced by the presence of moral content. We quickly flashed strings of letters on a computer screen and asked participants to indicate whether they believed each string formed a word or not. To ensure that the letter strings were perceptually ambiguous, we flashed them for durations between approximately 40 and 70 milliseconds. (When they were presented for too long, people easily saw all the letter strings and demonstrated close to 100 percent accuracy. When they were presented too quickly, people were unable to see the words and performed “at chance,” around 50 percent accuracy.) Some of the strings of letters we flashed were words; others were not. Importantly, some of the words we flashed had moral content (virtue, steal, God) and others did not (virtual, steel, pet). Over the course of three experiments, we found that participants correctly identified strings of letters as words more often when they formed moral words (69 percent accuracy) than when they formed nonmoral words (65 percent accuracy). This suggested that moral content gave a “boost” to perceptually ambiguous stimuli — a shortcut to conscious awareness. We call this phenomenon the “moral pop-out effect.” © 2014 The New York Times Company
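
The core analysis reduces to comparing mean accuracy across the two word conditions. Here is a minimal Python sketch of that comparison; the trial records are invented to mirror the reported pattern (using the authors' example words) and are not the study's data.

```python
# Toy reconstruction of the lexical-decision accuracy comparison.
# All trial records below are invented illustrations, not the
# authors' data: each row is one flashed letter string, whether it
# was a moral word, and whether it was correctly identified.
import pandas as pd

trials = pd.DataFrame({
    "string":  ["virtue", "steal", "God", "virtual", "steel", "pet"],
    "moral":   [True, True, True, False, False, False],
    "correct": [True, True, True, True, True, False],
})

# Mean accuracy by condition -- the contrast behind the study's
# 69% (moral) vs. 65% (nonmoral) figures.
print(trials.groupby("moral")["correct"].mean())
```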

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19453 - Posted: 04.07.2014

By BRAYDEN KING and JERRY KIM THIS season Major League Baseball is allowing its officiating crews to use instant replay to review certain critical calls, including home runs, force plays and foul balls. But the calling of the strike zone — determining whether a pitch that is not swung at is a ball or a strike — will still be left completely to the discretion of the officials. This might seem an odd exception, since calling the strike zone may be the type of officiating decision most subject to human foible. In research soon to be published in the journal Management Science, we studied umpires’ strike-zone calls using pitch-location data compiled by the high-speed cameras introduced by Major League Baseball several years ago in an effort to measure, monitor and reward umpires’ accuracy. After analyzing more than 700,000 pitches thrown during the 2008 and 2009 seasons, we found that umpires frequently made errors behind the plate — about 14 percent of non-swinging pitches were called erroneously. Some of those errors occurred in fairly predictable ways. We found, for example, that umpires tended to favor the home team by expanding the strike zone, calling a strike when the pitch was actually a ball 13.3 percent of the time for home team pitchers versus 12.7 percent of the time for visitors. Other errors were more surprising. Contrary to the expectation (or hope) that umpires would be more accurate in important situations, we found that they were, in fact, more likely to make mistakes when the game was on the line. For example, our analyses suggest that umpires were 13 percent more likely to miss an actual strike in the bottom of the ninth inning of a tie game than in the top of the first inning, on the first pitch. © 2014 The New York Times Company
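
The underlying computation is straightforward to sketch. The toy example below assumes hypothetical column names for the pitch-tracking data, which are not reproduced here: the camera's measurement, the umpire's call, and a flag for whether the home team is pitching.

```python
# Toy version of the error-rate comparison described above.
# Rows and column names are hypothetical stand-ins for the study's
# pitch-location dataset: 'true_strike' is the camera's measurement,
# 'called_strike' is the umpire's call.
import pandas as pd

pitches = pd.DataFrame({
    "true_strike":   [True, False, False, True, False, False],
    "called_strike": [True, True,  False, True, False, True],
    "home_pitcher":  [True, True,  False, False, True, False],
})

# Overall error rate (the study reports ~14% on real data).
pitches["error"] = pitches["true_strike"] != pitches["called_strike"]
print("error rate:", pitches["error"].mean())

# Balls wrongly called strikes, split by home vs. visiting pitcher --
# the comparison behind the 13.3% vs. 12.7% home-team figure.
balls = pitches[~pitches["true_strike"]]
print(balls.groupby("home_pitcher")["called_strike"].mean())
```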

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19426 - Posted: 03.31.2014

Andreas von Bubnoff People who are unable to recognize faces can still learn to distinguish between other types of very similar objects, researchers report. The finding provides fresh support for the idea that the brain mechanisms that process face images are specialized for that task. It also offers evidence against an 'expertise' hypothesis, in which the same mechanisms are responsible for recognition of faces and other highly similar objects we have learned to tell apart — the way bird watchers can recognize birds after years of training. Constantin Rezlescu, a psychologist at Harvard University in Cambridge, Massachusetts, and his colleagues worked with two volunteers nicknamed Florence and Herschel, who had acquired prosopagnosia, or face blindness, following brain damage. The condition renders people unable to recognize and distinguish between faces — in some cases, even those of their own family members. The team trained Florence and Herschel to recognize greebles, computer-generated objects that differ from one another in similar ways to faces. The two volunteers spent eight training sessions of up to one hour learning to recognize 20 different greebles. The objects belonged to five different ‘families’ that were easier to distinguish between than were the individual greebles, and initially the participants took longer to discriminate individual greebles within the same family than they did the different families. But by the end of the training, they could tell individual greebles apart just as quickly — a sign that they had become experts in recognizing them, just as dog trainers can recognize individual dogs as easily as different breeds of dog. The study appears in Proceedings of the National Academy of Sciences. © 2014 Nature Publishing Group

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19408 - Posted: 03.25.2014

By Nathan Collins A car detects when a driver starts to nod off and gently pulls over. A tablet or laptop senses its user is confused and offers assistance. Such interventions seem futuristic, but in fact they may not require any technological breakthroughs: a recent study suggests that with the aid of a standard camera, a simple computer program can learn to read people's eye movements to determine what they are doing and perhaps how they are feeling. Psychologists at the University of South Carolina were curious if a computer could figure out what a person was up to based on their eye movements. They first had 12 people engage in four tasks, including reading lines of text and searching photographs for a specific printed letter. Each person repeated the tasks 35 to 50 times while a camera recorded how their eyes moved. Using a subset of those data, the team trained a simple computer program, called a naive Bayes classifier, to identify which of the four tasks each person was doing. In the remaining trials, the classifier correctly determined which task the person was working on 75 percent of the time, well above the 25 percent expected by chance. Because the computer program is based on a flexible algorithm that is simple but powerful, this set-up could most likely be used to identify emotions or mental states such as confusion or fatigue, the researchers suggest in the paper, which appeared in September 2013 in PLOS ONE. With only a brief training period, a car's onboard computer—existing models are more than powerful enough—could learn how a driver's gaze changed as he or she became more exhausted. Further work, the authors suggest, could lead to devices capable of identifying and aiding people in need of assistance in a variety of situations. © 2014 Scientific American
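
A naive Bayes classifier of this kind takes only a few lines with scikit-learn. The sketch below is a hedged illustration, not the study's pipeline: the eye-movement features (fixation duration, saccade amplitude, fixation count) and the synthetic data are assumptions of ours, so accuracy here will sit near chance rather than the 75 percent reported on real recordings.

```python
# Minimal sketch of task classification from eye movements with a
# naive Bayes classifier. Features and data are hypothetical: real
# recordings, not random numbers, are what make the method work.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Hypothetical per-trial features: mean fixation duration (ms),
# mean saccade amplitude (degrees), number of fixations.
n_trials = 400
X = rng.normal(loc=[250.0, 4.0, 30.0], scale=[50.0, 1.5, 8.0],
               size=(n_trials, 3))
y = rng.integers(0, 4, size=n_trials)  # 4 tasks, e.g. read vs. search

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GaussianNB().fit(X_train, y_train)

# On random labels this hovers near the 25% chance level; the study
# reports ~75% with real eye-movement data.
print("accuracy:", clf.score(X_test, y_test))
```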

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 19379 - Posted: 03.19.2014

Sara Reardon Two monkeys sit at computer screens, eyeing one another as they wait for a promised reward: apple juice. Each has a choice — it can either select a symbol that results in juice being shared equally, or pick one that delivers most of the juice to itself. But being selfish is risky. If its partner also chooses not to share, neither gets much juice. This game, the ‘prisoner’s dilemma’, is a classic test of strategy that involves the simultaneous evaluation of an opponent’s thinking. Researchers have now discovered — and manipulated — specific brain circuits in rhesus macaques (Macaca mulatta) that seem to be involved in the animals’ choices, and in their assessments of their partners’ choices. Investigating the connections could shed light on how social context affects decision-making in humans, and how disorders that affect social skills, such as autism spectrum disorder, disrupt brain circuitry. “Once we have identified that there are particular neural signals necessary to drive the processes, we can begin to tinker,” says Michael Platt, a neurobiologist at Duke University in Durham, North Carolina. Neurobiologists Keren Haroush and Ziv Williams of Harvard Medical School in Boston, Massachusetts, zoomed in on neural circuits in rhesus macaques by implanting electrode arrays into a brain area called the dorsal anterior cingulate cortex (dACC), which is associated with rewards and decision-making. The arrays recorded the activity of hundreds of individual neurons. When the monkeys played the prisoner’s dilemma against a computer program, they rarely chose to cooperate. But when they played with another monkey that they could see, they were several times more likely to choose to share the juice. © 2014 Nature Publishing Group
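
The game's structure is easy to state in code. The payoff matrix below is a minimal sketch; the juice amounts are illustrative assumptions rather than the rewards used in the experiment, but they preserve the key property that mutual selfishness leaves both monkeys with little juice.

```python
# Payoff matrix for one prisoner's-dilemma trial. Juice amounts are
# illustrative assumptions, not the experiment's actual rewards.
PAYOFFS = {
    # (my choice, partner's choice) -> (my juice, partner's juice)
    ("share", "share"):     (3, 3),  # both cooperate: equal, decent reward
    ("share", "selfish"):   (0, 5),  # I share, partner defects
    ("selfish", "share"):   (5, 0),  # I defect on a cooperator
    ("selfish", "selfish"): (1, 1),  # mutual defection: little for either
}

def play(my_choice: str, partner_choice: str) -> tuple[int, int]:
    """Return (my_juice, partner_juice) for one round."""
    return PAYOFFS[(my_choice, partner_choice)]

# Being selfish is risky: if the partner is selfish too, both lose out.
print(play("selfish", "selfish"))  # -> (1, 1)
```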

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19297 - Posted: 02.26.2014

Want to read someone’s mind? Look at their pupils. A person about to answer “yes” to a question, especially if they are more used to answering “no,” will have more enlarged pupils than someone about to answer “no,” according to a new study. Normally, pupils dilate when a person is in a darkened environment to let more light into the eye and allow better vision. But pupil size can also be altered by levels of signaling chemicals naturally produced by the brain. In the study, published online this week in the Proceedings of the National Academy of Sciences, scientists observed the pupils of 29 people as they pressed a “yes” or “no” button to indicate whether they’d seen a difficult-to-detect visual cue on a screen in front of them. When a person was deciding how to answer—in the seconds before pressing a button—their pupils grew larger. And if a person was normally biased toward answering “no” when they weren’t sure about the visual cue, then the pupil change was even more pronounced in the decision-making seconds before a “yes” answer. The finding could lead to new ways to detect people’s intrinsic biases and how confident they are in a given answer, important variables in many sociological and psychological studies. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 19166 - Posted: 01.25.2014

By DAN HURLEY Two and a half millenniums ago, a prince named Siddhartha Gautama traveled to Bodh Gaya, India, and began to meditate beneath a tree. Forty-nine days of continuous meditation later, tradition tells us, he became the Buddha — the enlightened one. More recently, a psychologist named Amishi Jha traveled to Hawaii to train United States Marines to use the same technique for shorter sessions to achieve a much different purpose: mental resilience in a war zone. “We found that getting as little as 12 minutes of meditation practice a day helped the Marines to keep their attention and working memory — that is, the added ability to pay attention over time — stable,” said Jha, director of the University of Miami’s Contemplative Neuroscience, Mindfulness Research and Practice Initiative. “If they practiced less than 12 minutes or not at all, they degraded in their functioning.” Jha, whose program has received a $1.7 million, four-year grant from the Department of Defense, described her results at a bastion of scientific conservatism, the New York Academy of Sciences, during a meeting on “The Science of Mindfulness.” Yet mindfulness hasn’t long been part of serious scientific discourse. She first heard another scientist mention the word “meditation” during a lecture in 2005. “I thought, I can’t believe he just used that word in this audience, because it wasn’t something I had ever heard someone utter in a scientific context,” Jha said. Although pioneers like Jon Kabat-Zinn, now emeritus professor at the University of Massachusetts Medical Center, began teaching mindfulness meditation as a means of reducing stress as far back as the 1970s, all but a dozen or so of the nearly 100 randomized clinical trials have been published since 2005. And the most recent studies of mindfulness — the simple, nonjudgmental observation of a person’s breath, body or just about anything else — are taking the practice in directions that might have shocked the Buddha. In addition to military fitness, scientists are now testing brief stints of mindfulness training as a means to improve scores on standardized tests and lay down new connections between brain cells. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19142 - Posted: 01.16.2014

By Emilie Reas “Come on. Get out of the express checkout lane! That’s way more than twelve items, lady.” Without having to count, you can make a good guess at how many purchases the shopper in front of you is making. She may think she’s pulling a fast one, but thanks to the brain’s refined sense for quantity, she’s not fooling anyone. This ability to perceive numerosity – or number of items – does more than help prevent express lane fraud; it also builds the foundation for our arithmetic skills, the economic system and our concept of value. Until recently, it’s remained a puzzle how the brain allows us to so quickly and accurately judge quantity. Neuroscientists believe that neural representations of most high-level cognitive concepts – for example, those involved in memory, language or decision-making – are distributed, in a relatively disorganized manner, throughout the brain. In contrast, highly organized, specialized brain regions have been identified that represent most lower-level sensory information, such as sights, sounds, or physical touch. Such areas resemble maps, in that sensory information is arranged in a logical, systematic spatial layout. Notably, this type of neural topography has only previously been observed for the basic senses, but never for a high-level cognitive function. Researchers from the Netherlands may have discovered an exception to this rule, as reported in their recently published Science paper: a small brain area which represents numerosity along a continuous “map.” Just as we organize numbers along a mental “number line,” with one at the left, increasing in magnitude to the right, so is quantity mapped onto space in the brain. One side of this brain region responds to small numbers, the adjacent region to larger numbers, and so on, with numeric representations increasing to the far end. © 2014 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19135 - Posted: 01.15.2014

by Helen Thomson DRAW a line across a page, then write on it what you had for dinner yesterday and what you plan to eat tomorrow. If you are a native English speaker, or hail from pretty much any European country, you no doubt wrote last night's meal to the left of tomorrow night's. That's because we construct mental timelines to represent and reason about time, and most people in the West think of the past as on the left, and the future as on the right. Arnaud Saj at the University of Geneva, Switzerland, and his colleagues wondered whether the ability to conjure up a mental timeline is a necessary part of reasoning about events in time. To investigate, they recruited seven Europeans with what's called left hemispatial neglect. That means they have damage to parts of the right side of their brain, limiting their ability to detect, identify and interact with objects in the left-hand side of space. They may eat from only the right side of a plate, shave just the right side of their face, and ignore numbers on the left side of a clock. The team also recruited seven volunteers who had damage to the right side of their brain but didn't have hemispatial neglect, and seven people with undamaged brains. All the volunteers took part in a variety of memory tests. First, they learned about a fictional man called David. They were shown pictures of what David liked to eat 10 years ago, and what he might like to eat in 10 years' time. Participants were then shown drawings of 10 of David's favourite foods, plus four food items they hadn't seen before. Participants had to say whether it was a food that David liked in the past or might like in future. The tests were repeated with items in David's apartment, and his favourite clothes. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19095 - Posted: 01.04.2014

Associated Press A sophisticated, real-world study confirms that dialing, texting or reaching for a cell phone while driving raises the risk of a crash or near-miss, especially for younger drivers. But the research also produced a surprise: Simply talking on the phone did not prove dangerous, as it has in other studies. This one did not distinguish between handheld and hands-free devices - a major weakness. And even though talking doesn't require drivers to take their eyes off the road, it's hard to talk on a phone without first reaching for it or dialing a number - things that raise the risk of a crash, researchers note. Earlier work with simulators, test tracks and cell phone records suggests that risky driving increases when people are on cell phones, especially teens. The 15- to 20-year-old age group accounts for 6 percent of all drivers but 10 percent of traffic deaths and 14 percent of police-reported crashes with injuries. For the new study, researchers at the Virginia Tech Transportation Institute installed video cameras, global positioning systems, lane trackers, gadgets to measure speed and acceleration, and other sensors in the cars of 42 newly licensed drivers 16 or 17 years old, and 109 adults with an average of 20 years behind the wheel. © 2014 Hearst Communications, Inc.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19091 - Posted: 01.04.2014

Tomas Jivanda Being pulled into the world of a gripping novel can trigger actual, measurable changes in the brain that linger for at least five days after reading, scientists have said. The new research, carried out at Emory University in the US, found that reading a good book may cause heightened connectivity in the brain and neurological changes that persist in a similar way to muscle memory. The changes were registered in the left temporal cortex, an area of the brain associated with receptivity for language, as well as the primary sensory motor region of the brain. Neurons of this region have been associated with tricking the mind into thinking it is doing something it is not, a phenomenon known as grounded cognition - for example, just thinking about running can activate the neurons associated with the physical act of running. “The neural changes that we found associated with physical sensation and movement systems suggest that reading a novel can transport you into the body of the protagonist,” said neuroscientist Professor Gregory Berns, lead author of the study. “We already knew that good stories can put you in someone else’s shoes in a figurative sense. Now we’re seeing that something may also be happening biologically.” Twenty-one students took part in the study, with all participants reading the same book - Pompeii, a 2003 thriller by Robert Harris, which was chosen for its page-turning plot. “The story follows a protagonist, who is outside the city of Pompeii and notices steam and strange things happening around the volcano,” said Prof Berns. “It depicts true events in a fictional and dramatic way. It was important to us that the book had a strong narrative line.” © independent.co.uk

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 19080 - Posted: 12.31.2013

Oliver Burkeman As we stumble again into the season of overindulgence – that sacred time of year when wine, carbs and sofas replace brisk walks for all but the most virtuous – a headline in the (excellent) new online science magazine Nautilus catches my eye: "What If Obesity Is Nobody's Fault?" The article describes new research on mice: a genetic alteration, it appears, can make them obese, despite eating no more than others. "Many of us unfortunately have had an attitude towards obese people [as] having a lack of willpower or self-control," one Harvard researcher is quoted as saying. "It's clearly something beyond that." No doubt. But that headline embodies an assumption that's rarely questioned. Suppose, hypothetically, obesity were solely a matter of willpower: laying off the crisps, exercising and generally bucking your ideas up. What makes us so certain that obesity would be the fault of the obese even then? This sounds like the worst kind of bleeding-heart liberalism, a condition from which I probably suffer (I blame my genes). But it's a real philosophical puzzle, with implications reaching far beyond obesity to laziness in all contexts, from politicians' obsession with "hardworking families" to the way people beat themselves up for not following through on their plans. We don't blame people for most physical limitations (if you broke your leg, it wouldn't be a moral failing to cancel your skydiving trip), nor for many other impediments: it's hardly your fault if you're born into educational or economic disadvantage. Yet almost everyone treats laziness and weakness of will as exceptions. If you can't be bothered to try, you've only yourself to blame. It's a rule some apply most harshly to themselves, mounting epic campaigns of self-chastisement for procrastinating, failing to exercise and so on. © 2013 Guardian News and Media Limited

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19034 - Posted: 12.14.2013

By Emilie Reas Did you make it to work on time this morning? Go ahead and thank the traffic gods, but also take a moment to thank your brain. The brain’s impressively accurate internal clock allows us to detect the passage of time, a skill essential for many critical daily functions. Without the ability to track elapsed time, our morning shower could continue indefinitely. Without that nagging feeling to remind us we’ve been driving too long, we might easily miss our exit. But how does the brain generate this finely tuned mental clock? Neuroscientists believe that we have distinct neural systems for processing different types of time, for example, to maintain a circadian rhythm, to control the timing of fine body movements, and for conscious awareness of time passage. Until recently, most neuroscientists believed that this latter type of temporal processing – the kind that alerts you when you’ve lingered over breakfast for too long – is supported by a single brain system. However, emerging research indicates that the model of a single neural clock might be too simplistic. A new study, recently published in the Journal of Neuroscience by neuroscientists at the University of California, Irvine, reveals that the brain may in fact have a second method for sensing elapsed time. What’s more, the authors propose that this second internal clock not only works in parallel with our primary neural clock, but may even compete with it. Past research suggested that a brain region called the striatum lies at the heart of our central inner clock, working with the brain’s surrounding cortex to integrate temporal information. For example, the striatum becomes active when people pay attention to how much time has passed, and individuals with Parkinson’s Disease, a neurodegenerative disorder that disrupts input to the striatum, have trouble telling time. © 2013 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 18978 - Posted: 11.27.2013

Ed Yong A large international group set up to test the reliability of psychology experiments has successfully reproduced the results of 10 out of 13 past experiments. The consortium also found that two effects could not be reproduced. Psychology has been buffeted in recent years by mounting concern over the reliability of its results, after repeated failures to replicate classic studies. A failure to replicate could mean that the original study was flawed, the new experiment was poorly done or the effect under scrutiny varies between settings or groups of people. To tackle this 'replicability crisis', 36 research groups formed the Many Labs Replication Project to repeat 13 psychological studies. The consortium combined tests from earlier experiments into a single questionnaire — meant to take 15 minutes to complete — and delivered it to 6,344 volunteers from 12 countries. The team chose a mix of effects that represent the diversity of psychological science, from classic experiments that have been repeatedly replicated to contemporary ones that have not. Ten of the effects were consistently replicated across different samples. These included classic results from economics Nobel laureate and psychologist Daniel Kahneman at Princeton University in New Jersey, such as gain-versus-loss framing, in which people are more prepared to take risks to avoid losses than to make gains; and anchoring, an effect in which the first piece of information a person receives can introduce bias to later decisions. The team even showed that anchoring is substantially more powerful than Kahneman’s original study suggested. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 18974 - Posted: 11.26.2013

by Anil Ananthaswamy Can you tickle yourself if you are fooled into thinking that someone else is tickling you? A new experiment says no, challenging a widely accepted theory about how our brains work. It is well known that we can't tickle ourselves. In 2000, Sarah-Jayne Blakemore of University College London (UCL) and colleagues came up with a possible explanation. When we intend to move, the brain sends commands to the muscles, but also predicts the sensory consequences of the impending movement. When the prediction matches the actual sensations that arise, the brain dampens down its response to those sensations. This prevents us from tickling ourselves (NeuroReport, DOI: 10.1097/00001756-200008030-00002). Jakob Hohwy of Monash University in Clayton, Australia, and colleagues decided to do a tickle test while simultaneously subjecting people to a body swap illusion. In this illusion, the volunteer and experimenter sat facing each other. The subject wore goggles that displayed the feed from a head-mounted camera. In some cases the camera was mounted on the subject's head, so that they saw things from their own perspective, while in others it was mounted on the experimenter's head, providing the subject with the experimenter's perspective. Using their right hands, both the subject and the experimenter held on to opposite ends of a wooden rod, which had a piece of foam attached to each end. The subject and experimenter placed their left palms against the foam at their end. Next, the subject or the experimenter took turns to move the rod with their right hand, causing the piece of foam to tickle both of their left palms. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 18954 - Posted: 11.21.2013

By Daisy Grewal How good are you at multitasking? The way you answer that question may tell you more than you think. According to recent research, the better people think they are at multitasking, the worse they actually are at it. And the more that you think you are good at it, the more likely you are to multitask when driving. Maybe the problem of distracted driving has less to do with the widespread use of smartphones and more to do with our inability to recognize our own limits. A study by David Sanbonmatsu and his colleagues looked at the relationship between people’s beliefs about their own multitasking ability and their likelihood of using a cell phone when driving. Importantly, the study also measured people’s actual multitasking abilities. The researchers found that people who thought they were good at multitasking were actually the worst at it. They were also the most likely to report frequently using their cell phones when driving. This may help explain why warning people about the dangers of cell phone use when driving hasn’t done much to curb the behavior. The study is another reminder that we are surprisingly poor judges of our own abilities. Research has found that people overestimate their own qualities in a number of areas including intelligence, physical health, and popularity. Furthermore, the worse we are at something, the more likely we may be to judge ourselves as competent at it. Psychologists David Dunning and Justin Kruger have studied how overconfidence, ironically, is often the result of being unable to accurately judge one’s own incompetence. In one study, they found that people who scored the lowest on tests of grammar and logic were the most likely to overestimate their own abilities. The reverse was also true: the most competent people were the most likely to underestimate their abilities. And multitasking may be yet another area where incompetence breeds overconfidence. © 2013 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 18880 - Posted: 11.06.2013