Chapter 18. Attention and Higher Cognition
By Gary Stix

James DiCarlo: We all have this intuitive feel for what object recognition is. It’s the ability to discriminate your face from other faces, a car from other cars, a dog from a camel. But making progress in understanding how our brains are able to accomplish that is a very challenging problem, and part of the reason is that it’s challenging to define what it is and isn’t. We take this problem for granted because it seems effortless to us. However, a computer vision person would tell you that this is an extremely challenging problem, because each object presents an essentially infinite number of images to your retina, so you essentially never see the same image of an object twice.

SA: It seems like object recognition is actually one of the big problems both in neuroscience and in the computational science of machine learning?

DiCarlo: That’s right. Not only in machine learning but also in psychology and cognitive science, because the objects that we see are the sources in the world of what we use to build higher cognition, things like memory and decision-making. Should I reach for this, should I avoid it? Our brains can’t do what you would call higher cognition without these foundational elements that we often take for granted.

SA: Maybe you can talk about what’s actually happening in the brain during this process.

DiCarlo: It’s been known for several decades that there’s a portion of the brain, the temporal lobe down the sides of our head, that, when lost or damaged in humans and non-human primates, leads to deficits of recognition. So we had clues that that’s where these algorithms for object recognition are living. But just saying that part of your brain solves the problem is not really specific. It’s still a very large piece of tissue. 
Anatomy tells us that there’s a whole network of areas that exist there, and now the tools of neurophysiology and still more advanced tools allow us to go in and look more closely at the neural activity, especially in non-human primates. We can then begin to decipher the actual computations to the level that an engineer might, for instance, in order to emulate what’s going on in our heads. © 2014 Scientific American
by Tania Lombrozo Science doesn't just further technology and help us predict and control our environment. It also changes the way we understand ourselves and our place in the natural world. This understanding can inspire wonder and a sense of hope. But it can also be unsettling, especially when it calls into question our basic assumptions about the kinds of creatures we are and the universe we inhabit. Current developments in neuroscience seem to be triggering precisely this jumble of reactions: wonder alongside disquiet, hope alongside alarm. A recent article at Salon.com, for example, promises an explanation for "how neuroscience could save addicts from relapse," while an article by Nathan Greenslit at The Atlantic, published less than a week later, raises worries that neuroscience is being used to reinforce racist drug policy. Obama's BRAIN Initiative hails a new era of brain research, but with it comes the need to rapidly work out the implications of what we're learning about the brain and about ourselves. We're learning more about the brain than ever before; but we're not always sure what to make of it. In a recent paper in the journal Psychological Science, psychologists Azim Shariff, Joshua Greene and six of their colleagues bring these heady issues down to earth by considering whether learning about neuroscience can influence judgments in a real-world situation: deciding how someone who commits a crime should be punished. The motivating intuition is this: to hold someone responsible for her actions, she must have acted with free will. ©2014 NPR
Link ID: 19737 - Posted: 06.17.2014
By MARIA KONNIKOVA THE absurdity of having had to ask for an extension to write this article isn’t lost on me: It is, after all, a piece on time and poverty, or, rather, time poverty — about what happens when we find ourselves working against the clock to finish something. In the case of someone who isn’t otherwise poor, poverty of time is an unpleasant inconvenience. But for someone whose lack of time is just one of many pressing concerns, the effects compound quickly. We make a mistake when we look at poverty as simply a question of financial constraint. Take what happened with my request for an extension. It was granted, and the immediate time pressure was relieved. But even though I met the new deadline (barely), I’m still struggling to dig myself out from the rest of the work that accumulated in the meantime. New deadlines that are about to whoosh by, a growing list of ignored errands, a rent check and insurance payment that I just realized I haven’t mailed. And no sign of that promised light at the end of the tunnel. My experience is the time equivalent of a high-interest loan cycle, except instead of money, I borrow time. But this kind of borrowing comes with an interest rate of its own: By focusing on one immediate deadline, I neglect not only future deadlines but the mundane tasks of daily life that would normally take up next to no time or mental energy. It’s the same type of problem poor people encounter every day, multiple times: The demands of the moment override the demands of the future, making that future harder to reach. When we think of poverty, we tend to think about money in isolation: How much does she earn? Is that above or below the poverty line? But the financial part of the equation may not be the single most important factor. 
“The biggest mistake we make about scarcity,” Sendhil Mullainathan, an economist at Harvard who is a co-author of the book “Scarcity: Why Having Too Little Means So Much,” tells me, “is we view it as a physical phenomenon. It’s not.” © 2014 The New York Times Company
By Brian Palmer Maureen Dowd, a 62-year-old Pulitzer Prize–winning columnist for the New York Times, had a bad marijuana trip earlier this year. As part of her research into the legalization of recreational cannabis in Colorado, she ate a few too many bites of a pot-infused candy bar, entered a “hallucinatory state,” and spent eight paranoid hours curled up on her hotel room bed. Dowd used the experience as a jumping-off point to discuss the risks of overdosing on edible marijuana, which has become a major issue in pot-friendly states. It’s also possible, however, that Dowd just doesn’t handle cannabis very well. While pot mellows most people out, everyone has heard of someone who barricaded himself or herself in a dorm room after a few bong hits in college. (Or maybe that someone is you.) Why do people react so differently to the same drug? The question itself may be something of a fallacy. Cannabis is not a single drug—it contains dozens of compounds, and they appear to have varying, and sometimes opposing, effects on the brain. Tetrahydrocannabinol, or THC, and cannabidiol, or CBD, have been the subject of some intriguing research. In 2010, researchers showed that pretreating people with a dose of CBD can protect against the less pleasant effects of THC, such as paranoia. In a similar 2012 study, participants took pills that contained only one of the two chemicals, rather than the combination that you receive in cannabis. The subjects who took THC pills were more likely to suffer paranoia and delusion than those who took CBD. The researchers went one step further to investigate which specific cognitive effects of THC are likely to lead to paranoia and other symptoms of psychosis. After taking either THC or CBD, participants watched a series of arrows appear on a screen and responded by indicating which direction the arrows were pointing. Most of the arrows pointed directly left or right, but occasionally a tilted arrow appeared. 
(Researchers called the tilted arrows “oddballs.”) Subjects who took the CBD had a heightened brain activity response to the oddballs. That’s the way a nondrugged person typically reacts—repetitions of the same stimulus don’t interest us, but a sudden change grabs our attention. The THC-takers had an abnormal response: They found the left and right arrows, which constituted the overwhelming majority of the images, more noteworthy than the oddballs. © 2014 The Slate Group LLC
—By Indre Viskontas and Chris Mooney We've all been mesmerized by them—those beautiful brain scan images that make us feel like we're on the cutting edge of scientifically decoding how we think. But as soon as one neuroscience study purports to show which brain region lights up when we are enjoying Coca-Cola, or looking at cute puppies, or thinking we have souls, some other expert claims that "it's just a correlation," and you wonder whether researchers will ever get it right. But there's another approach to understanding how our minds work. In his new book, The Tale of the Dueling Neurosurgeons, Sam Kean tells the story of a handful of patients whose unique brains—rendered that way by surgical procedures, rare diseases, and unfortunate, freak accidents—taught us much more than any set of colorful scans. Kean recounts some of their unforgettable stories on the latest episode of the Inquiring Minds podcast. "As I was reading these [case studies] I said, 'That's baloney! There's no way that can possibly be true,'" Kean remembers, referring to one particularly surprising case in which a woman's brain injury left her unable to recognize and distinguish between different kinds of animals. "But then I looked into it, and I realized that, not only is it true, it actually reveals some important things about how the brain works." Here are five patients, from Kean's book, whose stories transformed neuroscience: 1. The man who could not imagine the future: Kent Cochrane (KC) was a '70s wild child, playing in a rock band, getting into bar fights, and zooming around Toronto on his motorcycle. But in 1981, a motorcycle accident left him without two critical brain structures. Both of his hippocampi, the parts of the brain that allow us to form new long-term memories for facts and events in our lives, were lost. 
That's quite different from other amnesiacs, whose damage is either restricted to only one brain hemisphere, or includes large portions of regions outside of the hippocampus. Copyright ©2014 Mother Jones
Claudia M. Gold Tom Insel, director of the National Institute of Mental Health (NIMH), in his recent blog post "Are Children Overmedicated?" seems to suggest that perhaps more psychiatric medication is in order. Comparing mental illness in children to food allergies, he dismisses the "usual" explanations given for the increase in medication prescribing. In his view, these explanations are: blaming psychiatrists who are too busy to provide therapy, parents who are too busy to provide a stable home environment, drug companies for marketing their products, and schools for lack of recess. By concluding that perhaps the explanation for the increase in prescribing of psychiatric medication to children is a greater number of children with serious psychiatric illness, Insel shows a lack of recognition of the complexity of the situation. When a recent New York Times article that Insel references reported on the rise in prescribing of psychiatric medication for toddlers diagnosed with ADHD, with a disproportionate number coming from families in poverty, one clinician remarked that if this is an attempt to medicate social and economic issues, then we have a huge problem. He was on to something. In conversations with pediatricians (the main prescribers of these medications) and child psychiatrists on the front lines, I find many in a reactive stance. When people feel overwhelmed, they go into survival mode, with their immediate aim just to get through the day. They find themselves prescribing medication because they have no other options.
Jane J. Lee Could've, should've, would've. Everyone has made the wrong choice at some point in life and suffered regret because of it. Now a new study shows we're not alone in our reaction to incorrect decisions. Rats too can feel regret. Regret is thinking about what you should have done, says David Redish, a neuroscientist at the University of Minnesota in Minneapolis. It differs from disappointment, which you feel when you don't get what you expected. And it affects how you make decisions in the future. (See "Hand Washing Wipes Away Regrets?") If you really want to study emotions or feelings like regret, says Redish, you can't just ask people how they feel. So when psychologists and economists study regret, they look for behavioral and neural manifestations of it. Using rats is one way to get down into the feeling's neural mechanics. Redish and colleague Adam Steiner, also at the University of Minnesota, found that rats expressed regret through both their behavior and their neural activity. Those signals, researchers report today in the journal Nature Neuroscience, were specific to situations the researchers set up to induce regret, and showed up as distinct patterns in both brain activity and behavior. When Redish and Steiner looked for neural activity, they focused on two areas known in people—and in some animals—to be involved in decision-making and the evaluation of expected outcomes: the orbitofrontal cortex and the ventral striatum. Brain scans have revealed that people with a damaged orbitofrontal cortex, for instance, don't express regret. To record nerve-cell activity, the researchers implanted electrodes in the brains of four rats—a typical sample size in this kind of experiment—then trained them to run a "choice" maze. © 1996-2014 National Geographic Society
By JOHN COATES SIX years after the financial meltdown there is once again talk about market bubbles. Are stocks succumbing to exuberance? Is real estate? We thought we had exorcised these demons. It is therefore with something close to despair that we ask: What is it about risk taking that so eludes our understanding, and our control? Part of the problem is that we tend to view financial risk taking as a purely intellectual activity. But this view is incomplete. Risk is more than an intellectual puzzle — it is a profoundly physical experience, and it involves your body. Risk by its very nature threatens to hurt you, so when confronted by it your body and brain, under the influence of the stress response, unite as a single functioning unit. This occurs in athletes and soldiers, and it occurs as well in traders and people investing from home. The state of your body predicts your appetite for financial risk just as it predicts an athlete’s performance. If we understand how a person’s body influences risk taking, we can learn how to better manage risk takers. We can also recognize that mistakes governments have made have contributed to excessive risk taking. Consider the most important risk manager of them all — the Federal Reserve. Over the past 20 years, the Fed has pioneered a new technique of influencing Wall Street. Where before the Fed shrouded its activities in secrecy, it now informs the street in as clear terms as possible of what it intends to do with short-term interest rates, and when. Janet L. Yellen, the chairwoman of the Fed, declared this new transparency, called forward guidance, a revolution; Ben S. Bernanke, her predecessor, claimed it reduced uncertainty and calmed the markets. But does it really calm the markets? Or has eliminating uncertainty in policy spread complacency among the financial community and actually helped inflate market bubbles? 
We get a fascinating answer to these questions if we turn from economics and look into the biology of risk taking. © 2014 The New York Times Company
By Meeri Kim Many of us find ourselves swimming along in the tranquil sea of life when suddenly a crisis hits — a death in the family, the loss of a job, a bad breakup. Some power through and find calm waters again, while others drown in depression. Scientists continue to search for the underlying genes and neurobiology that dictate our reactions to stress. Now, a study using mice has found a switch-like mechanism between resilience and defeat in an area of the brain that plays an important role in regulating emotions and has been linked with mood and anxiety disorders. [Image caption: Researchers at Cold Spring Harbor Laboratory identified neurons in the medial prefrontal cortex that determine whether a mouse learns to cope with stress or becomes depressed; these neurons become hyperactive in depressed mice, and the team showed that this enhanced activity causes depression. Credit: Bo Li/Cold Spring Harbor Laboratory] After artificially enhancing the activity of neurons in that part of the brain — the medial prefrontal cortex — mice that previously fought to avoid electric shocks started to act helpless. Rather than leaping for an open escape route, they sat in a corner taking the pain — presumably out of a belief that nothing they could do would change their circumstances. “This helpless behavior is quite similar to what clinicians see in depressed individuals — an inability to take action to avoid or correct a difficult situation,” said study author and neuroscientist Bo Li of the Cold Spring Harbor Laboratory in New York. The results were published online May 27 in the Journal of Neuroscience. © 1996-2014 The Washington Post
Link ID: 19704 - Posted: 06.06.2014
Neil Levy Can human beings still be held responsible in the age of neuroscience? Some people say no: they say once we understand how the brain processes information and thereby causes behaviour, there’s nothing left over for the person to do. This argument has not impressed philosophers, who say there doesn’t need to be anything left for the person to do in order to be responsible. People are not anything over and above the causal systems involved in information processing; we are our brains (plus some other, equally physical stuff). We are responsible if our information processing systems are suitably attuned to reasons, most philosophers think. There are big philosophical debates concerning what it takes to be suitably attuned to reasons, and whether this is really enough for responsibility. But I want to set those debates aside here. It’s more interesting to ask what we can learn from neuroscience about the nature of responsibility and about when we’re responsible. Even if neuroscience doesn’t tell us that no one is ever responsible, it might be able to tell us if particular people are responsible for particular actions.

A worthy case study

Consider a case like this: early one morning in 1987, a Canadian man named Ken Parks got up from the sofa where he had fallen asleep and drove to his parents-in-law’s house. There he stabbed them both before driving to the police station, where he told police he thought he had killed someone. He had: his mother-in-law died from her injuries. © 2010–2014, The Conversation Trust (UK)
By Sadie Dingfelder Want to become famous in the field of neuroscience? You could go the usual route, spending decades collecting advanced degrees, slaving away in science labs and publishing your results. Or you could simply fall victim to a freak accident. The stars of local science writer Sam Kean’s new book, “The Tale of the Dueling Neurosurgeons,” (which he’ll discuss Saturday at Politics and Prose) took the latter route. Be it challenging the wrong guy to a joust, spinning out on a motorcycle, or suffering from a stroke, these folks sustained brain injuries with bizarre and fascinating results. One man, for instance, lost the ability to identify different kinds of animals but had no trouble naming plants and objects. Another man lost his short-term memory. The result? A diary filled with entries like: “I am awake for the very first time.” “Now, I’m really awake.” “Now, I’m really, completely awake.” Unfortunate mishaps like these have advanced our understanding of how the gelatinous gray mass that (usually) stays hidden inside our skulls gives rise to thoughts, feelings and ideas, Kean says. “Traditionally, every major discovery in the history of neuroscience came about this way,” he says. “We had no other way of looking at the brain for centuries and centuries, because we didn’t have things like MRI machines.” Rather than covering the case studies textbook-style, Kean provides all the gory details. Consider Phineas Gage. You may remember from Psych 101 that Gage, a railroad worker, survived having a metal rod launched through his skull. You might not know, however, that one doctor “shaved Gage’s scalp and peeled off the dried blood and gelatinous brains. He then extracted skull fragments from the wound by sticking his fingers in from both ends, Chinese-finger-trap-style,” as Kean writes in his new book. © 1996-2014 The Washington Post
By Denali Tietjen Meditation has long been known for its mental health benefits, but new research shows that just a few minutes of mindfulness can improve physical health and personal life as well. A recent study conducted by researchers at INSEAD and The Wharton School found that 15 minutes of mindful meditation can help you make better decisions. The research, published in the Association for Psychological Science’s journal Psychological Science, comes from four studies (varying in sample size from 69 to 178 adults) in which participants responded to sunk-cost scenarios at different degrees of mindful awareness. The results consistently showed that increased mindfulness decreases the sunk-cost bias. Whoa, hold the phone. What’s a sunk cost, and what’s a sunk-cost bias? Sunk cost is an economics term that psychologists have adopted. In economics, sunk costs are defined as non-recoverable investment costs like the cost of employee training or a lease on office space. In psychology, sunk costs are basically the same thing: the time and energy we put into our personal lives. Though we might not sit down with a calculator at the kitchen table when deciding who to take as our plus one to our second cousin’s wedding next weekend, we do a cost-benefit analysis every time we make a decision. And we take these sunk costs into account. The sunk-cost bias, then, is the tendency to allow sunk costs to overly influence current decisions. Mindfulness meditation can provide improved clarity, which helps you stay present and make better decisions, the study says. This protects you from that manipulative sunk-cost bias.
Link ID: 19693 - Posted: 06.05.2014
By Matthew R. Francis Possibly no subject in science has inspired more nonsense than quantum mechanics. Sure, it’s a complicated field of study, with a few truly mysterious facets that are not settled to everyone’s satisfaction after nearly a century of work. At the same time, though, using “quantum” to mean “we just don’t know” is ridiculous—and simply wrong. Quantum mechanics is the basis for pretty much all our modern technology, from smartphones to fluorescent lights, digital cameras to fiber-optic communications. If I had to pick a runner-up in the nonsense sweepstakes, it would be human consciousness, another subject with a lot of mysterious aspects. We are made of ordinary matter yet are self-aware, capable of abstractly thinking about ourselves and of recognizing others (including nonhumans) as separate entities with their own needs. As a physicist, I’m fascinated by the notion that our consciousness can imagine realities other than our own: The universe is one way, but we are perfectly happy to think of how it might be otherwise. I hold degrees in physics and have spent a lot of time learning and teaching quantum mechanics. Nonphysicists seem to have the impression that quantum physics is really esoteric, with those who study it spending their time debating the nature of reality. In truth, most of a quantum mechanics class is lots and lots of math, in the service of using a particle’s quantum state—the bundle of physical properties such as position, energy, spin, and the like—to describe the outcomes of experiments. Sure, there’s some weird stuff and it’s fun to talk about, but quantum mechanics is aimed at being practical (ideally, at least). © 2014 The Slate Group LLC.
Link ID: 19671 - Posted: 05.31.2014
By DOUGLAS QUENQUA It’s easy to think of fruit flies as tiny robots that simply respond reflexively to their environment. But just like humans, they take time to collect information and to deliberate when faced with a difficult choice, according to a new study. The findings, published in the journal Science, could help researchers study cognitive development and defects in humans. Scientists have long been fascinated by decision-making, said an author of the study, Dr. Gero Miesenböck, a neuroscientist at the University of Oxford. “Going back to the 19th century, psychologists have measured how long it takes humans to make up their minds,” he said. “Usually if you give people a hard perceptual choice, they take longer, because the brain needs to integrate information until it has enough to make a decision. “This is the first time in an animal as low as a fruit fly we have been able to show that similar processes occur.” To study how flies make up their minds, Oxford researchers placed the animals in bifurcated chambers filled on both sides with an odor they had been taught to avoid. When the odor was clearly more potent on one side of the chamber than the other, the flies were quick to choose which chamber to inhabit (and nearly always chose the less odorous one). But when the difference between chambers was subtle, the flies took longer to make a decision, and were more apt to make the wrong choice. “We were surprised,” Dr. Miesenböck said. “The original thought was that the flies would just act impulsively, they won’t take time to deliberate. We found that’s not true.” The process so closely mimics decision-making in humans, the researchers said, that the same mathematical models used to describe the actions of deliberating people can be used to predict a fly’s behavior. © 2014 The New York Times Company
Link ID: 19653 - Posted: 05.23.2014
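The mathematical models the researchers invoke are evidence-accumulation models, best known as the drift-diffusion model: noisy sensory evidence is summed over time until it crosses a decision threshold. A minimal simulation sketch (all parameter values below are illustrative, not taken from the study) reproduces the qualitative finding that a subtler difference between options produces slower and less accurate choices:

```python
import random

def simulate_trial(drift, threshold=1.0, dt=0.001, noise=1.0, max_steps=100000):
    """Accumulate noisy evidence until one of two bounds is crossed.
    Returns (chose_correctly, reaction_time_in_seconds)."""
    x = 0.0
    for step in range(1, max_steps + 1):
        # Drift toward the correct bound, plus Gaussian diffusion noise.
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        if x >= threshold:
            return True, step * dt       # crossed the "correct" bound
        if x <= -threshold:
            return False, step * dt      # crossed the "error" bound
    return x >= 0, max_steps * dt        # time out (rare with these values)

def summarize(drift, n=2000, seed=0):
    """Mean accuracy and reaction time over n simulated trials."""
    random.seed(seed)
    trials = [simulate_trial(drift) for _ in range(n)]
    accuracy = sum(correct for correct, _ in trials) / n
    mean_rt = sum(rt for _, rt in trials) / n
    return accuracy, mean_rt

easy_acc, easy_rt = summarize(drift=3.0)   # clear odor difference: strong evidence
hard_acc, hard_rt = summarize(drift=0.5)   # subtle odor difference: weak evidence
```

Lowering the drift rate, which stands in for how discriminable the two odor concentrations are, lengthens the mean reaction time and raises the error rate, mirroring the flies' behavior in the hard condition.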
Kevin Loria Music was among the least of Mr. B's concerns. As a 59-year-old Dutch man living with extremely severe obsessive compulsive disorder for 46 years, he had other things on his mind. His OCD was so severe it led to moderate anxiety and mild depression. Not only was his condition extreme, but it was also resistant to traditional treatment. It got so bad that he opted to receive an implant to stimulate his brain constantly with electricity — a treatment, called deep brain stimulation (DBS), that has been shown to successfully treat OCD in the past. It worked, but had a very peculiar side effect. As researchers write in a study published in the journal Frontiers in Behavioral Neuroscience, it turned Mr. B. into a Johnny Cash fanatic, though he'd never really listened to The Man in Black before. Mr. B. had listened to the same music for decades, but was never a devout music lover. He was a Rolling Stones and Beatles fan (with a preference for the Stones), and listened to Dutch music as well. But just months after flying to Minneapolis and having two sets of electrodes implanted in his brain for the stimulation therapy, he had a mind-blowing run-in with the song "Ring of Fire" playing on the radio. Something about Cash's deep bass-baritone voice resonated with him at that moment. His life had already changed. After the surgical implants and therapy, his OCD had gone from extremely severe to mild, and his depression and anxiety were at a level lower than mild. But when he heard Cash croon, another change began. Mr. B. bought all the Johnny Cash music he could find and stopped listening to anything else — no more Beatles, no more Stones, no more Nederpop. Instead, he played Cash all the time, and especially loved the songs from the '70s and '80s. "Folsom Prison Blues," "Ring of Fire," and "Sunday Mornin' Comin' Down" are his favorites. They make him feel like a hero, he told doctors. © 2014 Business Insider, Inc.
After a string of scandals involving accusations of misconduct and retracted papers, social psychology is engaged in intense self-examination—and the process is turning out to be painful. This week, a global network of nearly 100 researchers unveiled the results of an effort to replicate 27 well-known studies in the field. In more than half of the cases, the result was a partial or complete failure. As the replicators see it, the failed do-overs are a healthy corrective. “Replication helps us make sure what we think is true really is true,” says Brent Donnellan, a psychologist at Michigan State University in East Lansing who has undertaken three recent replications of studies from other groups—all of which came out negative. “We are moving forward as a science,” he says. But rather than a renaissance, some researchers on the receiving end of this organized replication effort see an inquisition. “I feel like a criminal suspect who has no right to a defense and there is no way to win,” says psychologist Simone Schnall of the University of Cambridge in the United Kingdom, who studies embodied cognition, the idea that the mind is unconsciously shaped by bodily movement and the surrounding environment. Schnall’s 2008 study finding that hand-washing reduced the severity of moral judgment was one of those Donnellan could not replicate. About half of the replications are the work of Many Labs, a network of about 50 psychologists around the world. The results of their first 13 replications, released online in November, were greeted with a collective sigh of relief: Only two failed. Meanwhile, Many Labs participant Brian Nosek, a psychologist at the University of Virginia in Charlottesville, put out a call for proposals for more replication studies. After 40 rolled in, he and Daniël Lakens, a psychologist at Eindhoven University of Technology in the Netherlands, chose another 14 to repeat. © 2014 American Association for the Advancement of Science.
By Isaac Bédard Very few animals have revealed an ability to consciously think about the future—behaviors such as storing food for the winter are often viewed as a function of instinct. Now a team of anthropologists at the University of Zurich has evidence that wild orangutans have the capacity to perceive the future, prepare for it and communicate those future plans to other orangutans. The researchers observed 15 dominant male orangutans in Sumatra for several years. These males roam through immense swaths of dense jungle, emitting loud yells every couple of hours so that the females they mate with and protect can locate and follow them. The shouts also warn away any lesser males that might be in the vicinity. These vocalizations had been observed by primatologists before, but the new data reveal that the apes' last daily call, an especially long howl, is aimed in the direction they will travel in the morning—and the other apes take note. The females stop moving when they hear this special 80-second call, bed down for the night, and in the morning begin traveling in the direction indicated the evening before. The scientists believe that the dominant apes are planning their route in advance and communicating it to other orangutans in the area. They acknowledge, however, that the dominant males might not intend their long calls to have such an effect on their followers. Karin Isler, a Zurich anthropologist who co-authored the study in PLOS ONE last fall, explains, “We don't know whether the apes are conscious. This planning does not have to be conscious. But it is also more and more difficult to argue that they [do not have] some sort of mind of their own.” © 2014 Scientific American
By BENEDICT CAREY SAN DIEGO – The last match of the tournament had all the elements of a classic showdown, pitting style versus stealth, quickness versus deliberation, and the world’s foremost card virtuoso against its premier numbers wizard. If not quite Ali-Frazier or Williams-Sharapova, the duel was all the audience of about 100 could ask for. They had come to the first Extreme Memory Tournament, or XMT, to see a fast-paced, digitally enhanced memory contest, and that’s what they got. The contest, an unusual collaboration between industry and academic scientists, featured one-minute matches between 16 world-class “memory athletes” from all over the world as they met in a World Cup-like elimination format. The grand prize was $20,000; the potential scientific payoff was large, too. One of the tournament’s sponsors, the company Dart NeuroScience, is working to develop drugs for improved cognition. The other, Washington University in St. Louis, sent a research team with a battery of cognitive tests to determine what, if anything, sets memory athletes apart. Previous research was sparse and inconclusive. Yet as the two finalists, both Germans, prepared to face off — Simon Reinhard, 35, a lawyer who holds the world record in card memorization (a deck in 21.19 seconds), and Johannes Mallow, 32, a teacher with the record for memorizing digits (501 in five minutes) — the Washington group had one preliminary finding that wasn’t obvious. “We found that one of the biggest differences between memory athletes and the rest of us,” said Henry L. Roediger III, the psychologist who led the research team, “is in a cognitive ability that’s not a direct measure of memory at all but of attention.” People have been performing feats of memory for ages, scrolling out pi to hundreds of digits, or phenomenally long verses, or word pairs. 
Most store the studied material in a so-called memory palace, associating the numbers, words or cards with specific images they have already memorized; then they mentally place the associated pairs in a familiar location, like the rooms of a childhood home or the stops on a subway line. The Greek poet Simonides of Ceos is credited with first describing the method, in the fifth century B.C., and it has been vividly described in popular books, most recently “Moonwalking With Einstein,” by Joshua Foer. © 2014 The New York Times Company
By NATASHA SINGER Joseph J. Atick cased the floor of the Ronald Reagan Building and International Trade Center in Washington as if he owned the place. In a way, he did. He was one of the organizers of the event, a conference and trade show for the biometrics security industry. Perhaps more to the point, a number of the wares on display, like an airport face-scanning checkpoint, could trace their lineage to his work. A physicist, Dr. Atick is one of the pioneer entrepreneurs of modern face recognition. Having helped advance the fundamental face-matching technology in the 1990s, he went into business and promoted the systems to government agencies looking to identify criminals or prevent identity fraud. “We saved lives,” he said during the conference in mid-March. “We have solved crimes.” Thanks in part to his boosterism, the global business of biometrics — using people’s unique physiological characteristics, like their fingerprint ridges and facial features, to learn or confirm their identity — is booming. It generated an estimated $7.2 billion in 2012, according to reports by Frost & Sullivan. Making his rounds at the trade show, Dr. Atick, a short, trim man with an indeterminate Mediterranean accent, warmly greeted industry representatives at their exhibition booths. Once he was safely out of earshot, however, he worried aloud about what he was seeing. What were those companies’ policies for retaining and reusing consumers’ facial data? Could they identify individuals without their explicit consent? Were they running face-matching queries for government agencies on the side? Now an industry consultant, Dr. Atick finds himself in a delicate position. While promoting and profiting from an industry that he helped foster, he also feels compelled to caution against its unfettered proliferation. 
He isn’t so much concerned about government agencies that use face recognition openly for specific purposes — for example, the many state motor vehicle departments that scan drivers’ faces as a way to prevent license duplications and fraud. Rather, what troubles him is the potential exploitation of face recognition to identify ordinary and unwitting citizens as they go about their lives in public. Online, we are all tracked. But to Dr. Atick, the street remains a haven, and he frets that he may have abetted a technology that could upend the social order. © 2014 The New York Times Company
By ALAN SCHWARZ ATLANTA — More than 10,000 American toddlers 2 or 3 years old are being medicated for attention deficit hyperactivity disorder outside established pediatric guidelines, according to data presented on Friday by an official at the Centers for Disease Control and Prevention. The report, which found that toddlers covered by Medicaid are particularly prone to be put on medication such as Ritalin and Adderall, is among the first efforts to gauge the diagnosis of A.D.H.D. in children below age 4. Doctors at the Georgia Mental Health Forum at the Carter Center in Atlanta, where the data was presented, as well as several outside experts strongly criticized the use of medication in so many children that young. The American Academy of Pediatrics standard practice guidelines for A.D.H.D. do not even address the diagnosis in children 3 and younger — let alone the use of such stimulant medications, because their safety and effectiveness have barely been explored in that age group. “It’s absolutely shocking, and it shouldn’t be happening,” said Anita Zervigon-Hakes, a children’s mental health consultant to the Carter Center. “People are just feeling around in the dark. We obviously don’t have our act together for little children.” Dr. Lawrence H. Diller, a behavioral pediatrician in Walnut Creek, Calif., said in a telephone interview: “People prescribing to 2-year-olds are just winging it. It is outside the standard of care, and they should be subject to malpractice if something goes wrong with a kid.” Friday’s report was the latest to raise concerns about A.D.H.D. diagnoses and medications for American children beyond what many experts consider medically justified. Last year, a nationwide C.D.C. survey found that 11 percent of children ages 4 to 17 have received a diagnosis of the disorder, and that about one in five boys will get one during childhood. 
A vast majority are put on medications such as methylphenidate (commonly known as Ritalin) or amphetamines like Adderall, which often calm a child’s hyperactivity and impulsivity but also carry risks for growth suppression, insomnia and hallucinations. Only Adderall is approved by the Food and Drug Administration for children below age 6. However, because off-label use of methylphenidate in preschool children had produced some encouraging results, the most recent American Academy of Pediatrics guidelines authorized it in 4- and 5-year-olds — but only after formal training for parents and teachers to improve the child’s environment proved unsuccessful. © 2014 The New York Times Company