Most Recent Links
By SABRINA TAVERNISE

Federal health regulators approved a drug overdose treatment device on Thursday that experts say will provide a powerful lifesaving tool in the midst of a surging epidemic of prescription drug abuse. Similar to an EpiPen used to stop allergic reactions to bee stings, the easy-to-use injector — small enough to tuck into a pocket or a medicine cabinet — can be used by the relatives or friends of people who have overdosed.

The hand-held device, called Evzio, delivers a single dose of naloxone, a medication that reverses the effects of an overdose, and will be used on those who have stopped breathing or lost consciousness from an opioid drug overdose. Naloxone is the standard treatment in such circumstances but, until now, has been available mostly in hospitals and other medical settings, where it is often used too late to save the patient.

The decision to quickly approve the new treatment, which is expected to be available this summer, comes as deaths from opioids continue to mount, including an increase in those from heroin, which contributed to the death of the actor Philip Seymour Hoffman in February. Federal health officials, facing criticism for failing to slow the rising death toll, are under pressure to act, experts say.

“This is a big deal, and I hope it gets wide attention,” said Dr. Carl R. Sullivan III, director of the addictions program at West Virginia University. “It’s pretty simple: Having these things in the hands of people around drug addicts just makes sense because you’re going to prevent unnecessary mortality.”

The scourge of drug abuse has battered states across the country, with deaths from overdoses now outstripping those from traffic crashes. Prescription drugs alone now account for more than half of all drug overdose deaths, and one major category of them, opioids, or painkillers, takes the lives of more Americans than heroin and cocaine combined.
Deaths from opioids have quadrupled in 10 years to more than 16,500 in 2010, according to federal data. © 2014 The New York Times Company
Walking backward may seem a simple task, but researchers don’t know how the mind controls this behavior. A study published online today in Science provides the first glimpse of the brain circuit responsible—at least in fruit flies.

Geneticists created 3500 strains of the insects, each with a temperature-controlled switch that turned random networks of neurons on when the flies entered an incubator. One mutant batch of fruit flies started strolling in reverse when exposed to warmth, which the team dubbed “moonwalkers” in honor of Michael Jackson’s famous dance.

Two neurons were responsible for the behavior. One lived in the brain and extended its connections to the end of the ventral nerve cord—the fly’s version of a spine, which runs along its belly. The other neuron had the opposite orientation—it started at the bottom of the nerve cord and sent its messaging cables—or axons—into the brain.

The neuron in the brain acted like a reverse gear in a car; when turned on, it triggered reverse walking. The researchers say this neuron is possibly a command center that responds to environmental cues, such as, “Hey! I see a wall in front of me.” The second neuron functioned as the brakes for forward motion, but it couldn’t compel the fly to moonwalk. It may serve as a fail-safe that reflexively prevents moving ahead, such as when the fly accidentally steps onto a very cold floor.

Using the two neurons as a starting point, the team will trace their links to sensory neurons for touch, sight, and smell, which feed into and control the moonwalking network. No word yet on the neurons responsible for the Macarena. © 2014 American Association for the Advancement of Science
Keyword: Movement Disorders
Link ID: 19445 - Posted: 04.05.2014
By James Gallagher, Health and science reporter, BBC News

The illegal party drug ketamine is an "exciting" and "dramatic" new treatment for depression, say doctors who have conducted the first trial in the UK. Some patients who have faced incurable depression for decades have had symptoms disappear within hours of taking low doses of the drug. The small trial on 28 people, reported in the Journal of Psychopharmacology, shows the benefits can last months. Experts said the findings opened up a whole new avenue of research.

Depression is common and affects one in 10 people at some point in their lives. Antidepressants, such as Prozac, and behavioural therapies help some patients, but a significant proportion remain resistant to any form of treatment.

A team at Oxford Health NHS Foundation Trust gave patients doses of ketamine over 40 minutes on up to six occasions. Eight showed improvements in reported levels of depression, with four of them improving so much they were no longer classed as depressed. Some responded within six hours of the first infusion of ketamine.

Lead researcher Dr Rupert McShane said: "It really is dramatic for some people, it's the sort of thing really that makes it worth doing psychiatry, it's a really wonderful thing to see." He added: "[The patients] say 'ah this is how I used to think' and the relatives say 'we've got x back'." Dr McShane said this included patients who had lived with depression for 20 years.

The duration of the effect is still a problem. Some relapse within days, while others have found they benefit for around three months and have since had additional doses of ketamine. There are also some serious side-effects, including one case of the supply of blood to the brain being interrupted. Doctors say people should not try to self-medicate because of the serious risk to health outside of a hospital setting. BBC © 2014
A high-resolution map of the human brain in utero is providing hints about the origins of brain disorders including schizophrenia and autism. The map shows where genes are turned on and off throughout the entire brain at about the midpoint of pregnancy, a time when critical structures are taking shape, researchers reported Wednesday in the journal Nature.

"It's a pretty big leap," says Ed Lein, an investigator at the Allen Institute for Brain Science in Seattle who played a central role in creating the map. "Basically, there was no information of this sort prior to this project."

Having a map like this is important because many psychiatric and behavioral problems appear to begin before birth, "even though they may not manifest until teenage years or even the early 20s."

The human brain is often called the most complex object in the universe. Yet its basic architecture is created in just nine months, when it grows from a single cell to more than 80 billion cells organized in a way that will eventually let us think and feel and remember. "We're talking about a remarkable process," a process controlled by our genes, Lein says. So he and a large team of researchers decided to use genetic techniques to create a map that would help reveal this process.

Funding came from the 2009 federal stimulus package. The massive effort required tens of thousands of brain tissue samples so small that they had to be cut out with a laser. Researchers used brain tissue from aborted fetuses, which the Obama administration has authorized over the objections of abortion opponents. ©2014 NPR
He was known in his many appearances in the scientific literature as simply K.C., an amnesiac who was unable to form new memories. But to the people who knew him, and the scientists who studied him for decades, he was Kent Cochrane, or just Kent.

Cochrane, who suffered a traumatic brain injury in a motorcycle accident when he was 30 years old, helped to rewrite the understanding of how the brain forms new memories and whether learning can occur without that capacity. "From a scientific point of view, we've really learned a lot [from him], not just about memory itself but how memory contributes to other abilities," said Shayna Rosenbaum, a cognitive neuropsychologist at York University who started working with Cochrane in 1998 when she was a graduate student.

Cochrane was 62 when he died late last week. The exact cause of death is unknown, but his sister, Karen Casswell, said it is believed he had a heart attack or stroke. He died in his room at the assisted-living facility where he lived, and the family opted not to authorize an autopsy.

Few in the general public would know about Cochrane, though some may have seen or read media reports on the man whose life was like that of the lead character of the 2000 movie Memento. But anyone who works on the science of human memory would know K.C.

Casswell and her mother, Ruth Cochrane, said the family was proud of the contribution Kent Cochrane made to science. Casswell noted that her eldest daughter was in a psychology class at university when the professor started to lecture about the man the scientific literature knows as K.C. © CBC 2014
Keyword: Learning & Memory
Link ID: 19442 - Posted: 04.03.2014
Dr Nicola Davis

The electronic nose is an instrument that attempts to mimic the human olfactory system. Humans and animals don't identify specific chemicals within odours; what they do is recognise a smell based on a response pattern. You, as a human, will smell a strawberry and say "that's a strawberry". If you gave this to a traditional analytical piece of equipment, it might tell you what the 60-odd chemicals in the odour were - but that wouldn't tell you that it was a strawberry.

How does it work?
A traditional electronic nose has an array of chemical sensors, designed to detect either gases or vapours. These sensors are not tuned to a single chemical, but detect families of chemicals - [for example] alcohols. Each one of these sensors is different, so when they are presented with a complex odour formed of many chemicals, each sensor responds differently to that odour. This creates a pattern of sensor responses, which the machine can be taught [to recognise].

Can't we just use dogs?
A dog is very, very sensitive. Special research teams work on training dogs to detect cancers as you would do explosives. What we are trying to do with the electronic nose is create an artificial means of replicating what the dog does. Such machines have the advantage that they don't get tired, will work all day and you only need to feed them electricity. © 2014 Guardian News and Media Limited
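The pattern-recognition idea described above can be sketched in a few lines: each odour becomes a vector of sensor responses, and a new reading is labelled by whichever learned pattern it most resembles. This is a minimal nearest-centroid sketch with invented sensor values, not the algorithm of any particular electronic nose.

```python
import math

# Hypothetical training data: each odour is a pattern of responses from an
# array of four cross-sensitive sensors (arbitrary units, invented here).
TRAINING = {
    "strawberry": [(0.9, 0.2, 0.5, 0.1), (0.8, 0.3, 0.6, 0.1)],
    "coffee":     [(0.1, 0.8, 0.2, 0.7), (0.2, 0.9, 0.3, 0.6)],
}

def centroid(patterns):
    """Average response pattern for one odour class."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def classify(reading, training=TRAINING):
    """Label a new sensor-array reading by its nearest class centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    centroids = {label: centroid(ps) for label, ps in training.items()}
    return min(centroids, key=lambda label: dist(reading, centroids[label]))

print(classify((0.85, 0.25, 0.55, 0.1)))  # close to the strawberry pattern
```

A real instrument would train on many repeated exposures per odour and use a more robust classifier, but the principle is the same: the label comes from the whole response pattern, not from identifying any single chemical.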
For years, some biomedical researchers have worried that a push for more bench-to-bedside studies has meant less support for basic research. Now, the chief of one of the National Institutes of Health’s (NIH’s) largest institutes has added her voice—and hard data—to the discussion. Story Landis describes what she calls a “sharp decrease” in basic research at her institute, a trend she finds worrisome.

In a blog post last week, Landis, director of the $1.6 billion National Institute of Neurological Disorders and Stroke (NINDS), says her staff started out asking why, in the mid-2000s, NINDS funding declined for R01s, the investigator-initiated grants that are the mainstay of most labs. After examining the aims and abstracts of grants funded between 1997 and 2012, her staff found that the portion of NINDS competing grant funding that went to basic research has declined (from 87% to 71%) while applied research rose (from 13% to 29%).

To dig deeper, the staffers divided the grants into four categories—basic/basic; basic/disease-focused; applied/translational; and applied/clinical. Here, the decline in basic/basic research was “striking”: It fell from 52% to 27% of new and competing grants, while basic/disease-focused has been rising (see graph). The same trend emerged when the analysts looked only at investigator-initiated grants, which are proposals based on a researcher’s own ideas, not a solicitation by NINDS for proposals in a specific area.

The shift could reflect changes in science and “a natural progression of the field,” Landis writes. Or it could mean researchers “falsely believe” that NINDS is not interested in basic studies and they have a better shot at being funded if they propose disease-focused or applied studies. The tight NIH budget and new programs focused on translational research could be fostering this belief, she writes.
When her staff compared applications submitted in 2008 and 2011, they found support for a shift to disease-focused proposals: There was a “striking” 21% decrease in the amount of funding requested for basic studies, even though those grants had a better chance of being funded. © 2014 American Association for the Advancement of Science.
Keyword: Movement Disorders
Link ID: 19440 - Posted: 04.02.2014
Erika Check Hayden

Monkeys on a reduced-calorie diet live longer than those that can eat as much as they want, a new study suggests. The findings add to a thread of studies on how a restricted diet prolongs life in a range of species, but they complicate the debate over whether the research applies to animals closely related to humans.

In the study, which has been running since 1989 at the Wisconsin National Primate Research Center in Madison, 38 rhesus macaques (Macaca mulatta) that were allowed to eat whatever they wanted were nearly twice as likely to die at any age as were 38 monkeys whose calorie intakes were cut by 30%. The same study reported in 2009 that calorie-restricted monkeys were less likely to die of age-related causes than control monkeys, but had similar overall mortality rates at all ages.

“We set out to test the hypothesis: would calorie restriction delay ageing? And I think we've shown that it does,” says Rozalyn Anderson, a biochemist at the University of Wisconsin who led the study, which is published today in Nature Communications. She said it is not surprising that the 2009 paper did not find that the calorie-restricted monkeys lived longer, because at the time too few monkeys had died to prove the point.

Eating a very low-calorie diet has been shown to prolong the lives of mice, leading to speculation that such a diet triggers a biochemical pathway that promotes survival. But what that pathway might be — and whether humans have it — has been a matter of hot debate.

Eat to live
In 2012, a study at the US National Institute on Aging (NIA) in Bethesda, Maryland, cast doubt on the idea, reporting that monkeys on low-calorie diets did not live longer than those that ate more food. But Anderson says that the Wisconsin findings are good news. © 2014 Nature Publishing Group
Link ID: 19439 - Posted: 04.02.2014
Neandertals and modern Europeans had something in common: They were fatheads of the same ilk. A new genetic analysis reveals that our brawny cousins had a number of distinct genes involved in the buildup of certain types of fat in their brains and other tissues—a trait shared by today’s Europeans, but not Asians. Because two-thirds of our brains are built of fatty acids, or lipids, the differences in fat composition between Europeans and Asians might have functional consequences, perhaps in helping them adapt to colder climates or causing metabolic diseases.

“This is the first time we have seen differences in lipid concentrations between populations,” says evolutionary biologist Philipp Khaitovich of the CAS-MPG Partner Institute for Computational Biology in Shanghai, China, and the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, lead author of the new study. “How our brains are built differently of lipids might be due to Neandertal DNA.”

Ever since researchers at the Max Planck Institute sequenced the genome of Neandertals, including a super high-quality genome of a Neandertal from the Altai Mountains of Siberia in December, researchers have been comparing Neandertal DNA with that of living people. Neandertals, who went extinct 30,000 years ago, interbred with modern humans at least once in the past 60,000 years, probably somewhere in the Middle East. Because the interbreeding happened after moderns left Africa, today’s Africans did not inherit any Neandertal DNA. But living Europeans and Asians have inherited a small amount—1% to 4% on average.

So far, scientists have found that different populations of living humans have inherited the Neandertal version of genes that cause diabetes, lupus, and Crohn’s disease; alter immune function; and affect the function of the protein keratin in skin, nails, and hair. © 2014 American Association for the Advancement of Science.
By NATALIE ANGIER

The “Iliad” may be a giant of Western literature, yet its plot hinges on a human impulse normally thought petty: spite. Achilles holds a festering grudge against Agamemnon (“He cheated me, wronged me ... He can go to hell...”), turning down gifts, homage, even the return of his stolen consort Briseis just to prolong the king’s suffering.

Now, after decades of focusing on such staples of bad behavior as aggressiveness, selfishness, narcissism and greed, scientists have turned their attention to the subtler and often unsettling theme of spite — the urge to punish, hurt, humiliate or harass another, even when one gains no obvious benefit and may well pay a cost.

Psychologists are exploring spitefulness in its customary role as a negative trait, a lapse that should be embarrassing but is often sublimated as righteousness, as when you take your own sour time pulling out of a parking space because you notice another car is waiting for it and you’ll show that vulture who’s boss here, even though you’re wasting your own time, too. Evolutionary theorists, by contrast, are studying what might be viewed as the brighter side of spite, and the role it may have played in the origin of admirable traits like a cooperative spirit and a sense of fair play.

The new research on spite transcends older notions that we are savage, selfish brutes at heart, as well as more recent suggestions that humans are inherently affiliative creatures yearning to love and connect. Instead, it concludes that vice and virtue, like the two sides of a V, may be inextricably linked.

“Spitefulness is such an intrinsically interesting subject, and it fits with so many people’s everyday experience, that I was surprised to see how little mention there was of it in the psychology literature,” said David K. Marcus, a psychologist at Washington State University.
At the same time, he said, “I was thrilled to find something that people haven’t researched to exhaustion.” © 2014 The New York Times Company
A new study has raised fresh questions about how MRI scanners work in the quest to understand the brain. The research, led by Professor Brian Trecox and a team of international researchers, used a brand new technique to assess fluctuations in the performance of brain scanners as they were being used during a series of basic experiments. The results are due to appear in the Journal of Knowledge in Neuroscience: General later today.

“Most people think that we know a lot about how MRI scanners actually work. The truth is, we don’t,” says Trecox. “We’ve even been misleading the public about the name – we made up functional Magnetic Resonance Imaging in 1983 because it sounded scientific and technical. fMRI really stands for flashy, Magically Rendered Images. So we thought: why not put an MRI scanner in an MRI scanner, and figure out what’s going on inside?”

To do this, Trecox and his team built a giant imaging machine – thought to be the world’s largest – using funds from a Kickstarter campaign and a local bake sale. They then took a series of scans of standard-sized MRI scanners while they were repeatedly switched on and off, in one of the largest and most robust neuroscience studies of its type.

“We tested six different MRI scanners,” says Eric Salmon, a PhD student involved in the project. “We found activation in an area called insular cortex in four of the six machines when they were switched on,” he added. In humans, the insular cortex has previously been implicated in a wide range of functions, including consciousness and self-awareness. According to Trecox and his team, activation in this area has never been found in imaging machines before.

While Salmon acknowledged that the results should be treated with caution – research assistants were found asleep in at least two of the machines – the results nevertheless provide a potentially huge step in our understanding of the tools we use to research the brain. © 2014 Guardian News and Media Limited
Keyword: Brain imaging
Link ID: 19435 - Posted: 04.01.2014
By Karen Kaplan

There are lies, damn lies – and the lies that we tell for the sake of others when we are under the influence of oxytocin. Researchers found that after a squirt of the so-called love hormone, volunteers lied more readily about their results in a game in order to benefit their team. Compared with control subjects who were given a placebo, those on oxytocin told more extreme lies and told them with less hesitation, according to a study published Monday in Proceedings of the National Academy of Sciences.

Oxytocin is a brain hormone that is probably best known for its role in helping mothers bond with their newborns. In recent years, scientists have been examining its role in monogamy and in strengthening trust and empathy in social groups. Sometimes, doing what’s good for the group requires lying. (Think of parents who fake their addresses to get their kids into a better school.) A pair of researchers from Ben-Gurion University of the Negev in Israel and the University of Amsterdam figured that oxytocin would play a role in this type of behavior, so they set up a series of experiments to test their hypothesis.

The researchers designed a simple computer game that asked players to predict whether a virtual coin toss would wind up heads or tails. After seeing the outcome on a computer screen, players were asked to report whether their prediction was correct or not. In some cases, making the right prediction would earn a player’s team a small payment (the equivalent of about 40 cents). In other cases, a correct prediction would cost the team the same amount, and sometimes there was no payoff or cost. Los Angeles Times Copyright 2014
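The coin-toss design makes dishonesty measurable at the group level: honest players should report being right about half the time, so a group reporting well above 50% must, statistically, be over-reporting. A rough simulation makes this concrete (the lying rate and toss counts below are invented for illustration, not figures from the study):

```python
import random

def simulate_reports(n_tosses, p_lie, seed=0):
    """Simulate reported 'correct prediction' rates in the coin-toss game.

    Each toss: the player's private guess is right with probability 0.5.
    A liar reports a wrong guess as correct with probability p_lie.
    Returns the fraction of tosses reported as correct.
    """
    rng = random.Random(seed)
    reported_correct = 0
    for _ in range(n_tosses):
        actually_correct = rng.random() < 0.5
        if actually_correct or rng.random() < p_lie:
            reported_correct += 1
    return reported_correct / n_tosses

honest = simulate_reports(100_000, p_lie=0.0)
lying = simulate_reports(100_000, p_lie=0.4)
print(f"honest group reports {honest:.1%} correct")  # close to 50%
print(f"lying group reports {lying:.1%} correct")    # close to 70%
```

No individual toss proves anything; the evidence is the gap between a group's reported rate and the 50% that chance allows, which is exactly how this paradigm detects the effect of oxytocin.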
by Bob Holmes People instinctively organise a new language according to a logical hierarchy, not simply by learning which words go together, as computer translation programs do. The finding may add further support to the notion that humans possess a "universal grammar", or innate capacity for language. The existence of a universal grammar has been in hot dispute among linguists ever since Noam Chomsky first proposed the idea half a century ago. If the theory is correct, this innate structure should leave some trace in the way people learn languages. To test the idea, Jennifer Culbertson, a linguist at George Mason University in Fairfax, Virginia, and her colleague David Adger of Queen Mary University of London, constructed an artificial "nanolanguage". They presented English-speaking volunteers with two-word phrases, such as "shoes blue" and "shoes two", which were supposed to belong to a new language somewhat like English. They then asked the volunteers to choose whether "shoes two blue" or "shoes blue two" would be the correct three-word phrase. In making this choice, the volunteers – who hadn't been exposed to any three-word phrases – would reveal their innate bias in language-learning. Would they rely on familiarity ("two" usually precedes "blue" in English), or would they follow a semantic hierarchy and put "blue" next to "shoe" (because it modifies the noun more tightly than "two", which merely counts how many)? © Copyright Reed Business Information Ltd.
By Hal Arkowitz and Scott O. Lilienfeld

A commercial sponsored by Pfizer, the drug company that manufactures the antidepressant Zoloft, asserts, “While the cause [of depression] is unknown, depression may be related to an imbalance of natural chemicals between nerve cells in the brain. Prescription Zoloft works to correct this imbalance.” Using advertisements such as this one, pharmaceutical companies have widely promoted the idea that depression results from a chemical imbalance in the brain.

The general idea is that a deficiency of certain neurotransmitters (chemical messengers) at synapses, or tiny gaps, between neurons interferes with the transmission of nerve impulses, causing or contributing to depression. One of these neurotransmitters, serotonin, has attracted the most attention, but many others, including norepinephrine and dopamine, have also been granted supporting roles in the story.

Much of the general public seems to have accepted the chemical imbalance hypothesis uncritically. For example, in a 2007 survey of 262 undergraduates, psychologist Christopher M. France of Cleveland State University and his colleagues found that 84.7 percent of participants found it “likely” that chemical imbalances cause depression. In reality, however, depression cannot be boiled down to an excess or deficit of any particular chemical or even a suite of chemicals. “Chemical imbalance is sort of last-century thinking. It's much more complicated than that,” neuroscientist Joseph Coyle of Harvard Medical School was quoted as saying in a blog by National Public Radio's Alix Spiegel. © 2014 Scientific American
Link ID: 19432 - Posted: 04.01.2014
by Aviva Rutkin

Don't blame baby for trying to eat that Lego piece. Humans may have a brain circuit dedicated to grabbing stuff and putting it in our mouths, and it probably develops in the womb.

Researchers and parents alike have long known that babies stick all manner of things in their mouths from very early on. Some fetuses even suck their thumbs. As putting something in the mouth seems advanced compared to the other, limited actions of newborns, Angela Sirigu of the Institute of Cognitive Sciences in Bron, France, and colleagues wondered whether the behaviour is encoded in the brain from birth.

To investigate, they studied 26 people of different ages while they were undergoing brain surgery. The researchers found that they were able to make nine of the unconscious patients bring their hands up and open their mouths, just by stimulating a part of the brain we know is linked to those actions in non-human primates.

Brain pudding
Because this behaviour is encoded in the same region as in other primates, it may be there from birth or earlier, the researchers say. If it was learned, you would expect it to involve multiple brain areas, and those could vary between individuals.

Newborn kangaroos are able to climb into their mother's pouch and baby wildebeests can run away from lions, but our babies appear helpless and have to learn most complex actions. The new work suggests that the way our brain develops is more like what happens in other animals than previously thought. © Copyright Reed Business Information Ltd.
Keyword: Development of the Brain
Link ID: 19431 - Posted: 04.01.2014
by Meghan Rosen

Human faces just got a lot more emotional. People can broadcast more than three times as many different feelings on their faces as scientists once suspected. For years, scientists have thought that people could convey only happiness, surprise, sadness, anger, fear and disgust.

“I thought it was very odd to have only one positive emotion,” says cognitive scientist Aleix Martinez of Ohio State University in Columbus. So he and colleagues came up with 16 combined ones, such as “happily disgusted” and “happily surprised.” Then the researchers asked volunteers to imagine situations that would provoke these emotions, such as listening to a gross joke, or getting unexpected good news.

When the team compared pictures of the volunteers making different faces and analyzed every eyebrow wrinkle, mouth stretch and tightened chin, “what we found was beyond belief,” Martinez says. For each compound emotion, almost everyone used the same facial muscles, the team reports March 31 in the Proceedings of the National Academy of Sciences. Martinez’s team’s findings could one day help computer engineers improve facial recognition software and help scientists better understand emotion-perception disorders such as schizophrenia.

Citation: S. Du, Y. Tao and A. M. Martinez. Compound facial expressions of emotion. Proceedings of the National Academy of Sciences. Published online March 30, 2014. doi:10.1073/pnas.1322355111. © Society for Science & the Public 2000–2013
Link ID: 19430 - Posted: 04.01.2014
By SAM WANG

A STUDY published last week found that the brains of autistic children show abnormalities that are likely to have arisen before birth, which is consistent with a large body of previous evidence. Yet most media coverage focuses on vaccines, which do not cause autism and are given after birth. How can we help people separate real risks from false rumors?

Over the last few years, we’ve seen an explosion of studies linking autism to a wide variety of genetic and environmental factors. Putting these studies in perspective is an enormous challenge. In a database search of more than 34,000 scientific publications mentioning autism since its first description in 1943, over half have come since 2008.

As a statistically minded neuroscientist, I suggest a different approach that relies on a concept we are familiar with: relative odds. As a single common measuring stick to compare odds, I have chosen the “risk ratio,” a measure that allows the bigger picture to come into focus. For a variety of studies I asked the same question: How large is the increased risk for autism? My standard for comparison was the likelihood in the general population of autism spectrum disorder.

Here’s an example. Start from the fact that the recorded rate of autism is now 1 in 68, according to a report released last week by the Centers for Disease Control and Prevention. If babies born in purple farmhouses have a rate of autism of 2 in 68, this doubling means that the purple farmhouse carries a risk ratio of 2. However, correlation is not causation, and there is no need to repaint that farmhouse just yet. © 2014 The New York Times Company
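The risk-ratio arithmetic in the farmhouse example is simple enough to check directly. The sketch below just restates the article's hypothetical numbers; the function is a generic risk-ratio calculation, not code from the piece:

```python
def risk_ratio(exposed_cases, exposed_total, baseline_cases, baseline_total):
    """Ratio of the risk in an exposed group to the baseline risk."""
    return (exposed_cases / exposed_total) / (baseline_cases / baseline_total)

# Baseline autism rate: 1 in 68 (the 2014 CDC figure cited above).
# Hypothetical 'purple farmhouse' rate: 2 in 68.
print(risk_ratio(2, 68, 1, 68))  # → 2.0
```

The same function gives the measuring stick for any reported factor: a risk ratio of 1 means no association, and 2 means the rate doubles in the exposed group, which, as the article stresses, still says nothing about causation.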
Link ID: 19429 - Posted: 03.31.2014
by Catherine de Lange

Why wait for the doctor to see you? A smart patch attached to your skin could diagnose health problems automatically – and even administer drugs.

Monitoring movement disorders such as Parkinson's disease or epilepsy relies on video recordings of symptoms and personal surveys, says Dae-Hyeong Kim at Seoul National University in South Korea. And although using wearable devices to monitor the vital signs of patients is theoretically possible, the wearable pads, straps and wrist bands that can do this are often cumbersome and inflexible. To track the progression of symptoms and the response to medication more accurately would require devices that monitor cues from the body, store recorded data for pattern analysis and deliver therapeutic agents through the human skin in a controlled way, Kim says.

So Kim and his team have developed an adhesive patch that is flexible and can be worn on the wrist like a second skin. The patch is 1 millimetre thick and made of a hydrocolloid dressing – a type of thin flexible bandage. Into it they embedded a layer of silicon nanoparticles. These silicon nanomembranes are often used for flexible electronics, and can pick up the bend and stretch of human skin and convert these into small electronic signals. The signals are stored as data in separate memory cells made from layers of gold nanoparticles.

The device could be used to detect and treat tremors in people who have Parkinson's disease, or epileptic seizures, says Kim. If these movements are detected, small heaters in the patch trigger the release of drugs from silicon nanoparticles. The patch also contains temperature sensors to make sure the skin doesn't burn during the process. © Copyright Reed Business Information Ltd.
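The detect-then-treat loop the patch implements can be sketched as a simple control step: look for tremor-like oscillation in the strain signal, release drug if it is found, and never heat past a safety temperature. Everything in this sketch is a toy stand-in; the thresholds, units, and function names are invented, not taken from Kim's paper:

```python
def detect_tremor(strain_samples, threshold=0.5, min_crossings=8):
    """Crude tremor detector: count sign changes among strain-sensor
    samples whose amplitude exceeds a threshold (all units arbitrary)."""
    big = [s for s in strain_samples if abs(s) > threshold]
    crossings = sum(1 for a, b in zip(big, big[1:]) if (a > 0) != (b > 0))
    return crossings >= min_crossings

def patch_step(strain_samples, skin_temp_c, max_temp_c=41.0):
    """One control step: release drug on tremor, but never heat past
    the safety limit (the burn safeguard the article mentions)."""
    if skin_temp_c >= max_temp_c:
        return "heater off: temperature limit"
    if detect_tremor(strain_samples):
        return "heat nanoparticles: release drug"
    return "monitor"

tremor = [0.8 if i % 2 else -0.8 for i in range(20)]  # oscillating strain
print(patch_step(tremor, skin_temp_c=36.5))
```

A real device would analyse stored signal patterns rather than a fixed crossing count, but the ordering of checks matters in either case: the temperature safeguard is evaluated before any drug-release decision.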
By ABIGAIL ZUGER, M.D.

One legend says it all began when a North African herder saw his goats eat some wild berries, then frolic with unusual verve. Another story cites a few small leaves blown off a nearby bush into the Chinese emperor’s mug of hot water. Either way, whether caffeine entered the life of man by coffee bean or tea leaf, happiness ensued.

Happiness, that is, for all but the poor souls charged with saving us from our drugs, for no regulatory challenge trumps the one posed by caffeine, molecule of elegant enjoyment and increasing abuse, man’s best friend and occasional killer. As Murray Carpenter makes clear in his methodical review, our society’s metrics are no match for this substance’s nuances, whether among athletes, teenagers, experimental subjects or the average dependent Joe.

Pure caffeine is a bitter white powder. In the body it blocks the effects of the molecule adenosine, a crucial brake on many physiologic processes. With just enough caffeine in the system, the body’s organs become a little more themselves: the brain a little brainier, the muscles a little springier, the blood vessels a little tighter, the digestion a little more efficient. With too much caffeine, all can accelerate into cardiac arrest.

It takes only about 30 milligrams of caffeine (less than a cup of coffee or can of cola) for stimulative effects to be noticeable. A hundred milligrams a day will hook most people: They feel immensely unhappy without their daily fix, and the organs all whine in protest for a few days. It takes more than 10 grams to kill you — a dose impossible to achieve with traditional beverages alone. However, the new caffeine-rich energy shots make it alarmingly easy for party-minded people to achieve the zone between enough and much too much. © 2014 The New York Times Company
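The dose arithmetic above is worth making explicit. Cup strength varies widely, so the 100 mg-per-cup figure below is a round assumption for illustration, not a measured value:

```python
# Dose landmarks from the review (milligrams of caffeine).
NOTICEABLE_MG = 30        # stimulative effects become noticeable
HABIT_MG_PER_DAY = 100    # roughly enough to create dependence
LETHAL_MG = 10_000        # more than 10 g can kill

MG_PER_CUP = 100  # assumed average cup of coffee (varies widely in reality)

cups_to_lethal = LETHAL_MG / MG_PER_CUP
print(f"~{cups_to_lethal:.0f} cups of coffee to reach a lethal dose")
```

On that assumption a lethal dose is on the order of a hundred cups, which is the sense in which traditional beverages alone cannot get you there, while concentrated energy shots compress the distance between the habitual and the dangerous dose.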
Keyword: Drug Abuse
Link ID: 19427 - Posted: 03.31.2014
By BRAYDEN KING and JERRY KIM

THIS season Major League Baseball is allowing its officiating crews to use instant replay to review certain critical calls, including home runs, force plays and foul balls. But the calling of the strike zone — determining whether a pitch that is not swung at is a ball or a strike — will still be left completely to the discretion of the officials. This might seem an odd exception, since calling the strike zone may be the type of officiating decision most subject to human foible.

In research soon to be published in the journal Management Science, we studied umpires’ strike-zone calls using pitch-location data compiled by the high-speed cameras introduced by Major League Baseball several years ago in an effort to measure, monitor and reward umpires’ accuracy. After analyzing more than 700,000 pitches thrown during the 2008 and 2009 seasons, we found that umpires frequently made errors behind the plate — about 14 percent of non-swinging pitches were called erroneously.

Some of those errors occurred in fairly predictable ways. We found, for example, that umpires tended to favor the home team by expanding the strike zone, calling a strike when the pitch was actually a ball 13.3 percent of the time for home team pitchers versus 12.7 percent of the time for visitors.

Other errors were more surprising. Contrary to the expectation (or hope) that umpires would be more accurate in important situations, we found that they were, in fact, more likely to make mistakes when the game was on the line. For example, our analyses suggest that umpires were 13 percent more likely to miss an actual strike in the bottom of the ninth inning of a tie game than in the top of the first inning, on the first pitch. © 2014 The New York Times Company
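The accounting behind a figure like "about 14 percent of non-swinging pitches were called erroneously" is straightforward once each pitch carries the camera's ball/strike classification and the umpire's call. The toy records below are invented to show the computation, not a sample of the authors' data:

```python
def error_rate(calls):
    """Fraction of non-swinging pitches where the umpire's call
    disagreed with the camera's ball/strike classification.

    Each record is (true_label, called_label), labels 'ball'/'strike'.
    """
    wrong = sum(1 for true, called in calls if true != called)
    return wrong / len(calls)

# Invented sample: 50 pitches, 7 of them miscalled.
pitches = (
    [("strike", "strike")] * 25 + [("ball", "ball")] * 18
    + [("ball", "strike")] * 4 + [("strike", "ball")] * 3
)
print(f"{error_rate(pitches):.0%} of calls erroneous")  # 14%
```

Splitting the same records by home versus visiting pitcher, or by game situation, is what yields the bias comparisons the authors report.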
Link ID: 19426 - Posted: 03.31.2014