Most Recent Links




By Stephen L. Macknik Every few decades there’s a major new neuroscience discovery that changes everything. I’m not talking about your garden-variety discovery. Those happen frequently (this is the golden age of neuroscience, after all). But no, what I’m talking about are the holy-moly, scales-falling-from-your-eyes, time-to-rewrite-the-textbooks, game-changing discoveries. Well, one was reported just this past month—simultaneously by two separate labs—and it redefines the primary organizational principle of the visual system in the cortex of the brain. This may sound technical, but it concerns how we see light and dark, and the perception of contrast. Since all sensation functions at the pleasure of contrast, these new discoveries impact neuroscience and psychology as a whole. I’ll explain below. The old way of thinking about how the wiring of the visual cortex was organized orbited around the concept of visual-edge orientation. David Hubel (my old mentor) and Torsten Wiesel (my current fellow Brooklynite)—who shared the Nobel Prize in Physiology or Medicine in 1981—arguably made the first major breakthrough concerning how information was organized in the cortex versus earlier stages of visual processing. Before their discovery, the retina (and the whole visual system) was thought to be a kind of neural camera that communicated its image into the brain. The optic nerves connect the eyes’ retinas to the thalamus at the center of the brain—and then the thalamus connects to the visual cortex at the back of the brain through a neural information superhighway called the optic radiations. Scientists knew, even way back then, that the neuron that sees a given point of the visual scene lies physically next to the neuron that sees the neighboring piece of the visual scene. The discovery of this so-called retinotopic map in the primary visual cortex (by Talbot and Marshall) was of course important, but because it matched the retinotopic mapping of the retina and thalamus, it didn’t constitute a new way of thinking. It wasn’t a game-changing discovery. © 2016 Scientific American

Keyword: Vision
Link ID: 22301 - Posted: 06.09.2016

By BENEDICT CAREY Jerome S. Bruner, whose theories about perception, child development and learning informed education policy for generations and helped launch the modern study of creative problem solving, known as the cognitive revolution, died on Sunday at his home in Manhattan. He was 100. His death was confirmed by his partner, Eleanor M. Fox. Dr. Bruner was a researcher at Harvard in the 1940s when he became impatient with behaviorism, then a widely held theory, which viewed learning in terms of stimulus and response: the chime of a bell before mealtime and salivation, in Ivan Pavlov’s famous dog experiments. Dr. Bruner believed that behaviorism, rooted in animal experiments, ignored many dimensions of human mental experience. In one 1947 experiment, he found that children from low-income households perceived a coin to be larger than it actually was — their desires apparently shaping not only their thinking but also the physical dimensions of what they saw. In subsequent work, he argued that the mind is not a passive learner — not a stimulus-response machine — but an active one, bringing a full complement of motives, instincts and intentions to shape comprehension, as well as perception. His writings — in particular the book “A Study of Thinking” (1956), written with Jacqueline J. Goodnow and George A. Austin — inspired a generation of psychologists and helped break the hold of behaviorism on the field. To build a more complete theory, he and the experimentalist George A. Miller, a Harvard colleague, founded the Center for Cognitive Studies, which supported investigation into the inner workings of human thought. Much later, this shift in focus from behavior to information processing came to be known as the cognitive revolution. © 2016 The New York Times Company

Keyword: Development of the Brain
Link ID: 22300 - Posted: 06.09.2016

By Rachel Feltman Archerfish are already stars of the animal kingdom for their stunning spit-takes. They shoot high-powered water jets from their mouths to stun prey, making them one of just a few fish species known to use tools. But by training Toxotes chatareus to direct those jets of spit at certain individuals, scientists have shown that the little guys have another impressive skill: They seem to be able to distinguish one human face from another, something never before witnessed in fish and spotted just a few times in non-human animals. The results, published Tuesday in the Nature journal Scientific Reports, could help us understand how humans got so good at telling each other apart. Or how most people got to be good at that, anyway. I'm terrible at it. It's generally accepted that the fusiform gyrus, a brain structure located in the neocortex, allows humans to tell one another apart with a speed and accuracy that other species can't manage. But there's some debate over whether human faces are so innately complex — and distinguishing them so much more difficult than other feats of memory or pattern recognition — that this region of the brain is a necessary facilitator, one that evolved especially for the skill. Birds, which have been shown to distinguish humans from one another, have the same structure. But some researchers still think that facial recognition might be something that humans learn — that it's not an innate skill — and that the fusiform gyrus is just the spot where we happen to process all the necessary information.

Keyword: Attention; Evolution
Link ID: 22299 - Posted: 06.08.2016

Jean Fain When Sandra Aamodt talks about dieting, people listen ... or they stick their fingers in their ears and go la, la, la. Aamodt's neuroscientific take on why diets backfire is that divisive. Aamodt is a neuroscientist, book author and former editor of a leading brain research journal. She has also become a prominent evangelist of the message that traditional diets just don't work and often leave the dieter worse off than before. And she's an enthusiastic proponent of mindful eating. "I define it as eating with attention and joy, without judgment," Aamodt said in an interview. "That includes attention to hunger and fullness, to the experience of eating and to its effects on our bodies." Even if you've never heard of her, you likely will soon. Her new book, Why Diets Make Us Fat, is bound to change the weight-loss conversation, if not dismantle Biggest Loser-sized dreams. I am a therapist specializing in eating issues, and among my clients, Aamodt has already shifted the focus from weight loss to self-care. Most clients are reluctant to accept her central argument: that our body weight tends to settle at "set points" — a 10- to 15-pound range the brain maintains despite repeated efforts to lower it. However, once they see how the set-point theory reflects their dieting experience, they realize that although they don't have the final say on their weight (their brain does), they do have real influence — through exercise and other health-affirming activities — over their health and well-being. © 2016 npr

Keyword: Obesity
Link ID: 22298 - Posted: 06.08.2016

By Anahad O'Connor The federal government’s decision to update food labels last month marked a sea change for consumers: For the first time, beginning in 2018, nutrition labels will be required to list a breakdown of both the total sugars and the added sugars in packaged foods. But is sugar really that bad for you? And is the sugar added to foods really more harmful than the sugars found naturally in foods? We spoke with some top scientists who study sugar and its effects on metabolic health to help answer some common questions about sugar. Here’s what they had to say. Why are food labels being revised? The shift came after years of urging by many nutrition experts, who say that excess sugar is a primary cause of obesity and heart disease, the leading killer of Americans. Many in the food industry opposed the emphasis on added sugars, arguing that the focus should be on calories rather than sugar. They say that highlighting added sugar on labels is unscientific, and that the sugar that occurs naturally in foods like fruits and vegetables is essentially no different than the sugar commonly added to packaged foods. But scientists say it is not that simple. So, is added sugar different from the naturally occurring sugar in food? It depends. Most sugars are essentially combinations of two molecules, glucose and fructose, in different ratios. The sugar in a fresh apple, for instance, is generally the same as the table sugar that might be added to homemade apple pie. Both are known technically as sucrose, and they are broken down in the intestine into glucose and fructose. Glucose can be metabolized by any cell in the body. But fructose is handled almost exclusively by the liver. “Once you get to that point, the liver doesn’t know whether it came from fruit or not,” said Kimber Stanhope, a researcher at the University of California, Davis, who studies the effects of sugar on health. © 2016 The New York Times Company

Keyword: Obesity; Chemical Senses (Smell & Taste)
Link ID: 22297 - Posted: 06.08.2016

By Virginia Morell Sex is never simple—even among lizards. Unlike in mammals, the sex of central bearded dragons, large lizards found in eastern Australia, is determined by both their chromosomes and their environment. If the eggs are incubated at high temperatures, male embryos turn into females. Such sex-reversed lizards still retain the chromosomal makeup of a male, but they develop into functional superfemales, whose output of eggs exceeds that of the regular females. Now, a new study predicts that—in some cases—these superfemales may be able to drive regular ones to extinction. That’s because superfemales not only produce more eggs, but they’re also exceptionally bold. Looking at the shape, physiology, and behavior of 20 sex-reversed females, 55 males, and 40 regular females, scientists found that the sex-reversed dragons were physically similar to regular males: They had a male dragon’s long tail and high body temperature. They were also behaviorally similar, acting like bold, active males—even as they produced viable eggs. Indeed, the scientists report in the current issue of the Proceedings of the Royal Society B that these sex-reversed females were behaviorally more malelike than the genetic males. Because of these advantages, this third sex could reproductively outcompete normal females, the scientists say, possibly causing some populations to lose the female sex chromosome. (Females are the heterogametic sex, like human males.) In such a population, the dragons’ sex would then be determined solely by temperature instead of genetics—something that’s occurred in the lab within a single generation. Could it happen in the wild? The scientists are still investigating. © 2016 American Association for the Advancement of Science

Keyword: Sexual Behavior; Evolution
Link ID: 22296 - Posted: 06.08.2016

By JAMES GORMAN This summer’s science horror blockbuster is a remake: Return of the Leaping Electric Eel! If you have any kind of phobia of slimy, snakelike creatures that can rise from the water and use their bodies like Tasers, this story — and the accompanying video — may not be for you. The original tale (there was, alas, no video) dates to 1800, when the great explorer Alexander von Humboldt was in South America and enlisted local fishermen to catch some of these eels for the new (at the time) study of electricity. He wrote that the men herded horses and mules into a shallow pond and let the eels attack by pressing themselves against the horses. The horses and mules tried to escape, but the fishermen kept them in the water until the eels used up their power. Two horses died, probably from falling and drowning. Or so Humboldt said. Though the story was widely retold, no other report of this kind of fishing-with-horses phenomenon surfaced for more than 200 years, according to Kenneth Catania, a scientist with a passion for studying the eel species in question, Electrophorus electricus. In 2014, he reported on how the eels freeze their prey. They use rapid pulses of more than 600 volts generated by modified muscle cells and sent through the water. These volleys of shocks cause the muscles of prey to tense at once, stopping all movement. The eels’ bodies function like Tasers, Dr. Catania wrote. But they can also project high-voltage pulses in the water in isolated couplets rather than full volleys for a different effect. The pairs of shocks don’t freeze the prey, but cause their bodies to twitch. That movement reveals the prey’s location, and then the eels send out a rapid volley to immobilize and then swallow it. Dr. Catania noticed another kind of behavior, however. He was using a metal-handled net — wearing rubber gloves — while working with eels in an aquarium, and the eels would fling themselves up the handle of the net, pressing themselves to the metal and generating rapid electric shocks. © 2016 The New York Times Company

Keyword: Aggression; Muscles
Link ID: 22295 - Posted: 06.07.2016

By Sarah DeWeerdt, Spectrum Brains from people with autism show patterns of gene expression similar to those from people with schizophrenia, according to a new analysis. The findings, published May 24 in Translational Psychiatry, deepen the connections between the two conditions, says study leader Dan Arking, associate professor of genetic medicine at Johns Hopkins University in Baltimore, Maryland. People who have either autism or schizophrenia share features such as language problems and difficulty understanding other people’s thoughts and feelings. They also have genetic risk factors in common. “And now I think we can show that they share overlap in gene expression,” Arking says. The study builds on previous work, in which Arking’s team characterized gene expression in postmortem brain tissue from 32 individuals with autism and 40 controls. In the new analysis, the researchers made use of that dataset as well as one from the Stanley Medical Research Institute that looked at 31 people with schizophrenia, 25 with bipolar disorder and 26 controls. They found 106 genes expressed at lower levels in autism and schizophrenia brains than in controls. These genes are involved in the development of neurons, especially the formation of the long projections that carry nerve signals and the development of the junctions, or synapses, between one cell and the next. The results are consistent with those from previous studies indicating a role for genes involved in brain development in both conditions. “On the one hand, it’s exciting because it tells us that there’s a lot of overlap,” says Jeremy Willsey, assistant professor of psychiatry at the University of California, San Francisco, who was not involved in the work. “On the other hand, these are fairly general things that are overlapping.” © 2016 Scientific American

Keyword: Autism; Schizophrenia
Link ID: 22294 - Posted: 06.07.2016

By Sandra G. Boodman Richard McGhee and his family believed the worst was behind them. McGhee, a retired case officer at the Defense Intelligence Agency who lives near Annapolis, had spent six months battling leukemia as part of a clinical trial at MD Anderson Cancer Center in Houston. The experimental chemotherapy regimen he was given had worked spectacularly, driving his blood cancer into a complete remission. But less than nine months after his treatment ended, McGhee abruptly fell apart. He became moody, confused and delusional — even childish — a jarring contrast with the even-keeled, highly competent person he had been. He developed tremors in his arms, had trouble walking and became incontinent. “I was really a mess,” he recalled. Doctors suspected he had developed a rapidly progressive and fatal dementia, possibly a particularly aggressive form of Alzheimer’s disease. If that was the case, his family was told, his life span would be measured in months. Luckily, the cause of McGhee’s precipitous decline proved to be much more treatable — and prosaic — than doctors initially feared. “It’s really a pleasure to see somebody get better so rapidly,” said Michael A. Williams, a professor of neurology and neurosurgery at the University of Washington School of Medicine in Seattle. Until recently, Williams was affiliated with Baltimore’s Sinai Hospital, where he treated McGhee in 2010. “This was a diagnosis waiting to be found.”

Keyword: Alzheimers; Neuroimmunology
Link ID: 22293 - Posted: 06.07.2016

By Clare Wilson We’ve all been there: after a tough mental slog your brain feels as knackered as your body does after a hard workout. Now we may have pinpointed one of the brain regions worn out by a mentally taxing day – and it seems to also affect our willpower, so perhaps we should avoid making important decisions when mentally fatigued. Several previous studies have suggested that our willpower is a finite resource, and if it gets depleted in one way – like finishing a difficult task – we find it harder to make other good choices, like resisting a slice of cake. In a small trial, Bastien Blain at INSERM in Paris and his colleagues asked volunteers to spend six hours doing tricky memory tasks, while periodically choosing either a small sum of cash now, or a larger amount after a delay. … As the day progressed, people became more likely to act on impulse and to pick an immediate reward. This didn’t happen in the groups that spent time doing easier memory tasks, reading or gaming. For those engaged in difficult work, fMRI brain scans showed a decrease in activity in the middle frontal gyrus, a brain area involved in decision-making. “That suggests this region is becoming less excitable, which could be impairing people’s ability to resist temptation,” says Blain. It’s involved in decisions like ‘Shall I have a beer with my friends tonight, or shall I save money to buy a bike next month,’ he says. Previous research has shown that children with more willpower in a similar type of choice test involving marshmallows end up as more successful adults, by some measures. “Better impulse control predicts your eventual wealth and health,” says Blain. The idea that willpower can be depleted is contentious, as some researchers have failed to replicate others’ findings. © Copyright Reed Business Information Ltd.

Keyword: Attention; Learning & Memory
Link ID: 22292 - Posted: 06.07.2016

By Jordana Cepelewicz Colors exist on a seamless spectrum, yet we assign hues to discrete categories such as “red” and “orange.” Past studies have found that a person's native language can influence the way colors are categorized and even perceived. In Russian, for example, light blue and dark blue are named as different colors, and studies find that Russian speakers can more readily distinguish between the shades. Yet scientists have wondered about the extent of such verbal influence. Are color categories purely a construct of language, or is there a physiological basis for the distinction between green and blue? A new study in infants suggests that even before acquiring language, our brain already sorts colors into the familiar groups. A team of researchers in Japan tracked neural activity in 12 prelinguistic infants as they looked at a series of geometric figures. When the shapes' color switched between green and blue, activity increased in the occipitotemporal region of the brain, an area known to process visual stimuli. When the color changed within a category, such as between two shades of green, brain activity remained steady. The team found the same pattern in six adult participants. The infants used both brain hemispheres to process color changes. Language areas are usually in the left hemisphere, so the finding provides further evidence that color categorization is not entirely dependent on language. At some point as a child grows, language must start playing a role—just ask a Russian whether a cloudless sky is the same color as the deep sea. The researchers hope to study that developmental process next. “Our results imply that the categorical color distinctions arise before the development of linguistic abilities,” says Jiale Yang, a psychologist at Chuo University and lead author of the study, published in February in PNAS. “But maybe they are later shaped by language learning.” © 2016 Scientific American

Keyword: Vision; Development of the Brain
Link ID: 22291 - Posted: 06.07.2016

James Gorman Fruit flies are far from human, but not as far as you might think. They do many of the same things people do, like seek food, fight and woo mates. And their brains, although tiny and not set up like those of humans or other mammals, do many of the same things that all brains do — make and use memories, integrate information from the senses, and allow the creature to navigate both the physical and the social world. Consequently, scientists who study how all brains work like to use flies because it’s easier for them to do invasive research that isn’t allowed on humans. The technology of neuroscience is sophisticated enough to genetically engineer fly brains, and to then use fluorescent chemicals to indicate which neurons are active. But there are some remaining problems, like how to watch the brain of a fly that is moving around freely. It is one thing to record what is going on in a fly’s brain if the insect’s movement is restricted, but quite another to try to catch the light flash of brain cells from a fly that is walking around. Takeo Katsuki, an assistant project scientist at the Kavli Institute at the University of California, San Diego, is interested in courtship. And, he said, fruit flies simply won’t engage in courtship when they are tethered. So he and Dhruv Grover, another assistant project scientist, and Ralph J. Greenspan, in whose lab they both work, set out to develop a method for recording the brain activity of a walking fly. One challenge was to track the fly as it moved. They solved that problem with three cameras to follow the fly and a laser to activate the fluorescent chemicals in the brain. © 2016 The New York Times Company

Keyword: Development of the Brain; Genes & Behavior
Link ID: 22290 - Posted: 06.06.2016

By Julia Shaw A cure for almost every memory ailment seems to be just around the corner. Alzheimer’s-affected brains can have their memories restored, we can create hippocampal implants to give us better memory, and we can effectively implant false memories with light. Except that we can’t really do any of these things, at least not in humans. We sometimes forget that developments in memory science need to go through a series of stages in order to come to fruition, each of which requires tremendous knowledge and skill. From coming up with a new idea, to designing an appropriate methodology, obtaining ethical approval, getting research funding, recruiting research assistants and test subjects, conducting the experiment(s), completing complex statistical analysis for which computer code is often required, writing a manuscript, surviving the peer review process, and finally effectively distributing the findings, each part of the process is incredibly complex and takes a long time. On top of it all, this process, which can take decades to complete, typically results in incremental rather than monumental change. Rather than creating massive leaps in technology, in the vast majority of instances, studies add a teeny tiny bit of insight to the greater body of knowledge. These incremental achievements in science are often blown out of proportion by the media. As John Oliver recently said, “…[Science] deserves better than to be twisted out of proportion and be turned into morning show gossip.” Moving from science fiction to science fact is harder than the media makes it seem. © 2016 Scientific American

Keyword: Learning & Memory; Robotics
Link ID: 22289 - Posted: 06.06.2016

By Perri Klass, M.D. When girls come in for their physical exams, one of the questions I routinely ask is “Do you get your period?” I try to ask before I expect the answer to be yes, so that if a girl doesn’t seem to know about the changes of puberty that lie ahead, I can encourage her to talk about them with her mother, and offer to help answer questions. And I often point out that even those who have not yet embarked on puberty themselves are likely to have classmates who are going through these changes, so, again, it’s important to let kids know that their questions are welcome, and will be answered accurately. But like everybody else who deals with girls, I’m aware that this means bringing up the topic when girls are pretty young. Puberty is now coming earlier for many girls, with bodies changing in the third and fourth grade, and there is a complicated discussion about the reasons, from obesity and family stress to chemicals in the environment that may disrupt the normal effects of hormones. I’m not going to try to delineate that discussion here — though it’s an important one — because I want to concentrate on the effect, rather than the cause, of reaching puberty early. A large study published in May in the journal Pediatrics looked at a group of 8,327 children born in Hong Kong in April and May of 1997, for whom a great deal of health data has been collected. The researchers had access to the children’s health records, showing how their doctors had documented their physical maturity according to what are known as the Tanner stages, the standardized pediatric index of sexual maturation. Before children enter puberty, we call it Tanner I; for girls, Tanner II is the beginning of breast development, while for boys, it’s the enlargement of the scrotum and testes and the reddening and changing of the scrotum skin. Boys and girls then progress through the intermediate changes to stage V, full physical maturity. © 2016 The New York Times Company

Keyword: Depression; Hormones & Behavior
Link ID: 22288 - Posted: 06.06.2016

By ANNA FELS ONE of the most painful experiences of being a psychiatrist is having a patient for whom none of the available therapies or medications work. A while back, I was asked to do a consultation on just such a patient. This person had been a heroin addict in her early 20s. She had quit the opioid five years earlier, but her life was plagued with anxiety, apathy and self-doubt that prior treatments had not helped. At the end of the session, almost as an afterthought, she noted with irony that the only time in her adult life when she had been able to socialize easily and function at work was when she had been hooked on heroin. We are in the midst of a devastating and often lethal opioid epidemic, one of whose victims, we learned last week, was the pop star Prince. At such a time, it is hard to remember that there are multiple opioids naturally produced in our brains and required for our well-being. The neural circuitry utilizing these substances controls some of our most fundamental feelings of pain, stress and hopelessness, as well as pleasure and even euphoria. There is obviously a need for extreme caution, but research suggests that certain opioids may actually be useful in treating psychiatric diseases that have proved frustratingly unresponsive to current medications. It is the potentially addictive subset of opioids, whose natural ancestors were originally derived from poppies, that we associate with the word. These substances have been with us for most, if not all, of human civilization. Poppy seeds have been found at archaeological sites of Neolithic man. The Sumerians wrote about “the joy plant”; an Egyptian papyrus from the second millennium B.C. described the use of a product of poppies to stop the crying of children. Hippocrates suggested its use for female ailments, and a ninth-century Persian physician advocated the use of opium for melancholia. Millenniums later, during the American Civil War, the Union Army used 10 million opium pills to treat wounded soldiers. And then there were the two Opium Wars fought between China and Britain. Unquestionably, no other psychoactive substance has played such a central role in human affairs. © 2016 The New York Times Company

Keyword: Depression; Drug Abuse
Link ID: 22287 - Posted: 06.06.2016

By Karin Brulliard Think about how most people talk to babies: Slowly, simply, repetitively, and with an exaggerated tone. It’s one way children learn the uses and meanings of language. Now scientists have found that some adult birds do that when singing to chicks — and it helps the baby birds better learn their song. The subjects of the new study, published last week in the journal Proceedings of the National Academy of Sciences, were zebra finches. They’re good for this because they breed well in a lab environment, and “they’re just really great singers. They sing all the time,” said McGill University biologist and co-author Jon Sakata. The males, he means — they’re the singers, and they do it for fun and when courting ladies, as well as around baby birds. Never mind that their melody is more “tinny,” according to Sakata, than pretty. Birds in general are helpful for vocal acquisition studies because they, like humans, are among the few species that actually have to learn how to make their sounds, Sakata said. Cats, for example, are born knowing how to meow. But just as people pick up speech and bats learn their calls, birds also have to figure out how to sing their special songs. Sakata and his colleagues were interested in how social interactions between adult zebra finches and chicks influences that learning process. Is face-to-face — or, as it may be, beak-to-beak — learning better? Does simply hearing an adult sing work as well as watching it do so? Do daydreaming baby birds learn as well as their more focused peers? © 1996-2016 The Washington Post

Keyword: Language; Evolution
Link ID: 22286 - Posted: 06.06.2016

By LISA FELDMAN BARRETT WHEN the world gets you down, do you feel just generally “bad”? Or do you have more precise emotional experiences, such as grief or despair or gloom? In psychology, people with finely tuned feelings are said to exhibit “emotional granularity.” When reading about the abuses of the Islamic State, for example, you might experience creeping horror or fury, rather than general awfulness. When learning about climate change, you could feel alarm tinged with sorrow and regret for species facing extinction. Confronted with this year’s presidential campaign, you might feel astonished, exasperated or even embarrassed on behalf of the candidates — an emotion known in Mexico as “pena ajena.” Emotional granularity isn’t just about having a rich vocabulary; it’s about experiencing the world, and yourself, more precisely. This can make a difference in your life. In fact, there is growing scientific evidence that precisely tailored emotional experiences are good for you, even if those experiences are negative. According to a collection of studies, finely grained, unpleasant feelings allow people to be more agile at regulating their emotions, less likely to drink excessively when stressed and less likely to retaliate aggressively against someone who has hurt them. Perhaps surprisingly, the benefits of high emotional granularity are not only psychological. People who achieve it are also likely to have longer, healthier lives. They go to the doctor and use medication less frequently, and spend fewer days hospitalized for illness. Cancer patients, for example, have lower levels of harmful inflammation when they more frequently categorize, label and understand their emotions. © 2016 The New York Times Company

Keyword: Emotions
Link ID: 22285 - Posted: 06.06.2016

By DENISE GRADY Muhammad Ali, who died on Friday after a long struggle with Parkinson’s disease, was given the diagnosis in 1984 when he was 42. The world witnessed his gradual decline over the decades as tremors and stiffness set in, replacing his athletic stride with a shuffle, silencing his exuberant voice and freezing his face into an expressionless mask. What is Parkinson’s disease? It is a progressive, incurable deterioration of the part of the brain that produces a chemical needed to carry signals to the regions that control movement. How common is Parkinson’s? About one million people in the United States, and between seven million and 10 million worldwide, are thought to have Parkinson’s, according to the Parkinson’s Disease Foundation. What causes it? Was boxing a factor for Ali? The exact cause is not known. As with many disorders, experts suspect a combination of genes and environment, meaning that people with a particular genetic makeup may be predisposed to the disease if they are exposed to certain environmental factors. Head injuries, such as those sustained repeatedly in boxing, are among the possible risk factors listed by the National Parkinson Foundation. So is exposure to certain pesticides. These factors have both been suggested as possible contributors in Muhammad Ali’s case. Can Parkinson’s disease be treated? Medication can ease the symptoms for a time, but the disease continues to progress. In some cases, implanted devices called deep-brain stimulators can also help with symptoms. But Parkinson’s is not curable. © 2016 The New York Times Company

Keyword: Parkinsons
Link ID: 22284 - Posted: 06.06.2016

By JOHN ELIGON and SERGE F. KOVALESKI Prince, the music icon who struggled with debilitating hip pain during his career, died from an accidental overdose of self-administered fentanyl, a type of synthetic opiate, officials in Minnesota said Thursday. The news ended weeks of speculation about the sudden death of the musician, who had a reputation for clean living but who appears to have developed a dependency on medications to treat his pain. Authorities have yet to discuss how he came to be in possession of the fentanyl and whether it had been prescribed by a doctor. Officials had waited several weeks for the results of a toxicology test undertaken as part of an autopsy performed after he was found dead April 21 in an elevator at his estate. He was preparing to enroll in an opioid treatment program when he died at 57, according to the lawyer for a doctor who was planning to treat him. The Midwest Medical Examiner’s Office, which conducted the autopsy, declined to comment beyond releasing a copy of its findings. The Carver County Sheriff’s Office is continuing to investigate the death with help from the federal Drug Enforcement Administration. The sheriff’s office had said it was looking into whether opioid abuse was a factor, and a law enforcement official had said that painkillers were found on Prince when investigators arrived. “The M.E. report is one piece of the whole thing,” said Jason Kamerud, the county’s chief deputy sheriff. Fentanyl is a potent but dangerous painkiller, estimated to be more than 50 times more powerful than heroin, according to the Centers for Disease Control and Prevention. The report did not list how much fentanyl was found in Prince’s blood. Last year, federal officials issued an alert that said incidents and overdoses with fentanyl were “occurring at an alarming rate throughout the United States.” © 2016 The New York Times Company

Keyword: Pain & Touch; Drug Abuse
Link ID: 22283 - Posted: 06.04.2016

By Hanoch Ben-Yami Adam Bear opens his article, “What Neuroscience Says about Free Will,” by mentioning a few cases such as pressing snooze on the alarm clock or picking a shirt out of the closet. He continues with an assertion about these cases, and with a question: In each case, we conceive of ourselves as free agents, consciously guiding our bodies in purposeful ways. But what does science have to say about the true source of this experience? This is a bad start. To be aware of ourselves as free agents is not to have an experience. There’s no special tickle which tells you you’re free, no "freedom itch." Rather, to be aware of the fact that you acted freely is, among other things, to know that had you preferred to do something else in those circumstances, you would have done it. And in many circumstances we clearly know that this is the case, so in many circumstances we are aware that we act freely. No experience is involved, and so far there’s no question in Bear’s article for science to answer. Continuing with his alleged experience, Bear writes: …the psychologists Dan Wegner and Thalia Wheatley made a revolutionary proposal: The experience of intentionally willing an action, they suggested, is often nothing more than a post hoc causal inference that our thoughts caused some behavior. More than a revolutionary proposal, this is an additional confusion. What might "intentionally willing an action" mean? Is it to be contrasted with non-intentionally willing an action? But what could this stand for? © 2016 Scientific American

Keyword: Consciousness
Link ID: 22282 - Posted: 06.04.2016