Chapter 16.
By Quassim Cassam

Most people wonder at some point in their lives how well they know themselves. Self-knowledge seems a good thing to have, but hard to attain. To know yourself would be to know such things as your deepest thoughts, desires and emotions, your character traits, your values, what makes you happy and why you think and do the things you think and do. These are all examples of what might be called “substantial” self-knowledge, and there was a time when it would have been safe to assume that philosophy had plenty to say about the sources, extent and importance of self-knowledge in this sense.

Not any more. With few exceptions, philosophers of self-knowledge nowadays have other concerns. Here’s an example of the sort of thing philosophers worry about: suppose you are wearing socks and believe you are wearing socks. How do you know that that’s what you believe? Notice that the question isn’t “How do you know you are wearing socks?” but rather “How do you know you believe you are wearing socks?” Knowledge of such beliefs is seen as a form of self-knowledge. Other popular examples of self-knowledge in the philosophical literature include knowing that you are in pain and knowing that you are thinking that water is wet. For many philosophers the challenge is to explain how these types of self-knowledge are possible.

This is usually news to non-philosophers. Most imagine that philosophy tries to answer the Big Questions, and “How do you know you believe you are wearing socks?” doesn’t sound much like one of them. If knowing that you believe you are wearing socks qualifies as self-knowledge at all — and even that isn’t obvious — it is self-knowledge of the most trivial kind. Non-philosophers find it hard to figure out why philosophers would be more interested in trivial than in substantial self-knowledge. © 2014 The New York Times Company
Link ID: 20402 - Posted: 12.08.2014
By JOHN McWHORTER

“TELL me, why should we care?” he asks. It’s a question I can expect whenever I do a lecture about the looming extinction of most of the world’s 6,000 languages, a great many of which are spoken by small groups of indigenous people. For some reason the question is almost always posed by a man seated in a row somewhere near the back. Asked to elaborate, he wants to know why, if indigenous people choose to give up their ancestral language to join the modern world, we should consider it a tragedy. Languages have always died as time has passed. What’s so special about a language?

The answer I’m supposed to give is that each language, in the way it applies words to things and in the way its grammar works, is a unique window on the world. In Russian there’s no word just for blue; you have to specify whether you mean dark or light blue. In Chinese, you don’t say next week and last week but the week below and the week above. If a language dies, a fascinating way of thinking dies along with it.

I used to say something like that, but lately I have changed my answer. Certainly, experiments do show that a language can have a fascinating effect on how its speakers think. Russian speakers are on average 124 milliseconds faster than English speakers at identifying when dark blue shades into light blue. A French person is a tad more likely than an Anglophone to imagine a table as having a high voice if it were a cartoon character, because the word is marked as feminine in his language. This is cool stuff. But the question is whether such infinitesimal differences, perceptible only in a laboratory, qualify as worldviews — cultural standpoints or ways of thinking that we consider important. I think the answer is no. Furthermore, extrapolating cognitive implications from language differences is a delicate business.
In Mandarin Chinese, for example, you can express If you had seen my sister, you’d have known she was pregnant with the same sentence you would use to express the more basic If you see my sister, you know she’s pregnant. One psychologist argued some decades ago that this meant that Chinese makes a person less sensitive to such distinctions, which, let’s face it, is discomfitingly close to saying Chinese people aren’t as quick on the uptake as the rest of us. The truth is more mundane: Hypotheticality and counterfactuality are established more by context in Chinese than in English. © 2014 The New York Times Company
Link ID: 20401 - Posted: 12.08.2014
Carl Zimmer

For thousands of years, fishermen knew that certain fish could deliver a painful shock, even though they had no idea how it happened. Only in the late 1700s did naturalists contemplate a bizarre possibility: These fish might release jolts of electricity — the same mysterious substance as in lightning.

That possibility led an Italian physicist named Alessandro Volta in 1800 to build an artificial electric fish. He observed that electric stingrays had dense stacks of muscles, and he wondered if they allowed the animals to store electric charges. To mimic the muscles, he built a stack of metal disks, alternating between copper and zinc. Volta found that his model could store a huge amount of electricity, which he could unleash as shocks and sparks. Today, much of society runs on updated versions of Volta’s artificial electric fish. We call them batteries.

Now a new study suggests that electric fish have anticipated other kinds of technology. The research, by Kenneth C. Catania, a biologist at Vanderbilt University, reveals a remarkable sophistication in the way electric eels deploy their shocks. Dr. Catania, who published the study on Thursday in the journal Science, found that the eels use short shocks like a remote control on their victims, flushing their prey out of hiding. And then they can deliver longer shocks that paralyze their prey at a distance, in precisely the same way that a Taser stops a person cold.

“It shows how finely adapted eels are to attack prey,” said Harold H. Zakon, a biologist at the University of Texas at Austin, who was not involved in the study. He considered Dr. Catania’s findings especially impressive since scientists have studied electric eels for more than 200 years. © 2014 The New York Times Company
Link ID: 20400 - Posted: 12.06.2014
by Michael Slezak

The elusive link between obesity and high blood pressure has been pinned down to the action of leptin in the brain, and we might be able to block it with drugs. We've known for more than 30 years that fat and high blood pressure are linked, but finding what ties them together has been difficult. One of the favourite candidates has been leptin – a hormone produced by fat cells. Under normal circumstances, when fat cells produce leptin, the hormone sends the message that you've had enough food. But in people with obesity, the body stops responding to this message, and high levels of leptin build up.

Leptin is known to activate the regulatory network called the sympathetic nervous system, and it's the activation of sympathetic nerves on the kidneys that seems to be responsible for raising blood pressure. Leptin has thus been linked to blood pressure. However, conclusive evidence has been hard to come by.

Michael Cowley of Monash University in Melbourne, Australia, and his colleagues have now conducted a string of experiments that provide some of that evidence. Through genetic and drug experiments in mice, they have pinpointed an area in the mouse brain that increases blood pressure when it is exposed to high leptin levels. This region is called the dorsomedial hypothalamus, and is thought to be involved in controlling energy consumption. Their findings show that high levels of leptin do indeed boost blood pressure, via this brain region. © Copyright Reed Business Information Ltd.
Link ID: 20398 - Posted: 12.06.2014
By Neuroskeptic

An important new study could undermine the concept of ‘endophenotypes’ – and thus derail one of the most promising lines of research in neuroscience and psychiatry. The findings are out now in Psychophysiology. Unusually, an entire special issue of the journal is devoted to presenting the various results of the study, along with commentary, but here’s the summary paper: Knowns and unknowns for psychophysiological endophenotypes by Minnesota researchers William Iacono, Uma Vaidyanathan, Scott Vrieze and Stephen Malone.

In a nutshell, the researchers ran seven different genetic studies to try to find the genetic basis of a total of seventeen neurobehavioural traits, also known as ‘endophenotypes’. Endophenotypes are a hot topic in psychiatric neuroscience, although the concept is somewhat vague. The motivation behind interest in endophenotypes comes mainly from the failure of recent studies to pin down the genetic cause of most psychiatric syndromes.

Essentially, an endophenotype is some trait, which could be almost anything, which is supposed to be related to (or part of) a psychiatric disorder or symptom, but which is “closer to genetics” or “more biological” than the disorder itself. Rather than thousands of genes all mixed together to determine the risk of a psychiatric disorder, each endophenotype might be controlled by only a handful of genes – which would thus be easier to find.
by Viviane Callier

It's a fresh problem. People who smoke menthol cigarettes often smoke more frequently and can be less likely to quit – and it could be because fresh-tasting menthol is making their brains more sensitive to nicotine.

How menthol enhances nicotine addiction has been something of a mystery. Now, Brandon Henderson at the California Institute of Technology in Pasadena and his colleagues have shown that exposing mice to menthol alone causes them to develop more nicotinic receptors, the receptors in the brain that nicotine targets.

Menthol can be used medically to relieve minor throat irritations, and menthol-flavoured cigarettes were first introduced in the 1920s. But smokers of menthol cigarettes can be less likely to quit. In one study of giving up smoking, 50 per cent of unflavoured-cigarette smokers were able to quit, while menthol smokers showed quitting rates as low as 23 per cent, depending on ethnicity.

Over time, smokers of both menthol and unflavoured cigarettes acquire more receptors for nicotine, particularly in neurons involved in the body's neural pathways for reward and motivation. And research last year showed that smokers of menthol cigarettes develop even more of these receptors than smokers of unflavoured cigarettes.

To understand how menthol may be altering the brain, Henderson's team exposed mice to either menthol with nicotine, or menthol alone. They found that, even without nicotine, menthol increased the numbers of brain nicotinic receptors. They saw a 78 per cent increase in one particular brain region – the ventral tegmental area – which is involved in the dopamine signalling pathway that mediates addiction. © Copyright Reed Business Information Ltd.
Keyword: Drug Abuse
Link ID: 20395 - Posted: 12.06.2014
Injections of a new drug may partially relieve paralyzing spinal cord injuries, based on indications from a study in rats, which was partly funded by the National Institutes of Health. The results demonstrate how fundamental laboratory research may lead to new therapies. “We’re very excited at the possibility that millions of people could, one day, regain movements lost during spinal cord injuries,” said Jerry Silver, Ph.D., professor of neurosciences, Case Western Reserve University School of Medicine, Cleveland, and a senior investigator of the study published in Nature. Every year, tens of thousands of people are paralyzed by spinal cord injuries. The injuries crush and sever the long axons of spinal cord nerve cells, blocking communication between the brain and the body and resulting in paralysis below the injury. On a hunch, Bradley Lang, Ph.D., the lead author of the study and a graduate student in Dr. Silver’s lab, came up with the idea of designing a drug that would help axons regenerate without having to touch the healing spinal cord, as current treatments may require. “Originally this was just a side project we brainstormed in the lab,” said Dr. Lang. After spinal cord injury, axons try to cross the injury site and reconnect with other cells but are stymied by scarring that forms after the injury. Previous studies suggested their movements are blocked when the protein tyrosine phosphatase sigma (PTP sigma), an enzyme found in axons, interacts with chondroitin sulfate proteoglycans, a class of sugary proteins that fill the scars.
Link ID: 20394 - Posted: 12.04.2014
Ewen Callaway

A shell found on Java in the late 1800s was recently found to bear markings that seem to have been carved intentionally half a million years ago. The photograph is about 15 millimetres wide.

A zigzag engraving on a shell from Indonesia is the oldest abstract marking ever found. But what is most surprising about the half-a-million-year-old doodle is its likely creator — the human ancestor Homo erectus. "This is a truly spectacular find and has the potential to overturn the way we look at early Homo," says Nick Barton, an archaeologist at the University of Oxford, UK, who was not involved in the discovery, which is described in a paper published online in Nature on 3 December.

By 40,000 years ago, and probably much earlier, anatomically modern humans — Homo sapiens — were painting on cave walls in places as far apart as Europe and Indonesia. Simpler ochre engravings found in South Africa date to 100,000 years ago. Earlier this year, researchers reported a 'hashtag' engraving in a Gibraltar cave once inhabited by Neanderthals. That was the first evidence for drawing in any extinct species. But until the discovery of the shell engraving, nothing approximating art has been ascribed to Homo erectus. The species emerged in Africa about 2 million years ago and trekked as far as the Indonesian island of Java, before going extinct around 140,000 years ago. Most palaeoanthropologists consider the species to be the direct ancestor of both humans and Neanderthals. © 2014 Nature Publishing Group
Link ID: 20390 - Posted: 12.04.2014
Jia You Ever wonder how cockroaches scurry around in the dark while you fumble to switch on the kitchen light? Scientists know the insect navigates with its senses of touch and smell, but now they have found a new piece to the puzzle: A roach can also see its environment in pitch darkness, by pooling visual signals from thousands of light-sensitive cells in each of its compound eyes, known as photoreceptors. To test the sensitivity of roach vision, researchers created a virtual reality system for the bugs, knowing that when the environment around a roach rotates, the insect spins in the same direction to stabilize its vision. First, they placed the roach on a trackball, where it couldn’t navigate with its mouthpart or antennae. Then the scientists spun black and white gratings around the insect, illuminated by light at intensities ranging from a brightly lit room to a moonless night. The roach responded to its rotating environment in light as dim as 0.005 lux, when each of its photoreceptors was picking up only one photon every 10 seconds, the researchers report online today in The Journal of Experimental Biology. They suggest that the cockroach must rely on unknown neural processing in the deep ganglia, an area in the base of the brain involved in coordinating movements, to process such complex visual information. Understanding this mechanism could help scientists design better imaging systems for night vision. © 2014 American Association for the Advancement of Science.
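The pooling mechanism described above is easy to illustrate numerically: at one photon per receptor every 10 seconds, any single photoreceptor is nearly silent over a one-second window, but a sum over thousands of receptors gives a stable signal. Here is a toy Poisson simulation; the receptor count is an illustrative assumption, not a figure from the study:

```python
import math
import random

random.seed(42)
RATE = 0.1          # photons per receptor per second (one per 10 s, as reported)
N_RECEPTORS = 5000  # illustrative assumption; each compound eye has thousands

def poisson_sample(lam):
    """Draw one Poisson-distributed photon count (Knuth's method)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

one_receptor = poisson_sample(RATE)  # a single receptor: almost always 0 photons
pooled = sum(poisson_sample(RATE) for _ in range(N_RECEPTORS))

print(f"single receptor in 1 s: {one_receptor} photons")
print(f"pooled over {N_RECEPTORS} receptors: {pooled} photons (expected ~500)")
```

Averaging over many noisy detectors is the same trick long-exposure photography relies on; the open question the researchers point to is where in the roach's brain that summation actually happens.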
Link ID: 20389 - Posted: 12.04.2014
Katharine Sanderson Although we do not have X-ray vision like Superman, we have what could seem to be another superpower: we can see infrared light — beyond what was traditionally considered the visible spectrum. A series of experiments now suggests that this little-known, puzzling effect could occur when pairs of infrared photons simultaneously hit the same pigment protein in the eye, providing enough energy to set in motion chemical changes that allow us to see the light. Received wisdom, and the known chemistry of vision, say that human eyes can see light with wavelengths between 400 (blue) and 720 nanometres (red). Although this range is still officially known as the 'visible spectrum', the advent of lasers with very specific infrared wavelengths brought reports that people were seeing laser light with wavelengths above 1,000 nm as white, green and other colours. Krzysztof Palczewski, a pharmacologist at Case Western Reserve University in Cleveland, Ohio, says that he has seen light of 1,050 nm from a low-energy laser. “You see it with your own naked eye,” he says. To find out whether this ability is common or a rare occurrence, Palczewski scanned the retinas of 30 healthy volunteers with a low-energy beam of light, and changed its wavelength. As the wavelength increased into the infrared (IR), participants found the light at first harder to detect, but at around 1,000 nm the light became easier to see. How humans can do this has puzzled scientists for years. Palczewski wanted to test two leading hypotheses to explain infrared vision. © 2014 Nature Publishing Group,
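The two-photon explanation is consistent with simple photon-energy arithmetic: a photon's energy is E = hc/λ, so two 1,000 nm photons absorbed together deliver the energy of one 500 nm (green) photon, well inside the visible band. A quick sketch, with wavelengths chosen for illustration:

```python
# Photon energy E = h*c / wavelength; two photons absorbed together sum energies.
H = 6.626e-34  # Planck constant, J*s
C = 2.998e8    # speed of light, m/s
EV = 1.602e-19 # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of one photon of the given wavelength, in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

single_ir = photon_energy_ev(1000)  # one infrared photon
paired_ir = 2 * single_ir           # two IR photons hitting the same pigment
green = photon_energy_ev(500)       # one visible (green) photon

print(f"one 1000 nm photon:  {single_ir:.2f} eV")   # 1.24 eV
print(f"two 1000 nm photons: {paired_ir:.2f} eV")   # 2.48 eV
print(f"one 500 nm photon:   {green:.2f} eV")       # 2.48 eV
```

This is only the energy bookkeeping; it says nothing about how likely a simultaneous two-photon hit is, which is presumably why it takes a laser's concentrated light for the effect to show up at all.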
Link ID: 20388 - Posted: 12.03.2014
By CHRISTOPHER F. CHABRIS and DANIEL J. SIMONS

NEIL DEGRASSE TYSON, the astrophysicist and host of the TV series “Cosmos,” regularly speaks to audiences on topics ranging from cosmology to climate change to the appalling state of science literacy in America. One of his staple stories hinges on a line from President George W. Bush’s speech to Congress after the 9/11 terrorist attacks. In a 2008 talk, for example, Dr. Tyson said that in order “to distinguish we from they” — meaning to divide Judeo-Christian Americans from fundamentalist Muslims — Mr. Bush uttered the words “Our God is the God who named the stars.” Dr. Tyson used the anecdote, which implied that President Bush was prejudiced against Islam, to make a broader point about scientific awareness: Two-thirds of the named stars actually have Arabic names, given to them at a time when Muslims led the world in astronomy — and Mr. Bush might not have said what he did if he had known this fact.

This is a powerful example of how our biases can blind us. But not in the way Dr. Tyson thought. Mr. Bush wasn’t blinded by religious bigotry. Instead, Dr. Tyson was fooled by his faith in the accuracy of his own memory. In his post-9/11 speech, Mr. Bush actually said, “The enemy of America is not our many Muslim friends,” and he said nothing about the stars. Mr. Bush had indeed once said something like what Dr. Tyson remembered; in 2003 Mr. Bush said, in tribute to the astronauts lost in the Columbia space shuttle explosion, that “the same creator who names the stars also knows the names of the seven souls we mourn today.” Critics pointed these facts out; some accused Dr. Tyson of lying and argued that the episode should call into question his reliability as a scientist and a public advocate. © 2014 The New York Times Company
Keyword: Learning & Memory
Link ID: 20387 - Posted: 12.03.2014
Katie Langin

In the first couple of years after birth, sea lion sons seem to be more reliant on their mothers—consuming more milk and sticking closer to home—than sea lion daughters are, according to a study on Galápagos sea lions published in the December issue of the journal Animal Behaviour. The young males venture out to sea on occasion, but their female counterparts dive for their own food much more often.

The curious thing is, it's not like the young males aren't capable of diving. As one-year-olds, males can dive to the same depth as females (33 feet, or 10 meters, on a typical dive). It's also not like their mother's milk is always on hand. Sea lion moms frequently leave their growing offspring for days at a time to find food at sea. And yet, despite all this, for some reason sons are far less likely than daughters to take to the sea and seek out their own food.

"We always saw the [young] males around the colony surfing in tide pools, pulling the tails of marine iguanas, resting, sleeping," said Paolo Piedrahita, a Ph.D. student at Bielefeld University in Germany and the lead author of the study. "It's amazing. You can see an animal—40 kilograms [88 pounds]—just resting, waiting for mom." © 1996-2014 National Geographic Society.
Close to 8 percent of Americans have depression of some kind, but only about a third of those are getting treated for it, a major federal survey finds. The most depressed group? Women ages 40 to 59. More than 12 percent of women that age say they're depressed. The least? Teenage boys. Just 4 percent of them have been diagnosed with depression.

"During 2009-2012, 7.6 percent of Americans aged 12 and over had depression (moderate or severe depressive symptoms in the past 2 weeks)," Laura Pratt and Debra Brody of the National Center for Health Statistics wrote. "About 3 percent of Americans aged 12 and over had severe depressive symptoms," they added. "Of those with severe symptoms, 35 percent reported having contact with a mental health professional in the past year."

This is troubling, because depression is difficult to treat and responds best when people are given a combination of drugs and counseling. People living below the poverty level were more than twice as likely to have depression as people making more money. Almost 43 percent of people with severe depressive symptoms reported serious difficulties in work, home and social activities.
Link ID: 20383 - Posted: 12.03.2014
By Ryan Bradley

Five years ago Viviana Gradinaru was slicing thin pieces of mouse brain in a neurobiology lab, slowly compiling images of the two-dimensional slivers for a three-dimensional computer rendering. In her spare time, she would go to see the Body Worlds exhibit. She was especially fascinated by the “plasticized” remains of the human circulatory system on display. It struck her that much of what she was doing in the lab could be done more efficiently with a similar process.

“Tissue clearing” has been around for more than a century, but existing methods involve soaking tissue samples in solvents, which is slow and usually destroys the fluorescent proteins necessary for marking certain cells of interest. To create a better approach, Gradinaru, at the time a graduate student, and her colleagues in neuroscientist Karl Deisseroth's lab focused on replacing the tissue's lipid molecules, which make it opaque. To keep the tissue from collapsing, however, the replacement would need to give it structure, as lipids do.

The first step was to euthanize a rodent and pump formaldehyde into its body, through its heart. Next they removed the skin and filled its blood vessels with acrylamide monomers, white, odorless, crystalline compounds. The monomers created a supportive hydrogel mesh, replacing the lipids and clearing the tissue. Before long, they could render an entire mouse body transparent in two weeks. Soon they were using transparent mice to map complete mouse nervous systems. The transparency made it possible for them to identify peripheral nerves—tiny bundles of nerves that are poorly understood—and to map the spread of viruses across the mouse's blood-brain barrier, which they did by marking the virus with a fluorescent agent, injecting it into the mouse's tail and watching it spread into the brain. © 2014 Scientific American
Keyword: Brain imaging
Link ID: 20382 - Posted: 12.03.2014
By Joyce Cohen

Like many people, George Rue loved music. He played guitar in a band. He attended concerts often. In his late 20s, he started feeling a dull ache in his ears after musical events. After a blues concert almost nine years ago, “I left with terrible ear pain and ringing, and my life changed forever,” said Mr. Rue, 45, of Waterford, Conn. He perceived all but the mildest sounds as not just loud, but painful. It hurt to hear.

Now, he has constant, burning pain in his ears, along with ringing, or tinnitus, so loud it’s “like a laser beam cutting a sheet of steel.” Everyday noise, like a humming refrigerator, adds a feeling of “needles shooting into my ears,” said Mr. Rue, who avoids social situations and was interviewed by email because talking by phone causes pain.

Mr. Rue was given a diagnosis of hyperacusis, a nonspecific term that has assorted definitions, including “sound sensitivity,” “decreased sound tolerance,” and “a loudness tolerance problem.” But hyperacusis sometimes comes with ear pain, too, a poorly understood medical condition that is beginning to receive more serious attention. “This is clearly an emerging field,” said Richard Salvi of the Department of Communicative Disorders and Sciences at the University at Buffalo and a scientific adviser to Hyperacusis Research, a nonprofit group that funds research on the condition. “Further work is required to understand the symptoms, etiology and underlying neural mechanisms.”

Loud noises, even when they aren’t painful, can damage both the sensory cells and sensory nerve fibers of the inner ear over time, causing hearing impairment, said M. Charles Liberman, a professor of otology at Harvard Medical School, who heads a hearing research lab at the Massachusetts Eye and Ear Infirmary. And for some people who are susceptible, possibly because of some combination of genes that gives them “tender” ears, noise sets in motion “an anomalous response,” he said. © 2014 The New York Times Company
Link ID: 20381 - Posted: 12.02.2014
By David Z. Hambrick

If you’ve spent more than about 5 minutes surfing the web, listening to the radio, or watching TV in the past few years, you will know that cognitive training—better known as “brain training”—is one of the hottest new trends in self improvement. Lumosity, which offers web-based tasks designed to improve cognitive abilities such as memory and attention, boasts 50 million subscribers and advertises on National Public Radio. Cogmed claims to be “a computer-based solution for attention problems caused by poor working memory,” and BrainHQ will help you “make the most of your unique brain.” The promise of all of these products, implied or explicit, is that brain training can make you smarter—and make your life better.

Yet, according to a statement released by the Stanford University Center on Longevity and the Berlin Max Planck Institute for Human Development, there is no solid scientific evidence to back up this promise. Signed by 70 of the world’s leading cognitive psychologists and neuroscientists, the statement minces no words: “The strong consensus of this group is that the scientific literature does not support claims that the use of software-based ‘brain games’ alters neural functioning in ways that improve general cognitive performance in everyday life, or prevent cognitive slowing and brain disease.”

The statement also cautions that although some brain training companies “present lists of credentialed scientific consultants and keep registries of scientific studies pertinent to cognitive training…the cited research is [often] only tangentially related to the scientific claims of the company, and to the games they sell.” © 2014 Scientific American
Keyword: Learning & Memory
Link ID: 20380 - Posted: 12.02.2014
By Nicholas Bakalar Short-term psychotherapy may be an effective way to prevent repeated suicide attempts. Using detailed Danish government health records, researchers studied 5,678 people who had attempted suicide and then received a program of short-term psychotherapy based on needs, including crisis intervention, cognitive therapy, behavioral therapy, and psychodynamic and psychoanalytic treatment. They compared them with 17,034 people who had attempted suicide but received standard care, including admission to a hospital, referral for treatment or discharge with no referral. They were able to match the groups in more than 30 genetic, health, behavioral and socioeconomic characteristics. The study is online in Lancet Psychiatry. Treatment focused on suicide prevention and comprised eight to 10 weeks of individual sessions. Over a 20-year follow-up, 16.5 percent of the treated group attempted suicide again, compared with 19.1 percent of the untreated group. In the treated group, 1.6 percent died by suicide, compared with 2.2 percent of the untreated. “Suicide is a rare event,” said the lead author, Annette Erlangsen, an associate professor at the Johns Hopkins Bloomberg School of Public Health, “and you need a huge sample to study it. We had that, and we were able to find a significant effect.” The authors estimate that therapy prevented 145 suicide attempts and 30 deaths by suicide in the group studied. © 2014 The New York Times Company
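The authors' headline estimates follow directly from the rates and group size reported above; a quick arithmetic check, using the article's numbers and assuming the estimates were scaled to the treated group:

```python
# Reproduce the "attempts/deaths prevented" estimates from the reported rates.
treated_n = 5678  # people who received short-term psychotherapy

repeat_treated, repeat_untreated = 0.165, 0.191  # repeat-attempt rates
death_treated, death_untreated = 0.016, 0.022    # death-by-suicide rates

attempts_prevented = (repeat_untreated - repeat_treated) * treated_n
deaths_prevented = (death_untreated - death_treated) * treated_n

print(f"estimated repeat attempts prevented: {attempts_prevented:.0f}")  # 148
print(f"estimated deaths prevented: {deaths_prevented:.0f}")             # 34
```

These come out close to the authors' published figures of 145 and 30; the small gaps presumably reflect rounding in the percentages quoted above and the matching method used in the actual analysis.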
Link ID: 20379 - Posted: 12.02.2014
By Sarah C. P. Williams Craving a stiff drink after the holiday weekend? Your desire to consume alcohol, as well as your body’s ability to break down the ethanol that makes you tipsy, dates back about 10 million years, researchers have discovered. The new finding not only helps shed light on the behavior of our primate ancestors, but also might explain why alcoholism—or even the craving for a single drink—exists in the first place. “The fact that they could put together all this evolutionary history was really fascinating,” says Brenda Benefit, an anthropologist at New Mexico State University, Las Cruces, who was not involved in the study. Scientists knew that the human ability to metabolize ethanol—allowing people to consume moderate amounts of alcohol without getting sick—relies on a set of proteins including the alcohol dehydrogenase enzyme ADH4. Although all primates have ADH4, which performs the crucial first step in breaking down ethanol, not all can metabolize alcohol; lemurs and baboons, for instance, have a version of ADH4 that’s less effective than the human one. Researchers didn’t know how long ago people evolved the more active form of the enzyme. Some scientists suspected it didn’t arise until humans started fermenting foods about 9000 years ago. Matthew Carrigan, a biologist at Santa Fe College in Gainesville, Florida, and colleagues sequenced ADH4 proteins from 19 modern primates and then worked backward to determine the sequence of the protein at different points in primate history. Then they created copies of the ancient proteins coded for by the different gene versions to test how efficiently each metabolized ethanol. They showed that the most ancient forms of ADH4—found in primates as far back as 50 million years ago—only broke down small amounts of ethanol very slowly. 
But about 10 million years ago, the team reports online today in the Proceedings of the National Academy of Sciences, a common ancestor of humans, chimpanzees, and gorillas evolved a version of the protein that was 40 times more efficient at ethanol metabolism. © 2014 American Association for the Advancement of Science.
by Andy Coghlan

What would Stuart Little make of it? Mice have been created whose brains are half human. As a result, the animals are smarter than their siblings. The idea is not to mimic fiction, but to advance our understanding of human brain diseases by studying them in whole mouse brains rather than in dishes.

The altered mice still have mouse neurons – the "thinking" cells that make up around half of all their brain cells. But practically all the glial cells in their brains, the ones that support the neurons, are human. "It's still a mouse brain, not a human brain," says Steve Goldman of the University of Rochester Medical Center in New York. "But all the non-neuronal cells are human."

Goldman's team extracted immature glial cells from donated human fetuses. They injected them into mouse pups where they developed into astrocytes, a star-shaped type of glial cell. Within a year, the mouse glial cells had been completely usurped by the human interlopers. The 300,000 human cells each mouse received multiplied until they numbered 12 million, displacing the native cells. "We could see the human cells taking over the whole space," says Goldman. "It seemed like the mouse counterparts were fleeing to the margins."

Astrocytes are vital for conscious thought, because they help to strengthen the connections between neurons, called synapses. Their tendrils are involved in coordinating the transmission of electrical signals across synapses. © Copyright Reed Business Information Ltd.
By CATHERINE SAINT LOUIS

Nearly 55 percent of infants nationwide are put to bed with soft blankets or covered by a comforter, even though such bedding raises the chances of suffocation or sudden infant death syndrome, federal researchers reported Monday. Their study, published in the journal Pediatrics, is the first to estimate how many infants sleep with potentially hazardous quilts, bean bags, blankets or pillows. Despite recommendations to avoid putting anything but a baby in a crib, two-thirds of black and Latino parents still use bedding that is both unnecessary and unsafe, the study also found.

“I was startled a little bit by the number of people still using bedding in the sleep area,” said Dr. Michael Goodstein, a neonatologist in York, Pa., who serves on a task force on sleep-related infant deaths at the American Academy of Pediatrics. “Sleeping face down on soft bedding increases the risks of SIDS 21-fold.” Among the risk factors for SIDS, “bedding has fallen through the cracks,” said Dr. Thomas G. Keens, the chairman of the California SIDS Advisory Council. “This article is a wake-up call.”

The new analysis looked at data gathered from 1993 to 2010 in the National Infant Sleep Position Study, which surveyed a random sample of nearly 19,000 parents by telephone. Use of infant bedding declined roughly 23 percent annually from 1993 to 2000. In recent years, however, the declines have slowed or stalled entirely. From 2001 to 2010, use of inappropriate bedding for white and Hispanic infants declined just 5 to 7 percent annually. There was no decline in the use of such bedding for black infants. Parents in the new study were not asked their reasons for using bedding. Previous research has found that they worry infants will be cold, or that the crib mattress is too hard. © 2014 The New York Times Company