Chapter 16.
Michael C. Corballis

In the quest to identify what might be unique to the human mind, one might well ask whether non-human animals have a theory of mind. In fiction, perhaps, they do. Eeyore, the morose donkey in Winnie-the-Pooh, at one point complains: ‘A little consideration, a little thought for others, makes all the difference.’ In real life, some animals do seem to show empathy toward others in distress. The primatologist Frans de Waal photographed a juvenile chimpanzee placing a consoling arm around an adult chimpanzee in distress after losing a fight, but suggests that monkeys do not do this. However, one study shows that monkeys won’t pull a chain to receive food if doing so causes a painful stimulus to be delivered to another monkey, evidently understanding that it will cause distress. Even mice, according to another study, react more intensely to pain if they perceive other mice in pain. It is often claimed that dogs show empathy toward their human owners, whereas cats do not. Cats don’t empathise—they exploit.

Understanding what others are thinking, or what they believe, can be complicated, but perceiving emotion in others is much more basic to survival, and no doubt has ancient roots in evolution. Different emotions usually give different outward signs. In Shakespeare’s “Henry V,” the King recognises the signs of rage, urging his troops to

. . . imitate the action of the tiger;
Stiffen the sinews, summon up the blood,
Disguise fair nature with hard-favour’d rage;
Then lend the eye a terrible aspect . . .

The human enemy will read the emotion of Henry’s troops, just as the antelope will read the emotion of the marauding tiger. Perhaps the best treatise on the outward signs of emotion is Charles Darwin’s “The Expression of the Emotions in Man and Animals,” which details the way fear and anger are expressed in cats and dogs, although he does not neglect the positive emotions: © 2015 Salon Media Group, Inc.
By FRANCES ROBLES

MIAMI — A hazardous new synthetic drug originating in China is being blamed for 18 recent deaths in a single South Florida county, as police grapple with an inexpensive narcotic that causes exaggerated strength and dangerous paranoid hallucinations.

On Thursday, the Fort Lauderdale police killed a man, reportedly high on the man-made street drug, alpha-PVP, known more commonly as flakka, who had held a woman hostage with a knife to her throat. The shooting of Javoris Washington, 29, was the latest in a series of volatile episodes that the police in South Florida have faced with highly aggressive drug users.

Law enforcement agencies have had difficulty tamping down a surge in synthetic drugs, which were banned after becoming popular in clubs five years ago only to re-emerge deadlier than ever under new formulations. As soon as legislation catches up with the latest craze, manufacturers design a new drug to take its place, federal and local law enforcement agencies say. In Broward County, which includes Fort Lauderdale and is considered ground zero for the new drug, there have been 18 flakka-related fatalities since September, the chief medical examiner there said.

“I have never seen such a rash of cases, all associated with the same substance,” said James N. Hall, an epidemiologist at Nova Southeastern University who has studied the Florida drug market for decades. “It’s probably the worst I have seen since the peak of crack cocaine. Rather than a drug, it’s really a poison.” © 2015 The New York Times Company
Keyword: Drug Abuse
Link ID: 20974 - Posted: 05.25.2015
by Helen Thomson

A brain implant that can decode what someone wants to do has allowed a man paralysed from the neck down to control a robotic arm with unprecedented fluidity – and enjoy a beer at his own pace. Erik Sorto was left unable to move any of his limbs after an accident severed his spinal cord 12 years ago. People with similar injuries have previously controlled prosthetic limbs using implants placed in their motor cortex – an area of the brain responsible for the mechanics of movement. This is far from ideal because it results in delayed, jerky motions as the person thinks about all the individual aspects of the movement. When reaching for a drink, for example, they would have to think about moving their arm forward, then left, then opening their hand, then closing their hand around the cup and so on.

Richard Andersen at the California Institute of Technology in Pasadena and his colleagues hoped they could achieve a more fluid movement by placing an implant in the posterior parietal cortex – a part of the brain involved in planning motor movements. "We thought this would allow us to decode brain activity associated with the overall goal of a movement – for example, 'I want to pick up that cup', rather than the individual components," said Andersen at the NeuroGaming Conference in San Francisco, California, where he presented the work this month. © Copyright Reed Business Information Ltd.
Link ID: 20972 - Posted: 05.23.2015
Nala Rogers

Alzheimer’s disease may have evolved alongside human intelligence, researchers report in a paper posted this month on bioRxiv. The study finds evidence that 50,000 to 200,000 years ago, natural selection drove changes in six genes involved in brain development. This may have helped to increase the connectivity of neurons, making modern humans smarter as they evolved from their hominin ancestors. But that new intellectual capacity was not without cost: the same genes are implicated in Alzheimer's disease. Kun Tang, a population geneticist at the Shanghai Institutes for Biological Sciences in China who led the research, speculates that the memory disorder developed as ageing brains struggled with new metabolic demands imposed by increasing intelligence. Humans are the only species known to develop Alzheimer's; the disease is absent even in closely related primate species such as chimpanzees.

Tang and his colleagues searched modern human DNA for evidence of this ancient evolution. They examined the genomes of 90 people with African, Asian or European ancestry, looking for patterns of variation driven by changes in population size and natural selection.

Marked by selection

The analysis was tricky, because the two effects can mimic each other. To control for the effects of population changes — thereby isolating the signatures of natural selection — the researchers estimated how population sizes changed over time. Then they identified genome segments that did not match up with the population history, revealing the DNA stretches that were most likely shaped by selection. © 2015 Nature Publishing Group
By Jason G. Goldman In 1970 child welfare authorities in Los Angeles discovered that a 14-year-old girl referred to as “Genie” had been living in nearly total social isolation from birth. An unfortunate participant in an unintended experiment, Genie proved interesting to psychologists and linguists, who wondered whether she could still acquire language despite her lack of exposure to it. Genie did help researchers better define the critical period for learning speech—she quickly acquired a vocabulary but did not gain proficiency with grammar—but thankfully, that kind of case study comes along rarely. So scientists have turned to surrogates for isolation experiments. The approach is used extensively with parrots, songbirds and hummingbirds, which, like us, learn how to verbally communicate over time; those abilities are not innate. Studying most vocal-learning mammals—for example, elephants, whales, sea lions—is not practical, so Tel Aviv University zoologists Yosef Prat, Mor Taub and Yossi Yovel turned to the Egyptian fruit bat, a vocal-learning species that babbles before mastering communication, as a child does. The results of their study, the first to raise bats in a vocal vacuum, were published this spring in the journal Science Advances. Five bat pups were reared by their respective mothers in isolation, so the pups heard no adult conversations. After weaning, the juveniles were grouped together and exposed to adult bat chatter through a speaker. A second group of five bats was raised in a colony, hearing their species' vocal interactions from birth. Whereas the group-raised bats eventually swapped early babbling for adult communication, the isolated bats stuck with their immature vocalizations well into adolescence. © 2015 Scientific American
I’m fairly new to San Francisco, so I’m still building my mental database of restaurants I like. But this weekend, I know exactly where I’m heading for dinner: Nick’s Crispy Tacos. Then, when I get home, I’m kicking back to a documentary I’ve never heard of, a Mongolian drama called The Cave of the Yellow Dog. An artificially intelligent algorithm told me I’d enjoy both these things. I’d like the restaurant, the machine told me, because I prefer Mexican food and wine bars “with a casual atmosphere,” and the movie because “drama movies are in my digital DNA.” Besides, the title shows up around the web next to Boyhood, another film I like.

Nara Logics, the company behind this algorithm, is the brainchild (pun intended) of its CTO and cofounder, Nathan Wilson, a former research scientist at MIT who holds a doctorate in brain and cognitive science. Wilson spent his academic career and early professional life immersed in studying neural networks—software that mimics how a human mind thinks and makes connections. Nara Logics’ brain-like platform, under development for the past five years, is the product of all that thinking. The Cambridge, Massachusetts-based company includes on its board such bigwig neuroscientists as Sebastian Seung from Princeton, Mriganka Sur from MIT, and Emily Hueske of Harvard’s Center for Brain Science.

So what does all that neuroscience brain power have to offer the tech world, when so many Internet giants—from Google and Facebook to Microsoft and Baidu—already have specialized internal teams looking to push the boundaries of artificial intelligence? These behemoths use AI to bolster their online services, everything from on-the-fly translations to image recognition services. But to hear Wilson tell it, all that in-house work still leaves a large gap—namely, all the businesses and people who could benefit from access to an artificial brain but can’t build it themselves.
“We’re building a pipeline, and taking insights out of the lab to intelligent, applied use cases,” Wilson tells WIRED. “Nara is AI for the people.”
Link ID: 20967 - Posted: 05.23.2015
Carl Zimmer

Octopuses, squid and cuttlefish — a group of mollusks known as cephalopods — are the ocean’s champions of camouflage. Octopuses can mimic the color and texture of a rock or a piece of coral. Squid can give their skin a glittering sheen to match the water they are swimming in. Cuttlefish will even cloak themselves in black and white squares should a devious scientist put a checkerboard in their aquarium. Cephalopods can perform these spectacles thanks to a dense fabric of specialized cells in their skin. But before a cephalopod can take on a new disguise, it needs to perceive the background that it is going to blend into.

Cephalopods have large, powerful eyes to take in their surroundings. But two new studies in The Journal of Experimental Biology suggest that they have another way to perceive light: their skin. It’s possible that these animals have, in effect, evolved a body-wide eye. When light enters the eye of a cephalopod, it strikes molecules in the retina called opsins. The collision starts a biochemical reaction that sends an electric signal from the cephalopod’s eye to its brain. (We produce a related form of opsins in our eyes as well.) In 2010, Roger T. Hanlon, a biologist at the Marine Biological Laboratory in Woods Hole, Mass., and his colleagues reported that cuttlefish make opsins in their skin, as well. This discovery raised the tantalizing possibility that the animals could use their skin to sense light much as their eyes do. Dr. Hanlon teamed up with Thomas W. Cronin, a visual ecologist at the University of Maryland Baltimore County, and his colleagues to take a closer look. © 2015 The New York Times Company
by Karl Gruber

"As clever as a guppy" is not a huge compliment. But intelligence does matter to these tropical fish: big-brained guppies are more likely to outwit predators and live longer than their dim-witted peers. Alexander Kotrschal at Stockholm University, Sweden, and his colleagues bred guppies (Poecilia reticulata) to have brains that were bigger or smaller than average. His team previously showed that bigger brains meant smarter fish. When put in an experimental stream with predators, big-brained females were eaten about 13 per cent less often than small-brained ones. There was no such link in males, and the researchers suspect that their bright colours may counter any benefits of higher intelligence. They did find, Kotrschal says, that large-brained males were faster swimmers and better at learning and remembering the location of a female.

"This is exciting because it confirms a critical mechanism for brain size evolution," says Kotrschal. It shows, he adds, that interactions between predator and prey can affect brain size. It might seem obvious that bigger brains would help survival. Yet previous research simply found a correlation between the two, leaving the possibility open that some third factor may have been driving the effect. Now, direct brain size manipulation allowed Kotrschal's team to pin it down as a cause of better survival. "This is the first time anyone has tested whether a larger brain confers a survival benefit," says Kotrschal. "The fact that large-brained females survived better in a naturalistic setting is the first experimental proof that a larger brain is beneficial for the fitness of its bearer. This is like watching evolution happen and shows how brain size evolves." © Copyright Reed Business Information Ltd.
Stacey Vanek Smith

I'm in a booth with a computer program called Ellie. She's on a screen in front of me. Ellie was designed to diagnose post-traumatic stress disorder and depression, and when I get into the booth she starts asking me questions — about my family, my feelings, my biggest regrets. Emotions seem really messy and hard for a machine to understand. But Skip Rizzo, a psychologist who helped design Ellie, thought otherwise. When I answer Ellie's questions, she listens. But she doesn't process the words I'm saying. She analyzes my tone. A camera tracks every detail of my facial expressions.

The doctor may see you now

"Contrary to popular belief, depressed people smile as many times as non-depressed people," Rizzo says. "But their smiles are less robust and of less duration. It's almost like polite smiles rather than real, robust, coming from your inner-soul type of a smile." Ellie compares my smile to a database of soldiers who have returned from combat. Is my smile genuine? Is it forced? Ellie also listens for pauses. She watches to see whether I look off to the side or down. If I lean forward, she notices. All this analysis seems to work: In studies, Ellie could detect signs of PTSD and depression about as well as a large pool of psychologists.

Jody Mitic served with the Canadian forces in Afghanistan. He lost both of his feet to a bomb. And Mitic remembers that Ellie's robot-ness helped him open up. "Ellie seemed to just be listening," Mitic says. "A lot of therapists, you can see it in their eyes, when you start talking about some of the grislier details of stuff that you might have seen or done, they are having a reaction." © 2015 NPR
By Susan Cosier Once a memory is lost, is it gone forever? Most research points to yes. Yet a study published in the online journal eLife now suggests that traces of a lost memory might remain in a cell's nucleus, perhaps enabling future recall or at least the easy formation of a new, related memory. The current theory accepted by neurobiologists is that long-term memories live at synapses, which are the spaces where impulses pass from one nerve cell to another. Lasting memories are dependent on a strong network of such neural connections; memories weaken or fade if the synapses degrade. In the new study, researchers at the University of California, Los Angeles, studied sea slugs' neurons in a cell culture dish. Over several days the neurons spontaneously formed a number of synapses. The scientists then administered the neurotransmitter serotonin to the neurons, causing them to create many more synapses—the same process by which a living creature would form a long-term memory. When they inhibited a memory-forming enzyme and checked the neurons after 48 hours, the number of synapses had returned to the initial number—but they were not the same individual synapses as before. Some of the original and some of the new synapses retracted to create the exact number the cells started with. The finding is surprising because it suggests that a nerve cell body “knows” how many synapses it is supposed to form, meaning it is encoding a crucial part of memory. The researchers also ran a similar experiment on live sea slugs, in which they found that a long-term memory could be totally erased (as gauged by its synapses being destroyed) and then re-formed with only a small reminder stimulus—again suggesting that some information was being stored in a neuron's body. © 2015 Scientific American
Keyword: Learning & Memory
Link ID: 20958 - Posted: 05.20.2015
by Clare Wilson

Does this qualify as irony? Our bodies need iron to be healthy – but too much could harm our brains by bringing on Alzheimer's disease. If that's the case, measuring people's brain iron levels could help identify those at risk of developing the disease. And since we already have drugs that lower iron, we may be able to put the brakes on. Despite intense efforts, the mechanisms behind this form of dementia are still poorly understood. For a long time the main suspect has been a protein called beta-amyloid, which forms distinctive plaques in the brain, but drugs that dissolve it don't result in people improving.

Not so good ferrous

Studies have suggested that people with Alzheimer's also have higher iron levels in their brains. Now it seems that high iron may hasten the disease's onset. Researchers at the University of Melbourne in Australia followed 144 older people who had mild cognitive impairment for seven years. To gauge how much iron was in their brains, they measured ferritin, a protein that binds to the metal, in their cerebrospinal fluid. For every extra nanogram per millilitre of ferritin people had at the start of the study, they were diagnosed with Alzheimer's on average three months earlier. The team also found that the biggest risk gene for Alzheimer's, ApoE4, was strongly linked with higher iron, suggesting this is why carrying the gene makes you more vulnerable. Iron is highly reactive, so it probably subjects neurons to chemical stress, says team member Scott Ayton. © Copyright Reed Business Information Ltd
Link ID: 20957 - Posted: 05.20.2015
By PAM BELLUCK

The largest analysis to date of amyloid plaques in people’s brains confirms that the presence of the substance can help predict who will develop Alzheimer’s and determine who has the disease. Two linked studies, published Tuesday in JAMA, also support the central early role in Alzheimer’s of beta amyloid, the protein that creates plaques. Data from nearly 9,500 people on five continents shows that amyloid can appear 20 to 30 years before symptoms of dementia, that the vast majority of Alzheimer’s patients have amyloid and that the ApoE4 gene, known to increase Alzheimer’s risk, greatly accelerates amyloid accumulation.

The findings also confirm that amyloid screening, by PET scan or cerebral spinal fluid test, can help identify people for clinical trials of drugs to prevent Alzheimer’s. Such screening is increasingly used in research. Experts say previous trials of anti-amyloid drugs on people with dementia failed because their brains were already too damaged or because some patients, not screened for amyloid, may not have had Alzheimer’s. “The papers indicate that amyloid imaging is important to be sure that the drugs are being tested on people who have amyloid,” said Dr. Roger Rosenberg, the director of the Alzheimer’s Disease Center at the University of Texas Southwestern Medical Center at Dallas, who wrote an editorial about the studies. Dr. Samuel Gandy, an Alzheimer’s researcher at Mount Sinai Hospital, who was not involved in the research, said doctors “can feel fairly confident that amyloid is due to Alzheimer’s.” But he and others cautioned against screening most people without dementia because there is not yet a drug that prevents or treats Alzheimer’s, and amyloid scans are expensive and typically not covered by insurance. © 2015 The New York Times Company
Link ID: 20956 - Posted: 05.20.2015
by Ashley Yeager New Caledonian crows are protective of their tools. The birds safeguard the sticks they use to find food and become even more careful with the tools as the cost of losing them goes up. Researchers videotaped captive and wild Corvus moneduloides crows and tracked what the birds did with their sticks. In between eating, the birds tucked the tools under their toes or left them in the holes they were probing. When higher up in the trees, the birds dropped the tools less often and were more likely to leave them in the holes they were probing than when they were on the ground. The finding, published May 20 in the Proceedings of the Royal Society B, shows how tool-protection tactics can prevent costly losses that could keep the crows from chowing down. © Society for Science & the Public 2000 - 2015
By James Gorman and Robin Lindsay

Before human ancestors started making stone tools by chipping off flakes to fashion hand axes and other implements, their ancestors may have used plain old stones, as animals do now. And even that simple step required the intelligence to see that a rock could be used to smash open a nut or an oyster and the muscle control to do it effectively. Researchers have been rigorous in documenting every use of tools they have found in animals, like crows, chimpanzees and dolphins. And they are now beginning to look at how tools are used by modern primates — part of the scientists’ search for clues about the evolution of the kind of delicate control required to make and use even the simplest hand axes.

Monkeys do not exhibit human dexterity with tools, according to Madhur Mangalam of the University of Georgia, one of the authors of a recent study of how capuchin monkeys in Brazil crack open palm nuts. “Monkeys are working as blacksmiths,” he said. “They’re not working as goldsmiths.” But they are not just banging away haphazardly, either. Mr. Mangalam, a graduate student who is interested in “the evolution of precise movement,” reported in a recent issue of Current Biology on how capuchins handle stones. His adviser and co-author was Dorothy M. Fragaszy, the director of the Primate Behavior Laboratory at the university.

Using video of the capuchins’ lifting rocks with both hands to slam them down on the hard palm nuts, he analyzed how high a monkey lifted a stone and how fast it brought it down. He found that the capuchins adjusted the force of a strike according to the condition of the nut after the previous strike. © 2015 The New York Times Company
by Michael Le Page

Humble fungi and a home-brewing kit could soon do what the combined might of the West failed to – halt the thriving poppy industry in Afghanistan, the source of 80 per cent of the world's opium. Genetically engineered yeasts could make it easy to produce opiates such as morphine anywhere, cutting out the international drug smugglers and making such drugs cheap and more readily available. If home-brew drugs become widespread, it would make the Sisyphean nature of stopping the supply of illegal narcotics even more obvious than it is now. "It would be as disruptive to drug enforcement policy as it would be to crime syndicates," says Tanya Bubela, a public health researcher at the University of Alberta in Edmonton, Canada. "It may force the US to rethink its war on drugs."

A growing number of drugs, scents and flavours once obtainable only from plants can now be made using genetically modified organisms. Researchers want to add opiates to that list because they are part of a family of molecules that may have useful medicinal properties (see box, below). Plant yields of many of these molecules are vanishingly small, and the chemicals are difficult and expensive to make in the lab. Getting yeast to pump them out would be far cheaper. Yeasts capable of doing this do not exist yet, but none of the researchers that New Scientist spoke to had any doubt that they soon will. "The field is moving much faster than we had previously realised," says John Dueber of the University of California, Berkeley, whose team has just created a yeast that produces the main precursor of opiates. Until recently, Dueber had thought the creation of, say, a morphine-making yeast was 10 years away. He now thinks a low-yielding strain could be made in two or three years.
Keyword: Drug Abuse
Link ID: 20951 - Posted: 05.19.2015
By Camille Bains

Imagine being able to see three times better than 20/20 vision without wearing glasses or contacts — even at age 100 or more — with the help of bionic lenses implanted in your eyes. Dr. Garth Webb, an optometrist in British Columbia who invented the Ocumetics Bionic Lens, says patients would have perfect vision and that driving glasses, progressive lenses and contact lenses would become a dim memory as the eye-care industry is transformed.

[Photo caption: Dr. Garth Webb says the bionic lens would allow people to see to infinity and replace the need for eyeglasses and contact lenses. (Darryl Dyck/Canadian Press)]

Webb says people who have the specialized lenses surgically inserted would never get cataracts because their natural lenses, which decay over time, would have been replaced. Perfect eyesight would result "no matter how crummy your eyes are," Webb says, adding the Bionic Lens would be an option for someone who depends on corrective lenses and is over about age 25, when the eye structures are fully developed. "This is vision enhancement that the world has never seen before," he says, showing a Bionic Lens, which looks like a tiny button. "If you can just barely see the clock at 10 feet, when you get the Bionic Lens you can see the clock at 30 feet away," says Webb, demonstrating how a custom-made lens that folded like a taco in a saline-filled syringe would be placed in an eye, where it would unravel itself within 10 seconds. He says the painless procedure, identical to cataract surgery, would take about eight minutes and a patient's sight would be immediately corrected. ©2015 CBC/Radio-Canada.
Link ID: 20950 - Posted: 05.19.2015
Monica Tan

The age-old question of whether human traits are determined by nature or nurture has been answered, a team of researchers say. Their conclusion? It’s a draw. By collating almost every twin study across the world from the past 50 years, researchers determined that the average variation for human traits and disease is 49% due to genetic factors and 51% due to environmental factors. University of Queensland researcher Beben Benyamin from the Queensland Brain Institute collaborated with researchers at VU University of Amsterdam to collate 2,748 studies involving more than 14.5 million pairs of twins.

“Twin studies have been conducted for more than 50 years but there is still some debate in terms of how much the variation is due to genetic or environmental factors,” Benyamin said. He said the study showed the conversation should move away from nature versus nurture, instead looking at how the two work together. “Both are important sources of variation between individuals,” he said.

While the studies averaged an almost even split between nature and nurture, there was wide variation within the 17,800 separate traits and diseases examined by the studies. For example, the risk for bipolar disorder was found to be 68% due to genetics and only 32% due to environmental factors. Weight maintenance was 63% due to genetics and 37% due to environmental factors. In contrast, risk for eating disorders was found to be 40% genetic and 60% environmental, whereas the risk for mental and behavioural disorders due to use of alcohol was 41% genetic and 59% environmental. © 2015 Guardian News and Media Limited
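For readers curious how twin studies turn correlations into percentages like these: in the classical model, heritability is estimated by comparing how strongly a trait correlates within identical (monozygotic) twin pairs versus fraternal (dizygotic) pairs, using Falconer's formula. The sketch below uses made-up correlation values for illustration, not figures from the meta-analysis described above.

```python
def falconer_decomposition(r_mz, r_dz):
    """Partition trait variance from twin correlations (Falconer's method).

    r_mz: trait correlation between identical (monozygotic) twins
    r_dz: trait correlation between fraternal (dizygotic) twins
    Returns (h2, c2, e2): heritability, shared-environment, and
    unique-environment fractions, which sum to 1.
    """
    h2 = 2 * (r_mz - r_dz)  # additive genetic variance (A)
    c2 = r_mz - h2          # shared environment (C), i.e. 2*r_dz - r_mz
    e2 = 1 - r_mz           # unique environment plus measurement error (E)
    return h2, c2, e2

# Hypothetical correlations, chosen only to show the arithmetic:
h2, c2, e2 = falconer_decomposition(r_mz=0.70, r_dz=0.45)
print(round(h2, 2), round(c2, 2), round(e2, 2))
```

With these hypothetical correlations of 0.70 and 0.45, the split comes out to 50% genetic, 20% shared environment and 30% unique environment, which is the kind of per-trait breakdown the 2,748 collated studies report.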
Keyword: Genes & Behavior
Link ID: 20948 - Posted: 05.19.2015
By Angus Chen

Jumping spiders are the disco dancers of the arachnid world. The males thump and throb their brightly patterned legs and abdomens at the ladies. Yet most of these bright colors should be impossible for the arachnids to see. That’s because their eyes have only two types of color-sensitive cone cells, which are designed to detect just ultraviolet and green light. Now, researchers report today in Current Biology that the North American genus of jumping spiders sees extra colors via a small, thin layer of red-pigmented cells partially covering the center of their retinas. The layer acts as a filter, allowing only red light to pass through and activate retinal cells just below the layer. This essentially converts a few of their green-sensitive cells into red-sensitive cells, allowing the spiders to build palettes from three colors much the same way humans do—we have blue, green, and red cone cells.

These jumping spiders have some limitations, though. Because their red filter is a small dot over the center of their retinas, they can see red only if they look directly at it. And because the filter blocks out any light that’s not red, anything that they look at has to be pretty bright before they can see any redness on it. Luckily for them, they like to spend time dancing in the sun. © 2015 American Association for the Advancement of Science
Link ID: 20947 - Posted: 05.19.2015
Jon Hamilton

When Sam Swiller used hearing aids, his musical tastes ran to AC/DC and Nirvana – loud bands with lots of drums and bass. But after Swiller got a cochlear implant in 2005, he found that sort of music less appealing. "I was getting pushed away from sounds I used to love," he says, "but also being more attracted to sounds that I never appreciated before." So he began listening to folk and alternative music, including the Icelandic singer Bjork. There are lots of stories like this among people who get cochlear implants. And there's a good reason. A cochlear implant isn't just a fancy hearing aid. "A hearing aid is really just an amplifier," says Jessica Phillips-Silver, a neuroscience researcher at Georgetown University. "The cochlear implant is actually bypassing the damaged part of the ear and delivering electrical impulses directly to the auditory nerve."

As a result, the experience of listening to music or any other sound through the ear, with or without a hearing aid, can be completely unlike the experience of listening through a cochlear implant. "You're basically remapping the audio world," Swiller says. Swiller is 39 years old and lives in Washington, D.C. He was born with an inherited disorder that caused him to lose much of his hearing by his first birthday. That was in the 1970s, and cochlear implants were still considered experimental devices. So Swiller got hearing aids. They helped, but Swiller still wasn't hearing what other people were. © 2015 NPR
Backyard Brains

For 235 years we have been trying to isolate, understand, and analyze the elusive action potential, and here we tell the story that continues today. The progress of understanding action potentials can be classed into three main endeavors:

1. Amplification

The amplifiers that gave us the first hint of the electrical impulses generated by neurons came from biological tissue itself! Scientists of the 18th and early 19th century used the contractions of muscles as "bioamplifiers" to indirectly measure neural firing. Using friction machines (spark generators), Leyden jars (primitive capacitors), or Voltaic Piles (the first batteries), electrical stimuli could be delivered to motor neurons that were still attached to muscles. The electrical stimulation would cause the nerve to fire action potentials (so people hypothesized), the muscle would then contract, and the force of contraction could be measured with a spring. With increasing electrical stimulus strength (thus more action potentials in the motor neurons), the muscle would contract with increasing force. This technique worked, but led to vigorous debates as to whether the neural tissue was actually generating its own action potentials at all, or whether the muscle contraction was just a direct result of electrical stimulation.

By the mid-19th century, galvanometers had been invented, and it was possible to see that nerves were indeed generating their own action potentials. These galvanometers exploited the then new technology of electromagnets. For example, Emil du Bois-Reymond built by hand a type of galvanometer with 24,000 turns around an iron plate. When the nerve fired action potentials, a metal needle suspended by the plate would deflect. These devices worked, but the needle movement was not fast enough to separate the 1 ms individual action potentials, and the machines took a long time to construct. © 2009-2015 Backyard Brains
Link ID: 20944 - Posted: 05.18.2015