Links for Keyword: Robotics

Follow us on Facebook or subscribe to our mailing list to receive news updates. Learn more.


Links 61 - 80 of 268

I’m fairly new to San Francisco, so I’m still building my mental database of restaurants I like. But this weekend, I know exactly where I’m heading for dinner: Nick’s Crispy Tacos. Then, when I get home, I’m kicking back to a documentary I’ve never heard of, a Mongolian drama called The Cave of the Yellow Dog. An artificially intelligent algorithm told me I’d enjoy both these things. I’d like the restaurant, the machine told me, because I prefer Mexican food and wine bars “with a casual atmosphere,” and the movie because “drama movies are in my digital DNA.” Besides, the title shows up around the web next to Boyhood, another film I like. Nara Logics, the company behind this algorithm, is the brainchild (pun intended) of its CTO and cofounder, Nathan Wilson, a former research scientist at MIT who holds a doctorate in brain and cognitive science. Wilson spent his academic career and early professional life immersed in studying neural networks—software that mimics how a human mind thinks and makes connections. Nara Logics’ brain-like platform, under development for the past five years, is the product of all that thinking. The Cambridge, Massachusetts-based company includes on its board such bigwig neuroscientists as Sebastian Seung from Princeton, Mriganka Sur from MIT, and Emily Hueske of Harvard’s Center for Brain Science. So what does all that neuroscience brain power have to offer the tech world, when so many Internet giants—from Google and Facebook to Microsoft and Baidu—already have specialized internal teams looking to push the boundaries of artificial intelligence? These behemoths use AI to bolster their online services, everything from on-the-fly translations to image recognition services. But to hear Wilson tell it, all that in-house work still leaves a large gap—namely, all the businesses and people who could benefit from access to an artificial brain but can’t build it themselves.
“We’re building a pipeline, and taking insights out of the lab to intelligent, applied use cases,” Wilson tells WIRED. “Nara is AI for the people.”

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook
Related chapters from MM:Chapter 20:
Link ID: 20967 - Posted: 05.23.2015

by Jessica Hamzelou An exoskeleton that enables movement and provides tactile feedback has helped eight paralysed people regain sensation and move previously paralysed muscles. "I FELT the ball!" yelled Juliano Pinto as he kicked off the Football World Cup in Brazil last year. Pinto, aged 29 at the time, lost the use of his lower body after a car accident in 2006. "It was the most moving moment," says Miguel Nicolelis at Duke University in North Carolina, head of the Walk Again Project, which developed the thought-controlled exoskeleton that enabled Pinto to make his kick. Since November 2013, Nicolelis and his team have been training Pinto and seven other people with similar injuries to use the exoskeleton – a robotic device that encases the limbs and converts brain signals into movement. The device also feeds sensory information to its wearer, which seems to have partially reawakened their nervous system. When Nicolelis reassessed his volunteers after a year of training, he found that all eight people had regained sensations and the ability to move muscles in their once-paralysed limbs. "Nobody expected it at all," says Nicolelis, who presented the results at the Brain Forum in Lausanne, Switzerland, on 31 March. "When we first saw the level of recovery, there was not a single person in the room with a dry eye." When a person's spinal cord is injured, the connection between body and brain can be damaged, leaving them unable to feel or move parts of their body. If a few spinal nerves remain, people can sometimes regain control over their limbs, although this can involve years of rehabilitation. © Copyright Reed Business Information Ltd.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 20805 - Posted: 04.16.2015

|By Simon Makin People can control prosthetic limbs, computer programs and even remote-controlled helicopters with their mind, all by using brain-computer interfaces. What if we could harness this technology to control things happening inside our own body? A team of bioengineers in Switzerland has taken the first step toward this cyborglike setup by combining a brain-computer interface with a synthetic biological implant, allowing a genetic switch to be operated by brain activity. It is the world's first brain-gene interface. The group started with a typical brain-computer interface, an electrode cap that can register subjects' brain activity and transmit signals to another electronic device. In this case, the device is an electromagnetic field generator; different types of brain activity cause the field to vary in strength. The next step, however, is totally new—the experimenters used the electromagnetic field to trigger protein production within human cells in an implant in mice. The implant uses a cutting-edge technology known as optogenetics. The researchers inserted bacterial genes into human kidney cells, causing them to produce light-sensitive proteins. Then they bioengineered the cells so that stimulating them with light triggers a string of molecular reactions that ultimately produces a protein called secreted alkaline phosphatase (SEAP), which is easily detectable. They then placed the human cells plus an LED light into small plastic pouches and inserted them under the skin of several mice. © 2015 Scientific American
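The control chain described above, brain activity driving a field generator, which in turn switches a light-triggered gene circuit, can be sketched in a few lines. All numbers, names, and thresholds below are invented for illustration; they are not from the Swiss team's actual system:

```python
# Toy sketch of a brain-gene interface's control chain (all parameters
# invented): measured brain activity -> electromagnetic field strength
# -> LED on/off -> light-triggered SEAP production in the implant.

def field_strength(activity_power):
    """Stronger measured brain activity drives a stronger field (arbitrary units)."""
    return min(1.0, activity_power / 50.0)

def implant_output(strength, led_threshold=0.6):
    """The LED, and hence light-triggered SEAP production, switches on
    only when the field strength crosses a threshold."""
    return "SEAP produced" if strength >= led_threshold else "no output"

# Hypothetical activity levels, e.g. relaxed vs. concentrating states.
for power in (10.0, 40.0, 55.0):
    print(power, "->", implant_output(field_strength(power)))
```

The point of the sketch is only that the interface is a chain of simple mappings: the brain signal never touches the cells directly, it just modulates a physical intermediary (the field, then light) that the engineered cells are built to sense.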

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM:Chapter 4: Development of the Brain; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 20770 - Posted: 04.08.2015

by Hal Hodson For a few days last summer, a handful of students walked through a park behind the University of Hannover in Germany. Each walked solo, but followed the same route as the others: made the same turns, walked the same distance. This was odd, because none of them knew where they were going. Instead, their steps were steered from a phone 10 paces behind them, which sent signals via Bluetooth to electrodes attached to their legs. These stimulated the students' muscles, guiding their steps without any conscious effort. Max Pfeiffer of the University of Hannover was the driver. His project directs electrical current into the students' sartorius, the longest muscle in the human body, which runs from the inside of the knee to the top of the outer thigh. When it contracts, it pulls the leg out and away from the body. To steer his test subjects left, Pfeiffer would zap their left sartorius, opening their gait and guiding them in that direction. Pfeiffer hopes his system will free people's minds up for other things as they navigate the world, allowing them to focus on their conversation or enjoy their surroundings. Tourists could keep their eyes on the sights while being imperceptibly guided around the city. Acceptance may be the biggest problem, although it is possible that the rise of wearable computing might help. Pfeiffer says the electrode's current causes a tingling sensation that diminishes the more someone uses the system. Volunteers said they were comfortable with the system taking control of their leg muscles, but only if they felt they could take control back. © Copyright Reed Business Information Ltd

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 20761 - Posted: 04.06.2015

Davide Castelvecchi Boots rigged with a simple spring-and-ratchet mechanism are the first devices that do not require power aids such as batteries to make walking more energy efficient. People walking in the boots expend 7% less energy than they do walking in normal shoes, the devices’ inventors report on 1 April in Nature [1]. That may not sound like much, but the mechanics of the human body have been shaped by millions of years of evolution, and some experts had doubted that there was room for further improvement in human locomotion, short of skating along on wheels. “It is the first paper of which I’m aware that demonstrates that a passive system can reduce energy expenditure during walking,” says Michael Goldfarb, a mechanical engineer at Vanderbilt University in Nashville, Tennessee, who develops exoskeletons for aiding people with disabilities. As early as the 1890s, inventors tried to boost the efficiency of walking by using devices such as rubber bands, says study co-author Gregory Sawicki, a biomedical engineer and locomotion physiologist at North Carolina State University in Raleigh. More recently, engineers have built unpowered exoskeletons that enable people to do tasks such as lifting heavier weights — but do not cut down the energy they expend. (Biomechanists still debate whether the running ‘blades’ made famous by South African sprinter Oscar Pistorius are more energetically efficient than human feet [2, 3].) For their device, Sawicki and his colleagues built a mechanism that parallels human physiology. When a person swings a leg forward to walk, elastic energy is stored mostly in the Achilles tendon of their standing leg. That energy is released when the standing leg's foot pushes into the ground and the heel lifts off, propelling the body forwards. “There is basically a catapult in our ankle,” Sawicki says. © 2015 Nature Publishing Group

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 20750 - Posted: 04.02.2015

By Abby Phillip Jan Scheuermann, who has quadriplegia, brings a chocolate bar to her mouth using a robot arm guided by her thoughts. Research assistant Elke Brown watches in the background. (University of Pittsburgh Medical Center) Over at the Defense Advanced Research Projects Agency, also known as DARPA, there are some pretty amazing (and often top-secret) things going on. But one notable component of a DARPA project was revealed by a Defense Department official at a recent forum, and it is the stuff of science fiction movies. According to DARPA Director Arati Prabhakar, a paralyzed woman was successfully able to use her thoughts to control an F-35 and a single-engine Cessna in a flight simulator. It's just the latest advance for one woman, 55-year-old Jan Scheuermann, who has been the subject of two years of groundbreaking neurosignaling research. First, Scheuermann began by controlling a robotic arm and accomplishing tasks such as feeding herself a bar of chocolate and giving high-fives and thumbs-ups. Then, researchers learned that -- surprisingly -- Scheuermann was able to control both right-hand and left-hand prosthetic arms with just the left motor cortex, which is typically responsible for controlling the right-hand side. After that, Scheuermann decided she was up for a new challenge, according to Prabhakar.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 20647 - Posted: 03.04.2015

Elizabeth Gibney DeepMind, the Google-owned artificial-intelligence company, has revealed how it created a single computer algorithm that can learn how to play 49 different arcade games, including the 1970s classics Pong and Space Invaders. In more than half of those games, the computer became skilled enough to beat a professional human player. The algorithm — which has generated a buzz since publication of a preliminary version in 2013 (V. Mnih et al. Preprint at http://arxiv.org/abs/1312.5602; 2013) — is the first artificial-intelligence (AI) system that can learn a variety of tasks from scratch given only the same, minimal starting information. “The fact that you have one system that can learn several games, without any tweaking from game to game, is surprising and pretty impressive,” says Nathan Sprague, a machine-learning scientist at James Madison University in Harrisonburg, Virginia. DeepMind, which is based in London, says that the brain-inspired system could also provide insights into human intelligence. “Neuroscientists are studying intelligence and decision-making, and here’s a very clean test bed for those ideas,” says Demis Hassabis, co-founder of DeepMind. He and his colleagues describe the gaming algorithm in a paper published this week (V. Mnih et al. Nature 518, 529–533; 2015). Games are to AI researchers what fruit flies are to biology — a stripped-back system in which to test theories, says Richard Sutton, a computer scientist who studies reinforcement learning at the University of Alberta in Edmonton, Canada. “Understanding the mind is an incredibly difficult problem, but games allow you to break it down into parts that you can study,” he says. But so far, most human-beating computers — such as IBM’s Deep Blue, which beat chess world champion Garry Kasparov in 1997, and the recently unveiled algorithm that plays Texas Hold ’Em poker essentially perfectly (see Nature http://doi.org/2dw; 2015) — excel at only one game. © 2015 Nature Publishing Group
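The principle behind such game-playing systems is reinforcement learning: improving behaviour from reward alone, with no game-specific instructions. DeepMind's actual algorithm is a deep Q-network trained on raw screen pixels; the sketch below is only a minimal tabular Q-learning illustration on an invented toy task, showing the same learning rule at its smallest scale:

```python
import random

# Tabular Q-learning sketch (illustrative only -- DeepMind's system uses a
# deep convolutional network; the environment and parameters here are
# invented). Toy task: a 5-cell corridor; the agent starts at cell 0 and
# earns reward 1 for reaching cell 4. Actions: 0 = left, 1 = right.

N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
random.seed(0)
for _ in range(500):  # training episodes
    s = 0
    while True:
        # Epsilon-greedy action choice: mostly exploit, occasionally explore.
        a = random.randrange(N_ACTIONS) if random.random() < EPSILON \
            else max(range(N_ACTIONS), key=lambda i: q[s][i])
        s2, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action in the next state.
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2
        if done:
            break

# After training, the greedy policy in every non-terminal state is "right".
policy = [max(range(N_ACTIONS), key=lambda i: q[s][i]) for s in range(N_STATES)]
print(policy)
```

The deep-network version replaces the lookup table `q` with a neural network that estimates action values from pixels, which is what lets one system generalize across 49 visually different games.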

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 20626 - Posted: 02.27.2015

by Hal Hodson Bionic hands are go. Three men with serious nerve damage had their hands amputated and replaced by prosthetic ones that they can control with their minds. The procedure, dubbed "bionic reconstruction", was carried out by Oskar Aszmann at the Medical University of Vienna, Austria. The men had all suffered accidents which damaged the brachial plexus – the bundle of nerve fibres that runs from the spine to the hand. Despite attempted repairs to those nerves, the arm and hand remained paralysed. "But still there are some nerve fibres present," says Aszmann. "The injury is so massive that there are only a few. This is just not enough to make the hand alive. They will never drive a hand, but they might drive a prosthetic hand." This approach works because the prosthetic hands come with their own power source. Aszmann's patients plug their hands in to charge every night. Relying on electricity from the grid to power the hand means all the muscles and nerves need do is send the right signals to a prosthetic. Before the operation, Aszmann's patients had to prepare their bodies and brains. First he transplanted leg muscle into their arms to boost the signal from the remaining nerve fibres. Three months later, after the nerves had grown into the new muscle, the men started training their brains. © Copyright Reed Business Information Ltd.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 20617 - Posted: 02.26.2015

By NICK BILTON Ebola sounds like the stuff of nightmares. Bird flu and SARS also send shivers down my spine. But I’ll tell you what scares me most: artificial intelligence. The first three, with enough resources, humans can stop. The last, which humans are creating, could soon become unstoppable. Before we get into what could possibly go wrong, let me first explain what artificial intelligence is. Actually, skip that. I’ll let someone else explain it: Grab an iPhone and ask Siri about the weather or stocks. Or tell her “I’m drunk.” Her answers are artificially intelligent. Right now these artificially intelligent machines are pretty cute and innocent, but as they are given more power in society, these machines may not take long to spiral out of control. In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions in damage. Or a driverless car freezes on the highway because a software update goes awry. But the upheavals can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid the body of cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease. Nick Bostrom, author of the book “Superintelligence,” lays out a number of petrifying doomsday scenarios. One envisions self-replicating nanobots, which are microscopic robots designed to make copies of themselves. In a positive situation, these bots could fight diseases in the human body or eat radioactive material on the planet. But, Mr. Bostrom says, a “person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth.” © 2014 The New York Times Company

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 20: ; Chapter 13: Memory and Learning
Link ID: 20283 - Posted: 11.06.2014

|By Sandra Upson Jan Scheuermann is not your average experimental subject. Diagnosed with spinocerebellar degeneration, she is only able to move her head and neck. The paralysis, which began creeping over her muscles in 1996, has been devastating in many ways. Yet two years ago she seized an opportunity to turn her personal liability into an extraordinary asset for neuroscience. In 2012 Scheuermann elected to undergo brain surgery to implant two arrays of electrodes on her motor cortex, a band of tissue on the surface of the brain. She did so as a volunteer in a multi-year study at the University of Pittsburgh to develop a better brain-computer interface. When she visits the lab, researchers hook up her brain to a robotic arm and hand, which she practices moving using her thoughts alone. The goal is to eventually allow other paralyzed individuals to regain function by wiring up their brains directly to a computer or prosthetic limb. The electrodes in her head record the firing patterns of about 150 of her neurons. Specific patterns of neuronal activity encode her desire to perform different movements, such as swinging the arm to the left or clasping the fingers around a cup. Two thick cables relay the data from her neurons to a computer, where software can identify Scheuermann’s intentions. The computer can then issue appropriate commands to move the robotic limb. On a typical workday, Jan Scheuermann arrives at the university around 9:15 am. Using her chin, she maneuvers her electric wheelchair into a research lab headed by neuroscientist Andrew Schwartz and settles in for a day of work. Scientific American Mind spoke to Scheuermann to learn more about her experience as a self-proclaimed “guinea pig extraordinaire.” © 2014 Scientific American,

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 20276 - Posted: 11.04.2014

Linda Carroll TODAY contributor For years Larry Hester lived in darkness, his sight stolen by a disease that destroyed the photoreceptor cells in his retinas. But last week, through the help of a “bionic eye,” Hester got a chance to once again glimpse a bit of the world around him. Hester is the seventh patient to receive an FDA-approved device that translates video signals into data the optic nerve can process. The images Hester and others “see” will be far from full sight, but experts hope it will be enough to give a little more autonomy to those who had previously been completely blind. Hester’s doctors at Duke University Eye Center believe that as time goes on the 66-year-old tire salesman from Raleigh, N.C., will be able to “see” more and more. After only five days, there has been remarkable progress. “I hope that [after some practice] he will be able to do things he can’t do today: maybe walk around a little more independently, see doorways or the straight line of a curb. We don’t expect him to be able to make out figures on TV. But we hope he’ll be more visually connected,” said Dr. Paul Hahn, an assistant professor of ophthalmology at the university in Durham.

Related chapters from BN: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 20261 - Posted: 11.01.2014

By JOHN MARKOFF STANFORD, Calif. — In factories and warehouses, robots routinely outdo humans in strength and precision. Artificial intelligence software can drive cars, beat grandmasters at chess and leave “Jeopardy!” champions in the dust. But machines still lack a critical element that will keep them from eclipsing most human capabilities anytime soon: a well-developed sense of touch. Consider Dr. Nikolas Blevins, a head and neck surgeon at Stanford Health Care who routinely performs ear operations requiring that he shave away bone deftly enough to leave an inner surface as thin as the membrane in an eggshell. Dr. Blevins is collaborating with the roboticists J. Kenneth Salisbury and Sonny Chan on designing software that will make it possible to rehearse these operations before performing them. The program blends X-ray and magnetic resonance imaging data to create a vivid three-dimensional model of the inner ear, allowing the surgeon to practice drilling away bone, to take a visual tour of the patient’s skull and to virtually “feel” subtle differences in cartilage, bone and soft tissue. Yet no matter how thorough or refined, the software provides only the roughest approximation of Dr. Blevins’s sensitive touch. “Being able to do virtual surgery, you really need to have haptics,” he said, referring to the technology that makes it possible to mimic the sensations of touch in a computer simulation. The software’s limitations typify those of robotics, in which researchers lag in designing machines to perform tasks that humans routinely do instinctively. Since the first robotic arm was designed at the Stanford Artificial Intelligence Laboratory in the 1960s, robots have learned to perform repetitive factory work, but they can barely open a door, pick themselves up if they fall, pull a coin out of a pocket or twirl a pencil. © 2014 The New York Times Company

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM:Chapter 5: The Sensorimotor System; Chapter 5: The Sensorimotor System
Link ID: 20023 - Posted: 09.02.2014

By Rachel Feltman Bioengineers have created the most realistic fake brain tissue ever – and it’s built like a jelly doughnut. The 3-D tissue, described in a paper published Monday in Proceedings of the National Academy of Sciences, is so structurally similar to a real rat brain (a common substitute for human brains in the lab) that it could help scientists answer longstanding questions about brain injuries and disease. Currently, the best way to study brain tissue is to grow neurons in a petri dish, but those neurons can only be grown flat. A real brain contains a complicated structure of 3-D tissue. Simply giving the neurons room to grow in three dimensions didn’t prove successful: While neurons will grow into more complicated structures in the right kind of gel, they don’t survive very long or mimic the structure of a real brain. Led by David Kaplan, the director of the Tissue Engineering Resource Center at Tufts University, researchers developed a new combination of materials to mimic the gray and white matter of the brain. The new model relies on a doughnut-shaped, spongy scaffold made of silk proteins with a collagen-based gel at the center. The outer scaffold layer, which is filled with rat neurons, acts as the gray matter of the brain. As the neurons grew networks throughout the scaffold, they sent branches out across the gel-filled center to connect with neurons on the other side. And that configuration is about as brain-like as lab-grown tissue can get. The basic structure can be reconfigured, too.

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 4: Development of the Brain
Link ID: 19944 - Posted: 08.12.2014

by Aviva Rutkin What can the human brain do for a computer? There's at least one team of researchers that thinks it might have the answer. Working at IBM Research–Almaden in San Jose, California, they have just released more details of TrueNorth, a computer chip composed of one million digital "neurons". Under way for several years, the project abandons traditional computer architecture for one inspired by biological synapses and axons. The latest results, published in Science, provide a timely reminder of the promise of brain-inspired computing. The human brain still crushes any modern machines when it comes to tasks like vision or voice recognition. What's more, it manages to do so with less energy than it takes to power a light bulb. Building those qualities into a computer is an alluring prospect to many researchers, like Kwabena Boahen of Stanford University in California. "The first time I learned how computers worked, I thought it was ridiculous," he says. "I basically felt there had to be a better way." Aping the brain's structure could help us build computers that are far more powerful and efficient than today's, says TrueNorth team leader Dharmendra Modha. "We want to approximate the anatomy and physiology, the structure and dynamics of the brain, within today's silicon technology," he says. "I think that the chip and the associated ecosystem have the potential to transform science, technology, business, government and society." But how best to go about building a proper artificial brain is a matter of debate. © Copyright Reed Business Information Ltd
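The digital "neurons" in neuromorphic chips of this kind are typically simplified spiking models, such as variants of the leaky integrate-and-fire neuron. The sketch below (with invented parameters, not TrueNorth's actual neuron equations) shows the basic integrate, spike, and reset behaviour that distinguishes such hardware from conventional logic:

```python
# A leaky integrate-and-fire neuron, a simplified spiking model in the
# spirit of the digital "neurons" used by neuromorphic chips (parameters
# here are invented for illustration; TrueNorth's actual model differs).

def lif_run(input_current, leak=0.9, threshold=1.0):
    """Integrate input over time; emit a spike (1) whenever the membrane
    potential crosses threshold, then reset the potential to zero."""
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i          # leaky integration of synaptic input
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# A constant weak input makes the neuron fire at a regular, low rate:
# information is carried in spike timing rather than continuous values.
print(lif_run([0.4] * 10))  # -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because such a neuron does work only when a spike arrives or fires, large arrays of them can sit mostly idle, which is one reason event-driven chips can run on so little power.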

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 4: Development of the Brain
Link ID: 19932 - Posted: 08.09.2014

By Jim Tankersley COLUMBUS, Ohio — First they screwed the end of the gray cord into the metal silo rising out of Ian Burkhart’s skull. Later they laid his right forearm across two foam cylinders, and they wrapped it with thin strips that looked like film from an old home movie camera. They ran him through some practice drills, and then it was time for him to try. If he succeeded at this next task, it would be science fiction come true: His thoughts would bypass his broken spinal cord. With the help of an algorithm and some electrodes, he would move his once-dead limb again — a scientific first. “Ready?” the young engineer, Nick Annetta, asked from the computer to his left. “Three. Two. One.” Burkhart, 23, marshaled every neuron he could muster, and he thought about his hand. The last time the hand obeyed him, it was 2010 and Burkhart was running into the Atlantic Ocean. The hand had gripped the steering wheel as he drove the van from Ohio University to North Carolina’s Outer Banks, where he and friends were celebrating the end of freshman year. The hand unclenched to drop his towel on the sand. Burkhart splashed into the waves, the hand flying above his head, the ocean warm around his feet, the sun roasting his arms, and he dived. In an instant, he felt nothing. Not his hand. Not his legs. Only the breeze drying the saltwater on his face.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 19770 - Posted: 06.25.2014

THE star of the World Cup may not be able to bend it like Beckham, but they might be able to kick a ball using the power of their mind. If all goes to plan, a paralysed young adult will use an exoskeleton controlled by their thoughts to take the first kick of the football tournament in Thursday's opening ceremony in São Paulo, Brazil. The exoskeleton belongs to the Walk Again Project, an international collaboration using technology to overcome paralysis. Since December, the project has been training eight paralysed people to use the suit, which supports the lower body and is controlled by brain activity detected by a cap of electrodes placed over the head. The brain signals are sent to a computer, which converts them into movement. Lead robotic engineer Gordon Cheng, at the Technical University of Munich, Germany, says that there is a phenomenal amount of technology within the exoskeleton, including sensors that feed information about pressure and temperature back to the arms of the user, which still have sensation. The team hopes this will replicate to some extent the feeling of kicking a ball. The exoskeleton isn't the only technology on show in Brazil. FIFA has announced that fans will decide who is man of the match by voting for their favourite player on Twitter during the second half of each game using #ManOfTheMatch. © Copyright Reed Business Information Ltd.

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 19720 - Posted: 06.12.2014

By Kelly Servick During the World Cup next week, there may be 1 minute during the opening ceremony when the boisterous stadium crowd in São Paulo falls silent: when a paraplegic young person wearing a brain-controlled, robotic exoskeleton attempts to rise from a wheelchair, walk several steps, and kick a soccer ball. The neuroscientist behind the planned event, Miguel Nicolelis, is familiar with the spotlight. His lab at Duke University in Durham, North Carolina, pioneered brain-computer interfaces, using surgically implanted electrodes to read neural signals that can control robotic arms. Symbolically, the project is a homecoming for Nicolelis. He has portrayed it as a testament to the scientific progress and potential of his native Brazil, where he founded and directs the International Institute of Neuroscience of Natal. The press has showered him with attention, and the Brazilian government chipped in nearly $15 million in support. But scientifically, the project is a departure. Nicolelis first intended the exoskeleton to read signals from implanted electrodes, but decided instead to use a noninvasive, EEG sensor cap. That drew skepticism from Nicolelis’s critics—and he has a few—that the system wouldn’t really be a scientific advance. Others have developed crude EEG-based exoskeletons, they note, and it will be impossible to tell from the demo how this system compares. A bigger concern is that the event could generate false hope for paralyzed patients and give the public a skewed impression of the field’s progress. © 2014 American Association for the Advancement of Science

Related chapters from BN: Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 19698 - Posted: 06.06.2014

By MICHAEL BEHAR One morning in May 1998, Kevin Tracey converted a room in his lab at the Feinstein Institute for Medical Research in Manhasset, N.Y., into a makeshift operating theater and then prepped his patient — a rat — for surgery. A neurosurgeon, and also the Feinstein Institute’s president, Tracey had spent more than a decade searching for a link between nerves and the immune system. His work led him to hypothesize that stimulating the vagus nerve with electricity would alleviate harmful inflammation. “The vagus nerve is behind the artery where you feel your pulse,” he told me recently, pressing his right index finger to his neck. The vagus nerve and its branches conduct nerve impulses — called action potentials — to every major organ. But communication between nerves and the immune system was considered impossible, according to the scientific consensus in 1998. Textbooks from the era taught, he said, “that the immune system was just cells floating around. Nerves don’t float anywhere. Nerves are fixed in tissues.” It would have been “inconceivable,” he added, to propose that nerves were directly interacting with immune cells. Nonetheless, Tracey was certain that an interface existed, and that his rat would prove it. After anesthetizing the animal, Tracey cut an incision in its neck, using a surgical microscope to find his way around his patient’s anatomy. With a hand-held nerve stimulator, he delivered several one-second electrical pulses to the rat’s exposed vagus nerve. He stitched the cut closed and gave the rat a bacterial toxin known to promote the production of tumor necrosis factor, or T.N.F., a protein that triggers inflammation in animals, including humans. “We let it sleep for an hour, then took blood tests,” he said. The bacterial toxin should have triggered rampant inflammation, but instead the production of tumor necrosis factor was blocked by 75 percent. “For me, it was a life-changing moment,” Tracey said.
What he had demonstrated was that the nervous system was like a computer terminal through which you could deliver commands to stop a problem, like acute inflammation, before it starts, or repair a body after it gets sick. “All the information is coming and going as electrical signals,” Tracey said. For months, he’d been arguing with his staff, whose members considered this rat project of his harebrained. “Half of them were in the hallway betting against me,” Tracey said. © 2014 The New York Times Company

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 19649 - Posted: 05.24.2014

By NATASHA SINGER Joseph J. Atick cased the floor of the Ronald Reagan Building and International Trade Center in Washington as if he owned the place. In a way, he did. He was one of the organizers of the event, a conference and trade show for the biometrics security industry. Perhaps more to the point, a number of the wares on display, like an airport face-scanning checkpoint, could trace their lineage to his work. A physicist, Dr. Atick is one of the pioneer entrepreneurs of modern face recognition. Having helped advance the fundamental face-matching technology in the 1990s, he went into business and promoted the systems to government agencies looking to identify criminals or prevent identity fraud. “We saved lives,” he said during the conference in mid-March. “We have solved crimes.” Thanks in part to his boosterism, the global business of biometrics — using people’s unique physiological characteristics, like their fingerprint ridges and facial features, to learn or confirm their identity — is booming. It generated an estimated $7.2 billion in 2012, according to reports by Frost & Sullivan. Making his rounds at the trade show, Dr. Atick, a short, trim man with an indeterminate Mediterranean accent, warmly greeted industry representatives at their exhibition booths. Once he was safely out of earshot, however, he worried aloud about what he was seeing. What were those companies’ policies for retaining and reusing consumers’ facial data? Could they identify individuals without their explicit consent? Were they running face-matching queries for government agencies on the side? Now an industry consultant, Dr. Atick finds himself in a delicate position. While promoting and profiting from an industry that he helped foster, he also feels compelled to caution against its unfettered proliferation. 
He isn’t so much concerned about government agencies that use face recognition openly for specific purposes — for example, the many state motor vehicle departments that scan drivers’ faces as a way to prevent license duplications and fraud. Rather, what troubles him is the potential exploitation of face recognition to identify ordinary and unwitting citizens as they go about their lives in public. Online, we are all tracked. But to Dr. Atick, the street remains a haven, and he frets that he may have abetted a technology that could upend the social order. © 2014 The New York Times Company

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Language and Lateralization; Chapter 14: Attention and Higher Cognition
Link ID: 19630 - Posted: 05.19.2014

By Dominic Basulto In last weekend’s Wall Street Journal, two leading brain researchers conjectured that as a result of rapid breakthroughs in fields such as molecular biology and neuroscience, one day “brain implants” will be just about as common as getting a bit of plastic surgery is today. In short, today’s tummy tucks are tomorrow’s brain tucks. Similar to what you’d expect from watching science fiction films such as “The Matrix,” these brain implants would enable you to learn foreign languages effortlessly, upgrade your memory capabilities, and, yes, help you to know Kung Fu. Vinton Cerf argues that today’s Internet (think Google) is already a form of cognitive implant, helping us to learn the answer to just about anything within seconds. If computing power continues to increase at the same rate as it has for the past 50 years, it is likely that a single computer will have the computing capacity of a human brain by 2023. By 2045, a single computer could have the processing capability of all human brains put together. Just think what you’d be able to use Google to do then. You wouldn’t even need to type in a search query; your brain would already know the answer. Of course, the ability to create these brain implants raises a number of philosophical, ethical and moral questions. If you’re a young student having a tough time in a boring class, why not just buy a brain module that simulates the often repetitive nature of learning? If you’re a parent of a child looking to get into a top university, why not buy a brain implant as a way to gain an advantage over children from less privileged backgrounds, especially when it’s SAT time? Instead of the digital divide, we may be talking about the cognitive divide at some point in the next two decades. Some parents would be able to afford a 99th-percentile brain for their children, while others wouldn’t. © 1996-2014 The Washington Post
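The 2023 and 2045 projections above are a straightforward exponential-growth calculation: how many doublings separate today’s computing capacity from one brain’s worth, or from all brains combined. A minimal sketch of that arithmetic follows; the doubling period and every capacity figure in it are illustrative assumptions, not numbers taken from the article.

```python
import math

# Back-of-the-envelope model of Moore's-law-style growth. All constants
# below are illustrative assumptions, not figures from the article.

def years_to_reach(target_ops, current_ops, doubling_period_years=2.0):
    """Years until a capacity starting at current_ops, doubling every
    doubling_period_years, first reaches target_ops."""
    if current_ops >= target_ops:
        return 0.0
    doublings = math.log2(target_ops / current_ops)
    return doublings * doubling_period_years

ONE_BRAIN = 1e16               # assumed ops/sec of one human brain
ALL_BRAINS = ONE_BRAIN * 7e9   # roughly seven billion brains (2014 population)

# Assuming a 2014 starting capacity of 1e13 ops/sec:
print(years_to_reach(ONE_BRAIN, 1e13))    # years to one brain's capacity
print(years_to_reach(ALL_BRAINS, 1e13))   # years to all brains combined
```

Whether dates like 2023 and 2045 actually fall out of this depends entirely on the starting capacity and doubling period one assumes.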

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook
Related chapters from MM:Chapter 20:
Link ID: 19391 - Posted: 03.21.2014