Links for Keyword: Robotics



Links 1 - 20 of 241

Siobhan Roberts In May 2013, the mathematician Carina Curto attended a workshop in Arlington, Virginia, on “Physical and Mathematical Principles of Brain Structure and Function” — a brainstorming session about the brain, essentially. The month before, President Obama had issued one of his “Grand Challenges” to the scientific community in announcing the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies), aimed at spurring a long-overdue revolution in understanding our three-pound organ upstairs. In advance of the workshop, the hundred or so attendees each contributed to a white paper addressing the question of what they felt was the most significant obstacle to progress in brain science. Answers ran the gamut — some probed more generally, citing the brain’s “utter complexity,” while others delved into details about the experimental technology. Curto, an associate professor at Pennsylvania State University, took a different approach in her entry, offering an overview of the mathematical and theoretical technology: A major obstacle impeding progress in brain science is the lack of beautiful models. Let me explain. … Many will agree that the existing (and impending) deluge of data in neuroscience needs to be accompanied by advances in computational and theoretical approaches — for how else are we to “make sense” of these data? What such advances should look like, however, is very much up to debate. … How much detail should we be including in our models? … How well can we defend the biological realism of our theories? All Rights Reserved © 2018

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 25108 - Posted: 06.20.2018

By Robert F. Service Prosthetics may soon take on a whole new feel. That’s because researchers have created a new type of artificial nerve that can sense touch, process information, and communicate with other nerves much like those in our own bodies do. Future versions could add sensors to track changes in texture, position, and different types of pressure, leading to potentially dramatic improvements in how people with artificial limbs—and someday robots—sense and interact with their environments. “It’s a pretty nice advance,” says Robert Shepherd, an organic electronics expert at Cornell University. Not only are the soft, flexible, organic materials used to make the artificial nerve ideal for integrating with pliable human tissue, but they are also relatively cheap to manufacture in large arrays, Shepherd says. Modern prosthetics are already impressive: Some allow amputees to control arm movement with just their thoughts; others have pressure sensors in the fingertips that help wearers control their grip without the need to constantly monitor progress with their eyes. But our natural sense of touch is far more complex, integrating thousands of sensors that track different types of pressure, such as soft and forceful touch, along with the ability to sense heat and changes in position. This vast amount of information is ferried by a network that passes signals through local clusters of nerves to the spinal cord and ultimately the brain. Only when the signals combine to become strong enough do they make it up the next link in the chain. © 2018 American Association for the Advancement of Science.
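
To make that relay logic concrete, here is a minimal sketch of the thresholded signalling the excerpt describes: several touch sensors feed a local cluster, and the combined signal moves up the chain only when it is strong enough. The function, thresholds and values are illustrative assumptions, not the published circuit.

```python
# A minimal sketch (not the published device) of the thresholded relay
# described above: inputs from a local cluster of touch sensors are summed,
# and the cluster forwards a signal only when the combined input is strong
# enough. All names and numbers here are illustrative assumptions.

def cluster_output(sensor_pressures, threshold=1.0):
    """Sum the inputs from a local cluster of sensors and relay the
    combined signal only if it crosses the firing threshold."""
    combined = sum(sensor_pressures)
    return combined if combined >= threshold else 0.0

# A chain of clusters: each stage must be strong enough to pass its signal on.
stage1 = cluster_output([0.3, 0.5, 0.4])                # distributed soft touch
stage2 = cluster_output([stage1, 0.2], threshold=1.2)   # next link in the chain
print(stage1, stage2)  # 1.2 clears stage 1; stage 2 also fires (1.4 >= 1.2)
```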

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 5: The Sensorimotor System
Link ID: 25048 - Posted: 06.01.2018

By Matthew Hutson As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing? Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated that we might expect an intelligent machine to suffer some of the same mental problems people do.
Q: Why do you think AIs might get depressed and hallucinate?
A: I’m drawing on the field of computational psychiatry, which assumes we can learn about a patient who’s depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn’t an AI be subject to the sort of things that go wrong with patients?
Q: Might the mechanism be the same as it is in humans?
A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong. © 2018 American Association for the Advancement of Science

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity; Chapter 16: Psychopathology: Biological Basis of Behavior Disorders
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 12: Psychopathology: The Biology of Behavioral Disorders
Link ID: 24843 - Posted: 04.10.2018

BCIs have deep roots. In the 18th century Luigi Galvani discovered the role of electricity in nerve activity when he found that applying voltage could cause a dead frog’s legs to twitch. In the 1920s Hans Berger used electroencephalography to record human brain waves. In the 1960s José Delgado theatrically used a brain implant to stop a charging bull in its tracks. One of the field’s father figures is still hard at work in the lab. Eberhard Fetz was a post-doctoral researcher at the University of Washington in Seattle when he decided to test whether a monkey could control the needle of a meter using only its mind. A paper based on that research, published in 1969, showed that it could. Dr Fetz tracked down the movement of the needle to the firing rate of a single neuron in the monkey’s brain. The animal learned to control the activity of that single cell within two minutes, and was also able to switch to control a different neuron. Dr Fetz disclaims any great insights in setting up the experiment. “I was just curious, and did not make the association with potential uses of robotic arms or the like,” he says. But the effect of his paper was profound. It showed both that volitional control of a BCI was possible, and that the brain was capable of learning how to operate one without any help. Some 48 years later, Dr Fetz is still at the University of Washington, still fizzing with energy and still enthralled by the brain’s plasticity. He is particularly interested in the possibility of artificially strengthening connections between cells, and perhaps forging entirely new ones.
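
The closed loop in that 1969 experiment can be sketched in a few lines: a single neuron's firing rate deflects a meter needle, and crossing a target position triggers a reward. Everything below, the gains, rates and the crude learning rule, is an illustrative assumption, not Fetz's actual protocol.

```python
# A toy sketch of the closed loop in Fetz's 1969 experiment: one neuron's
# firing rate drives a meter needle, and a reward is delivered when the
# needle crosses a target. All numbers and the update rule are assumptions.
import random

random.seed(1)
firing_rate = 10.0   # spikes per second (assumed baseline)
GAIN = 2.0           # needle deflection per spike/s (assumed)
TARGET = 40.0        # needle position that earns a reward

for trial in range(20):
    needle = GAIN * firing_rate
    rewarded = needle >= TARGET
    print(f"trial {trial:2d}: rate={firing_rate:5.1f} Hz, "
          f"needle={needle:5.1f}, rewarded={rewarded}")
    # Crude stand-in for operant conditioning: rewarded rates drift upward
    # faster, with some trial-to-trial variability.
    firing_rate += (1.0 if rewarded else 0.3) + random.gauss(0, 0.5)
```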

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 24839 - Posted: 04.09.2018

Sara Reardon Superconducting computing chips modelled after neurons can process information faster and more efficiently than the human brain. That achievement, described in Science Advances on 26 January, is a key benchmark in the development of advanced computing devices designed to mimic biological systems. And it could open the door to more natural machine-learning software, although many hurdles remain before it could be used commercially. Artificial intelligence software has increasingly begun to imitate the brain. Algorithms such as Google’s automatic image-classification and language-learning programs use networks of artificial neurons to perform complex tasks. But because conventional computer hardware was not designed to run brain-like algorithms, these machine-learning tasks require orders of magnitude more computing power than the human brain does. “There must be a better way to do this, because nature has figured out a better way to do this,” says Michael Schneider, a physicist at the US National Institute of Standards and Technology (NIST) in Boulder, Colorado, and a co-author of the study. NIST is one of a handful of groups trying to develop ‘neuromorphic’ hardware that mimics the human brain in the hope that it will run brain-like software more efficiently. In conventional electronic systems, transistors process information at regular intervals and in precise amounts — either 1 or 0 bits. But neuromorphic devices can accumulate small amounts of information from multiple sources, alter it to produce a different type of signal and fire a burst of electricity only when needed — just as biological neurons do. As a result, neuromorphic devices require less energy to run. © 2018 Macmillan Publishers Limited
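
The accumulate-and-fire behaviour described here is essentially the textbook leaky integrate-and-fire model. A minimal sketch, with parameters that are illustrative assumptions rather than the NIST device's values:

```python
# A leaky integrate-and-fire sketch of the behaviour the article attributes
# to neuromorphic hardware: inputs accumulate, leak away over time, and a
# spike is emitted only when a threshold is reached. Parameters are assumed.

def simulate(inputs, threshold=1.0, leak=0.9):
    state, spikes = 0.0, []
    for x in inputs:            # x: summed input arriving this time step
        state = state * leak + x
        if state >= threshold:
            spikes.append(1)    # fire a burst only when needed...
            state = 0.0         # ...then reset
        else:
            spikes.append(0)
    return spikes

print(simulate([0.3, 0.3, 0.3, 0.0, 0.9, 0.2]))  # -> [0, 0, 0, 0, 1, 0]
```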

Related chapters from BN8e: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 13: Memory, Learning, and Development
Link ID: 24579 - Posted: 01.27.2018

Jules Montague Steve Thomas and I are talking about brain implants. Bonnie Tyler’s Holding Out For a Hero is playing in the background and for a moment I almost forget that a disease has robbed Steve of his speech. The conversation breaks briefly; now I see his wheelchair, his ventilator, his hospital bed. Steve, a software engineer, was diagnosed with ALS (amyotrophic lateral sclerosis, a type of motor neurone disease) aged 50. He knew it was progressive and incurable; that he would soon become unable to move and, in his case, speak. He is using eye-gaze technology to tell me this (and later to turn off the sound of Bonnie Tyler); cameras pick up light reflection from his eye as he scans a screen. Movements of his pupils are translated into movements of a cursor through infrared technology and the cursor chooses letters or symbols. A speech-generating device transforms these written words into spoken ones – and, in turn, sentences and stories form. Eye-gaze devices allow some people with limited speech or hand movements to communicate, use environmental controls, compose music, and paint. That includes patients with ALS (up to 80% have communication difficulties), as well as people with cerebral palsy, strokes, multiple sclerosis and spinal cord injuries. It’s a far cry from Elle editor-in-chief Jean-Dominique Bauby, locked-in by a stroke in 1995, painstakingly blinking through letters on an alphabet board. His memoir, written at one word every two minutes, later became a film, The Diving Bell and the Butterfly. Although some still use low-tech options (not everyone can meet the physical or cognitive requirements for eye-gaze systems; occasionally, locked-in patients can blink but cannot move their eyes), speech-to-text and text-to-speech functionality on smartphones and tablets has revolutionised communication. © 2017 Guardian News and Media Limited
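
The selection mechanism, a gaze dwelling on a key until it is chosen, can be sketched as follows. The frame counts and function names are illustrative assumptions, not any vendor's implementation.

```python
# A simplified sketch of dwell-based eye-gaze typing: the tracker reports
# which on-screen key the gaze falls on, and a letter is selected once the
# gaze rests there long enough. Frame rate and dwell time are assumed.

DWELL_FRAMES = 3  # frames the gaze must rest on one key (assumed ~0.5 s)

def type_by_gaze(gaze_samples):
    """gaze_samples: sequence of key labels the tracker maps the gaze onto."""
    text, last, run = [], None, 0
    for key in gaze_samples:
        run = run + 1 if key == last else 1
        last = key
        if run == DWELL_FRAMES:
            text.append(key)  # dwelt long enough: select the letter
            run = 0           # a repeated letter needs a fresh dwell
    return "".join(text)

print(type_by_gaze(list("hhheeellllll") + list("ooo")))  # -> "hello"
```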

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 24200 - Posted: 10.16.2017

By Andrew Wagner Although it’s a far cry from the exosuits of science fiction, researchers have developed a robotic exoskeleton that can help stroke victims regain use of their legs. Nine out of 10 stroke patients are afflicted with partial paralysis, leaving some with an abnormal gait. The exosuit works by pulling cords attached to a shoe insole, providing torque to the ankle and correcting the abnormal walking motion. With the suit providing assistance to their joints, the stroke victims are able to maintain their balance, and walk similarly to the way they had prior to their paralysis, the team reports today in Science Translational Medicine. The exosuit is an adaptation of a previous design developed for the Defense Advanced Research Projects Agency Warrior Web program, a Department of Defense plan to develop assistive exosuits for military applications. Although similar mechanical devices have been built in the past to assist in gait therapy, these were bulky and had to be kept tethered to a power source. This new suit is light enough that with a decent battery, it could be used to help patients walk over terrain as well, not just on a treadmill. The researchers say that although the technology needs long-term testing, it could start to decrease the time it takes for stroke patients to recover in the near future. © 2017 American Association for the Advancement of Science
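
A schematic of that control idea: pull the cable only during the push-off portion of the gait cycle, converting cable force into ankle torque through a short moment arm. The phase window, force and moment arm below are illustrative assumptions, not the published controller.

```python
# A schematic control loop for the kind of soft exosuit described above:
# during late stance the actuator pulls a cable anchored at the shoe insole,
# adding push-off torque at the ankle. All constants are assumptions.

MOMENT_ARM_M = 0.05   # assumed lever arm of the cable about the ankle (m)

def assist_torque(gait_phase, max_cable_force=300.0):
    """Return assistive ankle torque (N*m) for a gait phase in [0, 1),
    where 0 is heel strike. Assist only during late stance (push-off)."""
    if 0.4 <= gait_phase < 0.65:                  # assumed push-off window
        # Ramp the cable force up and back down inside the window.
        ramp = 1.0 - abs((gait_phase - 0.525) / 0.125)
        return max_cable_force * max(ramp, 0.0) * MOMENT_ARM_M
    return 0.0

for phase in [0.0, 0.45, 0.525, 0.6, 0.8]:
    print(f"phase {phase:.3f}: {assist_torque(phase):5.2f} N*m")
```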

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 23881 - Posted: 07.27.2017

By Sam Wong People who have had amputations can control a virtual avatar using their imagination alone, thanks to a system that uses a brain scanner. Brain-computer interfaces, which translate neuron activity into computer signals, have been advancing rapidly, raising hopes that such technology can help people overcome disabilities such as paralysis or lost limbs. But it has been unclear how well this might work for people who have had limbs removed some time ago, as the brain areas that previously controlled these may become less active or repurposed for other uses over time. Ori Cohen at IDC Herzliya, in Israel, and colleagues have developed a system that uses an fMRI brain scanner to read the brain signals associated with imagining a movement. To see if it can work a while after someone has had a limb removed, they recruited three volunteers who had had an arm removed between 18 months and two years earlier, and four people who have not had an amputation. While lying in the fMRI scanner, the volunteers were shown an avatar on a screen with a path ahead of it, and instructed to move the avatar along this path by imagining moving their feet to move forward, or their hands to turn left or right. The people who had had arm amputations were able to do this just as well with their missing hand as they were with their intact hand. Their overall performance on the task was almost as good as that of the people who had not had an amputation. © Copyright New Scientist Ltd.
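
The decoding scheme reduces to mapping the most active motor region to an avatar command. A toy sketch, with the region names and the winner-take-all rule as illustrative assumptions:

```python
# A minimal sketch of the decoding described above: imagined foot movement
# drives the avatar forward, imagined left/right hand movement turns it.
# Here the "fMRI" input is just a dict of region activations; the region
# names and the winner-take-all rule are illustrative assumptions.

def decode_command(roi_activity):
    """roi_activity: activation estimates for motor regions of interest."""
    region = max(roi_activity, key=roi_activity.get)
    return {"feet": "FORWARD", "left_hand": "TURN_LEFT",
            "right_hand": "TURN_RIGHT"}[region]

scan = {"feet": 0.2, "left_hand": 0.9, "right_hand": 0.1}
print(decode_command(scan))  # -> TURN_LEFT
```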

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 23770 - Posted: 06.24.2017

By Matthew Hutson Artificial neural networks, computer algorithms that take inspiration from the human brain, have demonstrated fancy feats such as detecting lies, recognizing faces, and predicting heart attacks. But most computers can’t run them efficiently. Now, a team of engineers has designed a computer chip that uses beams of light to mimic neurons. Such “optical neural networks” could make any application of so-called deep learning—from virtual assistants to language translators—many times faster and more efficient. “It works brilliantly,” says Daniel Brunner, a physicist at the FEMTO-ST Institute in Besançon, France, who was not involved in the work. “But I think the really interesting things are yet to come.” Most computers work by using a series of transistors, gates that allow electricity to pass or not pass. But decades ago, physicists realized that light might make certain processes more efficient—for example, building neural networks. That’s because light waves can travel and interact in parallel, allowing them to perform lots of functions simultaneously. Scientists have used optical equipment to build simple neural nets, but these setups required tabletops full of sensitive mirrors and lenses. For years, photonic processing was dismissed as impractical. Now, researchers at the Massachusetts Institute of Technology (MIT) in Cambridge have managed to condense much of that equipment to a microchip just a few millimeters across. © 2017 American Association for the Advancement of Science
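
The workload such a chip accelerates is the heart of any neural-network layer: a matrix-vector product followed by a nonlinearity. A sketch of that layer in NumPy; on the photonic chip, the same product is carried out by light waves travelling and interfering in parallel rather than by sequential arithmetic.

```python
# The computation an optical neural network speeds up, written conventionally:
# one layer is a matrix-vector product plus a nonlinearity. The sizes and
# random values below are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # layer weights (what the optics would encode)
x = rng.normal(size=8)        # input signals (what the light would carry)

y = np.maximum(W @ x, 0.0)    # one layer: weighted sums, then a ReLU
print(y)
```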

Related chapters from BN8e: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 23758 - Posted: 06.21.2017

By Edd Gent There’s been a lot of hype coming out of Silicon Valley about technology that can meld the human brain with machines. But how will this help society, and which companies are leading the charge? Elon Musk, chief executive of Tesla and SpaceX, made waves in March when he announced his latest venture, Neuralink, which would design what are called brain-computer interfaces. Initially, BCIs would be used for medical research, but the ultimate goal would be to prevent humans from becoming obsolete by enabling people to merge with artificial intelligence. Musk is not the only one who’s trying to bring humans closer to machines. Here are five organizations working hard on hacking the brain. According to Musk, the main barrier to human-machine cooperation is communication bandwidth. Because using a touch screen or a keyboard is a slow way to communicate with a computer, Musk’s new venture aims to create a “high-bandwidth” link between the brain and machines. What that system would look like is not entirely clear. Words such as “neural lace” and “neural dust” have been bandied about, but all that has really been revealed is a business model. Neuralink has been registered as a medical research company, and Musk said the firm will produce a product to help people with severe brain injuries within four years. This will lay the groundwork for developing BCIs for healthy people, enabling them to communicate by “consensual telepathy,” possibly within five years, Musk said. Some scientists, particularly those in neuroscience, are skeptical of Musk’s ambitious plans. © 1996-2017 The Washington Post
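
A rough back-of-the-envelope calculation makes the bandwidth argument concrete; every figure below is an assumption for illustration only.

```python
# How much information per second does typing actually convey? A rough
# estimate of the "low bandwidth" claim. All figures are assumptions.

WPM = 40                 # assumed typing speed, words per minute
CHARS_PER_WORD = 5       # common convention for what counts as a "word"
BITS_PER_CHAR = 1.0      # roughly Shannon's classic estimate for English text

chars_per_second = WPM * CHARS_PER_WORD / 60.0
print(f"{chars_per_second:.1f} chars/s "
      f"~= {chars_per_second * BITS_PER_CHAR:.1f} bits/s")
# ~3.3 bits/s: a very thin channel, which is the gap a "high-bandwidth"
# brain interface would be meant to close.
```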

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 23733 - Posted: 06.12.2017

Sarah Boseley Health editor A man who was paralysed from below the neck after crashing his bike into a truck can once again drink a cup of coffee and eat mashed potato with a fork, after a world-first procedure to allow him to control his hand with the power of thought. Bill Kochevar, 53, has had electrical implants in the motor cortex of his brain and sensors inserted in his forearm, which allow the muscles of his arm and hand to be stimulated in response to signals from his brain, decoded by computer. After eight years, he is able to drink and feed himself without assistance. “I think about what I want to do and the system does it for me,” Kochevar told the Guardian. “It’s not a lot of thinking about it. When I want to do something, my brain does what it does.” The experimental technology, pioneered by the Case Western Reserve University in Cleveland, Ohio, is the first in the world to restore brain-controlled reaching and grasping in a person with complete paralysis. For now, the process is relatively slow, but the scientists behind the breakthrough say this is proof of concept and that they hope to streamline the technology until it becomes a routine treatment for people with paralysis. In the future, they say, it will also be wireless and the electrical arrays and sensors will all be implanted under the skin and invisible.
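
In outline, the system is a decode-then-stimulate loop: cortical activity is decoded into an intended action, which selects a pattern of muscle stimulation. The decoder, muscle names and amplitudes below are placeholders, not the Case Western implementation.

```python
# A schematic of the decode-then-stimulate loop described above: motor-cortex
# activity is decoded into an intended movement, which is translated into
# electrical stimulation of the arm's muscles. All specifics are assumed.

def decode_intent(neural_features):
    """Placeholder decoder: map a feature vector to an intended action."""
    return "GRASP" if sum(neural_features) > 1.0 else "REST"

STIM_PATTERNS = {            # assumed stimulation amplitudes (mA) per muscle
    "GRASP": {"finger_flexors": 8.0, "thumb_flexor": 6.0},
    "REST":  {"finger_flexors": 0.0, "thumb_flexor": 0.0},
}

def update(neural_features):
    intent = decode_intent(neural_features)
    return intent, STIM_PATTERNS[intent]

print(update([0.6, 0.7]))   # ('GRASP', ...): the brain signal drives the hand
```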

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 23423 - Posted: 03.29.2017

By Jackie Snow Last month, Facebook announced software that could simply look at a photo and tell, for example, whether it was a picture of a cat or a dog. A related program identifies cancerous skin lesions as well as trained dermatologists can. Both technologies are based on neural networks, sophisticated computer algorithms at the cutting edge of artificial intelligence (AI)—but even their developers aren’t sure exactly how they work. Now, researchers have found a way to "look" at neural networks in action and see how they draw conclusions. Neural networks, also called neural nets, are loosely based on the brain’s use of layers of neurons working together. Like the human brain, they aren't hard-wired to produce a specific result—they “learn” on training sets of data, making and reinforcing connections between multiple inputs. A neural net might have a layer of neurons that look at pixels and a layer that looks at edges, like the outline of a person against a background. After being trained on thousands or millions of data points, a neural network algorithm will come up with its own rules on how to process new data. But it's unclear what the algorithm is using from those data to come to its conclusions. “Neural nets are fascinating mathematical models,” says Wojciech Samek, a researcher at Fraunhofer Institute for Telecommunications at the Heinrich Hertz Institute in Berlin. “They outperform classical methods in many fields, but are often used in a black box manner.” © 2017 American Association for the Advancement of Science.
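
One common way to peek inside a trained network (not necessarily the method referred to in the article) is a saliency map: the gradient of the output with respect to each input shows which inputs the decision leans on. A tiny self-contained example:

```python
# Saliency for a toy two-layer network, with the gradient computed by hand.
# This illustrates one generic interpretability technique, not the specific
# method in the article; the network and data are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=3)

def forward(x):
    h = np.tanh(W1 @ x)
    return W2 @ h

def saliency(x):
    # Backpropagation by hand: d(out)/dx = W1^T . ((1 - h^2) * W2)
    h = np.tanh(W1 @ x)
    return W1.T @ ((1.0 - h**2) * W2)

x = rng.normal(size=5)
print(forward(x), saliency(x))  # larger |gradient| = more influential input
```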

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 14: Attention and Consciousness
Link ID: 23329 - Posted: 03.08.2017

Sometimes the biggest gifts arrive in the most surprising ways. A couple in Singapore, Tianqiao Chen and Chrissy Luo, were watching the news and saw a Caltech scientist help a quadriplegic use his thoughts to control a robotic arm so that — for the first time in more than 10 years — he could sip a drink unaided. Inspired, Chen and Luo flew to Pasadena to meet the scientist, Richard Andersen, in person. Now they’ve given Caltech $115 million to shake up the way scientists study the brain in a new research complex. Construction of the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech will begin as early as 2018 and bring together biology, engineering, chemistry, physics, computer science and the social sciences to tackle brain function in an integrated, comprehensive way, university officials announced Tuesday. The goal of connecting these traditionally separate departments is to make “transformational advances” that will lead to new scientific tools and medical treatments, the university said. Research in shared labs will include looking more deeply into fundamentals of the brain and exploring the complexities of sensation, perception, cognition and human behavior. Neuroscience research has advanced greatly in recent years, Caltech President Thomas Rosenbaum said. The field now has the tools to look at individual neurons, for example, as well as the computer power to analyze massive data sets and an entire system of neurons. Collaborating across traditional academic boundaries takes it to the next level, he said. “The tools are at a time and place where we think that the field is ready for that sort of combination.”

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 22960 - Posted: 12.07.2016

Scientists have developed a mind-controlled robotic hand that allows people with certain types of spinal injuries to perform everyday tasks such as using a fork or drinking from a cup. The low-cost device was tested in Spain on six people with quadriplegia affecting their ability to grasp or manipulate objects. By wearing a cap that measures electric brain activity and eye movement the users were able to send signals to a tablet computer that controlled the glove-like device attached to their hand. Participants in the small-scale study were able to perform daily activities better with the robotic hand than without, according to results published Tuesday in the journal Science Robotics. The principle of using brain-controlled robotic aids to assist people with quadriplegia isn't new. But many existing systems require implants, which can cause health problems, or use wet gel to transmit signals from the scalp to the electrodes. The gel needs to be washed out of the user's hair afterward, making it impractical in daily life. "The participants, who had previously expressed difficulty in performing everyday tasks without assistance, rated the system as reliable and practical, and did not indicate any discomfort during or after use," the researchers said. It took participants just 10 minutes to learn how to use the system before they were able to carry out tasks such as picking up potato chips or signing a document. ©2016 CBC/Radio-Canada.
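
In outline, the hybrid control is simple gating logic: an eye-movement signal switches control on or off, and a brain-activity signal opens or closes the hand. Thresholds and signal names below are illustrative assumptions, not the published system.

```python
# A simplified sketch of the hybrid control described above: a brain-activity
# signal (from the EEG cap) opens or closes the hand device, while an
# eye-movement signal acts as an on/off gate. All specifics are assumed.

def glove_command(eeg_desync, eog_gate):
    """eeg_desync: strength of movement-related EEG change, in [0, 1].
    eog_gate: True when the user's eye movement enables control."""
    if not eog_gate:
        return "IDLE"                  # not trying to use the hand
    return "CLOSE_HAND" if eeg_desync > 0.5 else "OPEN_HAND"

print(glove_command(0.7, True))   # CLOSE_HAND: grasp the cup
print(glove_command(0.7, False))  # IDLE: same brain signal, gated off
```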

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 22959 - Posted: 12.07.2016

By R. Douglas Fields SAN DIEGO—A wireless device that decodes brain waves has enabled a woman paralyzed by locked-in syndrome to communicate from the comfort of her home, researchers announced this week at the annual meeting of the Society for Neuroscience. The 59-year-old patient, who prefers to remain anonymous but goes by the initials HB, is “trapped” inside her own body, with full mental acuity but completely paralyzed by a disease that struck in 2008 and attacked the neurons that make her muscles move. Unable to breathe on her own, a tube in her neck pumps air into her lungs and she requires round-the-clock assistance from caretakers. Thanks to the latest advance in brain–computer interfaces, however, HB has at least regained some ability to communicate. The new wireless device enables her to select letters on a computer screen using her mind alone, spelling out words at a rate of one letter every 56 seconds, to share her thoughts. “This is a significant achievement. Other attempts on such an advanced case have failed,” says neuroscientist Andrew Schwartz of the University of Pittsburgh, who was not involved in the study, published in The New England Journal of Medicine. HB’s mind is intact and the part of her brain that controls her bodily movements operates perfectly, but the signals from her brain no longer reach her muscles because the motor neurons that relay them have been damaged by amyotrophic lateral sclerosis (ALS), says neuroscientist Erick Aarnoutse, who designed the new device and was responsible for the technical aspects of the research. He is part of a team of physicians and scientists led by neuroscientist Nick Ramsey at Utrecht University in the Netherlands. Previously, the only way HB could communicate was via a system that uses an infrared camera to track her eye movements. But the device is awkward to set up and use for someone who cannot move, and it does not function well in many situations, such as in bright sunlight. © 2016 Scientific American,
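
A scanning speller of this kind can be sketched in a few lines: the computer steps through the alphabet and the user's brain "click" selects the currently highlighted letter. The linear scanning order and timing below are illustrative assumptions; the actual interface may scan differently.

```python
# A sketch of a scanning speller: the highlight advances one letter per step,
# and the user issues a brain "click" (an attempted-movement signal decoded
# from the implant) to select the highlighted one. Timing is assumed.
import string

def spell(click_times, step_seconds=1.0, alphabet=string.ascii_uppercase):
    """click_times: seconds at which the user produced a brain click.
    The highlight advances one letter per step, wrapping around."""
    out = []
    for t in click_times:
        idx = int(t // step_seconds) % len(alphabet)
        out.append(alphabet[idx])
    return "".join(out)

# Clicks after 7 steps (H) and, on the next pass, after 8 steps (I):
print(spell([7.2, 26 + 8.5]))  # -> "HI"
```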

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 14: Attention and Consciousness
Link ID: 22885 - Posted: 11.18.2016

By Jessica Hamzelou HB, who is paralysed by amyotrophic lateral sclerosis (ALS), has become the first woman to use a brain implant at home and in her daily life. She told New Scientist about her experiences, using an eye-tracking device that takes about a minute to spell a word.
What is your life like? All muscles are paralysed. I can only move my eyes.
Why did you decide to try the implant? I want to contribute to possible improvements for people like me.
What was the surgery like? The first surgery was no problem, but the second had a negative impact for my condition.
Can you feel the implant at all? No.
How easy is it to use? The hardware is easy to use. The software has been improved enormously by the UNP (Utrecht NeuroProsthesis) team. My part isn’t difficult anymore after these improvements. The most difficult part is timing the clicks.
How has the implant changed your life? Now I can communicate outdoors when my eye track computer doesn’t work. I’m more confident and independent now outside.
What are the best and worst things about it? The best is to go outside and be able to communicate. The worst were the false-positive clicks. But thanks to the UNP team that is fixed.
Now that the study has been completed, would you like to keep the implant, or remove it? Of course I keep it.
How do you feel about being the first person to have this implant? It’s special to be the first.
Thinking ahead to the future, what else would you like to be able to do with the implant? I would like to change the television channel and my dream is to be able to drive my wheelchair. © Copyright Reed Business Information Ltd.

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 22862 - Posted: 11.14.2016

By Helen Thomson In the 2009 Bruce Willis movie Surrogates, people live their lives by embodying themselves as robots. They meet people, go to work, even fall in love, all without leaving the comfort of their own home. Now, for the first time, three people with severe spinal injuries have taken the first steps towards that vision by controlling a robot thousands of kilometres away, using thought alone. The idea is that people with spinal injuries will be able to use robot bodies to interact with the world. It is part of the European Union-backed VERE project, which aims to dissolve the boundary between the human body and a surrogate, giving people the illusion that their surrogate is in fact their own body. In 2012, an international team went some way to achieving this by taking fMRI scans of the brains of volunteers while they thought about moving their hands or legs. The scanner measured changes in blood flow to the brain area responsible for such thoughts. An algorithm then passed these on as instructions to a robot. The volunteers could see what the robot was looking at via a head-mounted display. When they thought about moving their left or right hand, the robot moved 30 degrees to the left or right. Imagining moving their legs made the robot walk forward. “The feeling of embodying the robot was good, although the sensation varied over time.” © Copyright Reed Business Information Ltd.

Related chapters from BN8e: Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 22795 - Posted: 10.27.2016

Linda Geddes For the first time, a paralysed man has gained a limited sense of touch, thanks to an electric implant that stimulates his brain and allows him to feel pressure-like sensations in the fingers of a robotic arm. The advance raises the possibility of restoring limited sensation to various areas of the body, as well as giving people with spinal-cord injuries better control over prosthetic limbs. But restoring human-like feeling, such as sensations of heat or pain, will prove more challenging, the researchers say. Nathan Copeland had not been able to feel or move his legs and lower arms since a car accident snapped his neck and injured his spinal cord when he was 18. Now, some 12 years later, he can feel when a robotic arm has its fingers touched, because sensors on the fingers are linked to an implant in his brain. (Video: Rob Gaunt, a biomedical engineer at the University of Pittsburgh, performs a sensory test on a blindfolded Nathan Copeland, who demonstrates his ability to feel by correctly identifying different fingers through a mind-controlled robotic arm. Credit: UPMC/Pitt Health Sciences.) “He says the sensations feel like they’re coming from his own hand,” says Robert Gaunt, a biomedical engineer at the University of Pittsburgh who led the study. © 2016 Macmillan Publishers Limited
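
The sensory side can be sketched as a mapping from fingertip pressure to stimulation amplitude on the electrode that evokes feeling in the matching finger. The channel numbers and current ceiling below are illustrative assumptions.

```python
# A schematic of the touch pathway described above: pressure measured at each
# robotic finger is mapped to stimulation through the electrode that evokes
# a sensation in the matching finger. All channel numbers and amplitudes
# are assumptions, not the published system's values.

ELECTRODE_FOR_FINGER = {"index": 3, "middle": 7, "ring": 11}  # assumed channels
MAX_UA = 60.0   # assumed stimulation ceiling in microamps

def stimulation_commands(finger_pressures):
    """finger_pressures: normalized sensor readings in [0, 1] per finger."""
    return {
        ELECTRODE_FOR_FINGER[f]: min(p, 1.0) * MAX_UA
        for f, p in finger_pressures.items() if p > 0.05   # ignore noise
    }

print(stimulation_commands({"index": 0.5, "middle": 0.0, "ring": 0.25}))
# {3: 30.0, 11: 15.0}: touch on index and ring fingers is felt, middle is not
```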

Related chapters from BN8e: Chapter 8: General Principles of Sensory Processing, Touch, and Pain; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 5: The Sensorimotor System
Link ID: 22759 - Posted: 10.15.2016

By Edd Gent A brain-inspired computing component provides the most faithful emulation yet of connections among neurons in the human brain, researchers say. The so-called memristor, an electrical component whose resistance relies on how much charge has passed through it in the past, mimics the way calcium ions behave at the junction between two neurons in the human brain, the study said. That junction is known as a synapse. The researchers said the new device could lead to significant advances in brain-inspired—or neuromorphic—computers, which could be much better at perceptual and learning tasks than traditional computers, as well as far more energy efficient. "In the past, people have used devices like transistors and capacitors to simulate synaptic dynamics, which can work, but those devices have very little resemblance to real biological systems. So it's not efficient to do it that way, and it results in a larger device area, larger energy consumption and less fidelity," said study leader Joshua Yang, a professor of electrical and computer engineering at the University of Massachusetts Amherst. Previous research has suggested that the human brain has about 100 billion neurons and approximately 1 quadrillion (1 million billion) synapses. A brain-inspired computer would ideally be designed to mimic the brain's enormous computing power and efficiency, scientists have said. © 2016 Scientific American
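
The defining property, resistance set by the charge that has flowed through the device, fits in a few lines. This linear drift model is a textbook simplification with assumed constants, not the device in the study.

```python
# A minimal memristor model matching the description above: the device's
# resistance depends on how much charge has passed through it, so each pulse
# of current nudges the "synaptic weight". All constants are assumptions.

R_ON, R_OFF = 100.0, 16_000.0   # assumed fully-on / fully-off resistances (ohms)

class Memristor:
    def __init__(self):
        self.w = 0.5             # internal state in [0, 1], set by past charge

    def resistance(self):
        return R_ON * self.w + R_OFF * (1.0 - self.w)

    def apply_current(self, amps, seconds, k=50.0):
        # The state drifts with the charge q = i * t flowing through it.
        self.w = min(1.0, max(0.0, self.w + k * amps * seconds))

m = Memristor()
print(m.resistance())            # 8050.0 ohms at the midpoint state
m.apply_current(1e-3, 5.0)       # a 1 mA, 5 s pulse strengthens the "synapse"
print(m.resistance())            # 4075.0 ohms: the junction "learned"
```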

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory, Learning, and Development
Link ID: 22705 - Posted: 09.28.2016

Since nobody really knows how brains work, those researching them must often resort to analogies. A common one is that a brain is a sort of squishy, imprecise, biological version of a digital computer. But analogies work both ways, and computer scientists have a long history of trying to improve their creations by taking ideas from biology. The trendy and rapidly developing branch of artificial intelligence known as “deep learning”, for instance, takes much of its inspiration from the way biological brains are put together. The general idea of building computers to resemble brains is called neuromorphic computing, a term coined by Carver Mead, a pioneering computer scientist, in the late 1980s. There are many attractions. Brains may be slow and error-prone, but they are also robust, adaptable and frugal. They excel at processing the sort of noisy, uncertain data that are common in the real world but which tend to give conventional electronic computers, with their prescriptive arithmetical approach, indigestion. The latest development in this area came on August 3rd, when a group of researchers led by Evangelos Eleftheriou at IBM’s research laboratory in Zurich announced, in a paper published in Nature Nanotechnology, that they had built a working artificial version of a neuron. Neurons are the spindly, highly interconnected cells that do most of the heavy lifting in real brains. The idea of making artificial versions of them is not new. Dr Mead himself has experimented with using specially tuned transistors, the tiny electronic switches that form the basis of computers, to mimic some of their behaviour. © The Economist Newspaper Limited 2016.

Related chapters from BN8e: Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 2: Cells and Structures: The Anatomy of the Nervous System; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 22573 - Posted: 08.18.2016