Links for Keyword: Robotics
By Rachel Feltman Bioengineers have created the most realistic fake brain tissue ever – and it’s built like a jelly doughnut. The 3-D tissue, described in a paper published Monday in Proceedings of the National Academy of Sciences, is so structurally similar to a real rat brain (a common substitute for human brains in the lab) that it could help scientists answer longstanding questions about brain injuries and disease. Currently, the best way to study brain tissue is to grow neurons in a petri dish, but those neurons can only be grown flat. A real brain contains a complicated structure of 3-D tissue. Simply giving the neurons room to grow in three dimensions didn’t prove successful: While neurons will grow into more complicated structures in the right kind of gel, they don’t survive very long or mimic the structure of a real brain. Led by David Kaplan, the director of the Tissue Engineering Resource Center at Tufts University, researchers developed a new combination of materials to mimic the gray and white matter of the brain. The new model relies on a doughnut-shaped, spongy scaffold made of silk proteins with a collagen-based gel at the center. The outer scaffold layer, which is filled with rat neurons, acts as the gray matter of the brain. As the neurons grew networks throughout the scaffold, they sent branches out across the gel-filled center to connect with neurons on the other side. And that configuration is about as brain-like as lab-grown tissue can get. The basic structure can be reconfigured, too.
Related chapters from BP7e: Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 19944 - Posted: 08.12.2014
by Aviva Rutkin What can the human brain do for a computer? There's at least one team of researchers that thinks it might have the answer. Working at IBM Research–Almaden in San Jose, California, they have just released more details of TrueNorth, a computer chip composed of one million digital "neurons". Under way for several years, the project abandons traditional computer architecture for one inspired by biological synapses and axons. The latest results, published in Science, provide a timely reminder of the promise of brain-inspired computing. The human brain still crushes any modern machine when it comes to tasks like vision or voice recognition. What's more, it manages to do so with less energy than it takes to power a light bulb. Building those qualities into a computer is an alluring prospect to many researchers, like Kwabena Boahen of Stanford University in California. "The first time I learned how computers worked, I thought it was ridiculous," he says. "I basically felt there had to be a better way." Aping the brain's structure could help us build computers that are far more powerful and efficient than today's, says TrueNorth team leader Dharmendra Modha. "We want to approximate the anatomy and physiology, the structure and dynamics of the brain, within today's silicon technology," he says. "I think that the chip and the associated ecosystem have the potential to transform science, technology, business, government and society." But how best to go about building a proper artificial brain is a matter of debate. © Copyright Reed Business Information Ltd
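The "digital neurons" on chips like TrueNorth are spiking units rather than conventional logic blocks. As a rough illustration of the general principle only (a textbook leaky integrate-and-fire neuron with arbitrary parameters, not IBM's actual circuit model), such a neuron can be simulated in a few lines:

```python
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays by
    a factor of `leak` each time step, accumulates the input, and emits
    a spike (then resets to zero) when it crosses `threshold`."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = v * leak + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input still fires periodically as charge accumulates.
print(simulate_lif([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The energy savings such architectures promise come from this event-driven style: hardware neurons do work only when spikes arrive, rather than on every clock cycle.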
Related chapters from BP7e: Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 19932 - Posted: 08.09.2014
By Jim Tankersley COLUMBUS, Ohio — First they screwed the end of the gray cord into the metal silo rising out of Ian Burkhart’s skull. Later they laid his right forearm across two foam cylinders, and they wrapped it with thin strips that looked like film from an old home movie camera. They ran him through some practice drills, and then it was time for him to try. If he succeeded at this next task, it would be science fiction come true: His thoughts would bypass his broken spinal cord. With the help of an algorithm and some electrodes, he would move his once-dead limb again — a scientific first. “Ready?” the young engineer, Nick Annetta, asked from the computer to his left. “Three. Two. One.” Burkhart, 23, marshaled every neuron he could muster, and he thought about his hand. The last time the hand obeyed him, it was 2010 and Burkhart was running into the Atlantic Ocean. The hand had gripped the steering wheel as he drove the van from Ohio University to North Carolina’s Outer Banks, where he and friends were celebrating the end of freshman year. The hand unclenched to drop his towel on the sand. Burkhart splashed into the waves, the hand flying above his head, the ocean warm around his feet, the sun roasting his arms, and he dived. In an instant, he felt nothing. Not his hand. Not his legs. Only the breeze drying the saltwater on his face.
THE star of the World Cup may not be able to bend it like Beckham, but they might be able to kick a ball using the power of their mind. If all goes to plan, a paralysed young adult will use an exoskeleton controlled by their thoughts to take the first kick of the football tournament in Thursday's opening ceremony in São Paulo, Brazil. The exoskeleton belongs to the Walk Again Project, an international collaboration using technology to overcome paralysis. Since December, the project has been training eight paralysed people to use the suit, which supports the lower body and is controlled by brain activity detected by a cap of electrodes placed over the head. The brain signals are sent to a computer, which converts them into movement. Lead robotic engineer Gordon Cheng, at the Technical University of Munich, Germany, says that there is a phenomenal amount of technology within the exoskeleton, including sensors that feed information about pressure and temperature back to the arms of the user, which still have sensation. The team hopes this will replicate to some extent the feeling of kicking a ball. The exoskeleton isn't the only technology on show in Brazil. FIFA has announced that fans will decide who is man of the match by voting for their favourite player on Twitter during the second half of each game using #ManOfTheMatch. © Copyright Reed Business Information Ltd.
By Kelly Servick During the World Cup next week, there may be 1 minute during the opening ceremony when the boisterous stadium crowd in São Paulo falls silent: when a paraplegic young person wearing a brain-controlled, robotic exoskeleton attempts to rise from a wheelchair, walk several steps, and kick a soccer ball. The neuroscientist behind the planned event, Miguel Nicolelis, is familiar with the spotlight. His lab at Duke University in Durham, North Carolina, pioneered brain-computer interfaces, using surgically implanted electrodes to read neural signals that can control robotic arms. Symbolically, the project is a homecoming for Nicolelis. He has portrayed it as a testament to the scientific progress and potential of his native Brazil, where he founded and directs the International Institute of Neuroscience of Natal. The press has showered him with attention, and the Brazilian government chipped in nearly $15 million in support. But scientifically, the project is a departure. Nicolelis first intended the exoskeleton to read signals from implanted electrodes, but decided instead to use a noninvasive EEG sensor cap. That drew skepticism from Nicolelis’s critics—and he has a few—that the system wouldn’t really be a scientific advance. Others have developed crude EEG-based exoskeletons, they note, and it will be impossible to tell from the demo how this system compares. A bigger concern is that the event could generate false hope for paralyzed patients and give the public a skewed impression of the field’s progress. © 2014 American Association for the Advancement of Science
By MICHAEL BEHAR One morning in May 1998, Kevin Tracey converted a room in his lab at the Feinstein Institute for Medical Research in Manhasset, N.Y., into a makeshift operating theater and then prepped his patient — a rat — for surgery. A neurosurgeon, and also Feinstein Institute’s president, Tracey had spent more than a decade searching for a link between nerves and the immune system. His work led him to hypothesize that stimulating the vagus nerve with electricity would alleviate harmful inflammation. “The vagus nerve is behind the artery where you feel your pulse,” he told me recently, pressing his right index finger to his neck. The vagus nerve and its branches conduct nerve impulses — called action potentials — to every major organ. But communication between nerves and the immune system was considered impossible, according to the scientific consensus in 1998. Textbooks from the era taught, he said, “that the immune system was just cells floating around. Nerves don’t float anywhere. Nerves are fixed in tissues.” It would have been “inconceivable,” he added, to propose that nerves were directly interacting with immune cells. Nonetheless, Tracey was certain that an interface existed, and that his rat would prove it. After anesthetizing the animal, Tracey cut an incision in its neck, using a surgical microscope to find his way around his patient’s anatomy. With a hand-held nerve stimulator, he delivered several one-second electrical pulses to the rat’s exposed vagus nerve. He stitched the cut closed and gave the rat a bacterial toxin known to promote the production of tumor necrosis factor, or T.N.F., a protein that triggers inflammation in animals, including humans. “We let it sleep for an hour, then took blood tests,” he said. The bacterial toxin should have triggered rampant inflammation, but instead the production of tumor necrosis factor was blocked by 75 percent. “For me, it was a life-changing moment,” Tracey said. 
What he had demonstrated was that the nervous system was like a computer terminal through which you could deliver commands to stop a problem, like acute inflammation, before it starts, or repair a body after it gets sick. “All the information is coming and going as electrical signals,” Tracey said. For months, he’d been arguing with his staff, whose members considered this rat project of his harebrained. “Half of them were in the hallway betting against me,” Tracey said. © 2014 The New York Times Company
Related chapters from BP7e: Chapter 11: Motor Control and Plasticity; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM:Chapter 5: The Sensorimotor System; Chapter 5: The Sensorimotor System
Link ID: 19649 - Posted: 05.23.2014
By NATASHA SINGER Joseph J. Atick cased the floor of the Ronald Reagan Building and International Trade Center in Washington as if he owned the place. In a way, he did. He was one of the organizers of the event, a conference and trade show for the biometrics security industry. Perhaps more to the point, a number of the wares on display, like an airport face-scanning checkpoint, could trace their lineage to his work. A physicist, Dr. Atick is one of the pioneer entrepreneurs of modern face recognition. Having helped advance the fundamental face-matching technology in the 1990s, he went into business and promoted the systems to government agencies looking to identify criminals or prevent identity fraud. “We saved lives,” he said during the conference in mid-March. “We have solved crimes.” Thanks in part to his boosterism, the global business of biometrics — using people’s unique physiological characteristics, like their fingerprint ridges and facial features, to learn or confirm their identity — is booming. It generated an estimated $7.2 billion in 2012, according to reports by Frost & Sullivan. Making his rounds at the trade show, Dr. Atick, a short, trim man with an indeterminate Mediterranean accent, warmly greeted industry representatives at their exhibition booths. Once he was safely out of earshot, however, he worried aloud about what he was seeing. What were those companies’ policies for retaining and reusing consumers’ facial data? Could they identify individuals without their explicit consent? Were they running face-matching queries for government agencies on the side? Now an industry consultant, Dr. Atick finds himself in a delicate position. While promoting and profiting from an industry that he helped foster, he also feels compelled to caution against its unfettered proliferation. 
He isn’t so much concerned about government agencies that use face recognition openly for specific purposes — for example, the many state motor vehicle departments that scan drivers’ faces as a way to prevent license duplications and fraud. Rather, what troubles him is the potential exploitation of face recognition to identify ordinary and unwitting citizens as they go about their lives in public. Online, we are all tracked. But to Dr. Atick, the street remains a haven, and he frets that he may have abetted a technology that could upend the social order. © 2014 The New York Times Company
Related chapters from BP7e: Chapter 19: Language and Hemispheric Asymmetry; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 15: Language and Our Divided Brain; Chapter 14: Attention and Consciousness
Link ID: 19630 - Posted: 05.18.2014
By Dominic Basulto In last weekend’s Wall Street Journal, two leading brain researchers conjectured that as a result of rapid breakthroughs in fields such as molecular biology and neuroscience, one day “brain implants” will be just about as common as getting a bit of plastic surgery is today. In short, today’s tummy tucks are tomorrow’s brain tucks. Similar to what you’d expect from watching science fiction films such as “The Matrix,” these brain implants would enable you to learn foreign languages effortlessly, upgrade your memory capabilities, and, yes, help you to know Kung Fu. Vinton Cerf argues that today’s Internet (think Google) is already a form of cognitive implant, helping us to learn the answer to just about anything within seconds. If computing power continues to increase at the same rate as it has for the past 50 years, it is likely that a single computer will have the computing capacity of a human brain by 2023. By 2045, a single computer could have the processing capability of all human brains put together. Just think what you’d be able to use Google to do then. You wouldn’t even need to type in a search query; your brain would already know the answer. Of course, the ability to create these brain implants raises a number of philosophical, ethical and moral questions. If you’re a young student having a tough time in a boring class, why not just buy a brain module that simulates the often repetitive nature of learning? If you’re a parent of a child looking to get into a top university, why not buy a brain implant as a way to gain an advantage over children from less privileged backgrounds, especially when it’s SAT time? Instead of the digital divide, we may be talking about the cognitive divide at some point in the next two decades. Some parents would be able to afford a 99th-percentile brain for their children, while others wouldn’t. © 1996-2014 The Washington Post
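The 2023 and 2045 projections are straightforward doubling arithmetic. A minimal sketch of that arithmetic, assuming a steady two-year doubling period and roughly seven billion human brains (both figures are illustrative assumptions, not the article's stated inputs):

```python
import math

def capacity_multiplier(years, doubling_period=2.0):
    """Growth factor after `years` of capacity doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def years_to_reach(factor, doubling_period=2.0):
    """Years needed for capacity to grow by a given factor at a steady doubling rate."""
    return math.log2(factor) * doubling_period

# Going from one brain-equivalent to ~7 billion brain-equivalents
# takes about 33 doublings:
print(round(math.log2(7e9), 1))                       # → 32.7
print(round(years_to_reach(7e9, doubling_period=2.0)))  # → 65
```

Under these assumptions the all-brains milestone would take about 65 years rather than 22, which shows how sensitive such forecasts are to the assumed doubling rate.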
Related chapters from BP7e: Chapter 1: Biological Psychology: Scope and Outlook
Related chapters from MM:Chapter 1: An Introduction to Brain and Behavior
Link ID: 19391 - Posted: 03.21.2014
By Gary Marcus and Christof Koch What would you give for a retinal chip that let you see in the dark or for a next-generation cochlear implant that let you hear any conversation in a noisy restaurant, no matter how loud? Or for a memory chip, wired directly into your brain's hippocampus, that gave you perfect recall of everything you read? Or for an implanted interface with the Internet that automatically translated a clearly articulated silent thought ("the French sun king") into an online search that digested the relevant Wikipedia page and projected a summary directly into your brain? Science fiction? Perhaps not for very much longer. Brain implants today are where laser eye surgery was several decades ago. They are not risk-free and make sense only for a narrowly defined set of patients—but they are a sign of things to come. Unlike pacemakers, dental crowns or implantable insulin pumps, neuroprosthetics—devices that restore or supplement the mind's capacities with electronics inserted directly into the nervous system—change how we perceive the world and move through it. For better or worse, these devices become part of who we are. Neuroprosthetics aren't new. They have been around commercially for three decades, in the form of the cochlear implants used in the ears (the outer reaches of the nervous system) of more than 300,000 hearing-impaired people around the world. Last year, the Food and Drug Administration approved the first retinal implant, made by the company Second Sight. ©2014 Dow Jones & Company, Inc.
Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 5: The Sensorimotor System
Link ID: 19371 - Posted: 03.17.2014
by Clare Wilson A monkey controlling the hand of its unconscious cage-mate with its thoughts may sound like animal voodoo, but it is a step towards returning movement to people with spinal cord injuries. The hope is that people who are paralysed could have electrodes implanted in their brains that pick up their intended movements. These electrical signals could then be sent to a prosthetic limb, or directly to the person's paralysed muscles, bypassing the injury in their spinal cord. Ziv Williams at Harvard Medical School in Boston wanted to see if sending these signals to nerves in the spinal cord would also work, as this might ultimately give a greater range of movement from each electrode. His team placed electrodes in a monkey's brain, connecting them via a computer to wires going into the spinal cord of an anaesthetised, unconscious monkey. The unconscious monkey's limbs served as the equivalent of paralysed limbs. A hand of the unconscious monkey was strapped to a joystick, controlling a cursor that the other monkey could see on a screen. Williams's team had previously had the conscious monkey practise the joystick task for itself and had recorded its brain activity to work out which signals corresponded to moving the joystick back and forth. Through trial and error, they deduced which nerves to stimulate in the spinal cord of the anaesthetised monkey to produce similar movements in that monkey's hand. When both parts were fed to the computer, the conscious monkey was able to move the "paralysed" monkey's hand to make the cursor hit a target. © Copyright Reed Business Information Ltd.
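The bridge in this experiment has two learned pieces: a decoder from the conscious monkey's firing rates to an intended movement, and a mapping from that intent onto spinal stimulation sites. A toy sketch of the data flow (the linear decoder, site names, and all numbers are invented for illustration; the study derived its mappings empirically, by trial and error):

```python
def decode_intent(firing_rates, weights):
    """Toy linear decoder: weighted sum of firing rates -> 1-D movement command."""
    return sum(r * w for r, w in zip(firing_rates, weights))

def stimulation_pattern(command, threshold=0.5):
    """Map the decoded command onto one of two hypothetical spinal stimulation sites."""
    if command > threshold:
        return "stimulate_flexor_site"
    elif command < -threshold:
        return "stimulate_extensor_site"
    return "no_stimulation"

rates = [12.0, 3.0, 8.0]           # spikes/s from three recorded neurons
weights = [0.05, -0.02, 0.02]      # fitted during the joystick practice phase
cmd = decode_intent(rates, weights)
print(stimulation_pattern(cmd))    # → stimulate_flexor_site
```

The appeal of stimulating the spinal cord rather than individual muscles, as the article notes, is that each site can recruit a coordinated group of muscles, potentially giving more movement per electrode.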
by Erika Engelhaupt If you had to have a prosthetic hand, would you want it to look like a real hand? Or would you prefer a gleaming metallic number, something that doesn’t even try to look human? A new study looks at one of the issues that prosthetic designers and wearers face in making this decision: the creepy factor. People tend to get creeped out by robots or prosthetic devices that look almost, but not quite, human. So Ellen Poliakoff and colleagues at the University of Manchester in England had people rate the eeriness of various prosthetic hands. Forty-three volunteers looked at photographs of prosthetic and real hands. They rated both how humanlike (realistic) the hands were and how eerie they were, defined as “mysterious, strange, or unexpected as to send a chill up the spine.” Real human hands were rated both the most humanlike and the least eerie (a good thing for humans). Metal hands that were clearly mechanical were rated the least humanlike, but less eerie overall than prosthetic hands made to look like real hands, the team reports in the latest issue of Perception. The realistic prosthetics, like the rubber hand shown above, fell into what's known as the uncanny valley. That term, coined by roboticist Masahiro Mori in 1970, describes how robots become unnerving as they come to look more humanlike. The superrealistic Geminoid DK robot and the animated characters in the movie The Polar Express suffer from this problem. They look almost human, but not quite, and this mismatch between expectation and reality is one of the proposed explanations for the uncanny valley. In particular, if something looks like a human but doesn’t quite move like one, it’s often considered eerie. © Society for Science & the Public 2000 - 2013
Related chapters from BP7e: Chapter 8: General Principles of Sensory Processing, Touch, and Pain; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 5: The Sensorimotor System; Chapter 5: The Sensorimotor System
Link ID: 18966 - Posted: 11.25.2013
M. Mitchell Waldrop Kwabena Boahen got his first computer in 1982, when he was a teenager living in Accra. “It was a really cool device,” he recalls. He just had to connect up a cassette player for storage and a television set for a monitor, and he could start writing programs. But Boahen wasn't so impressed when he found out how the guts of his computer worked. “I learned how the central processing unit is constantly shuffling data back and forth. And I thought to myself, 'Man! It really has to work like crazy!'” He instinctively felt that computers needed a little more 'Africa' in their design, “something more distributed, more fluid and less rigid”. Today, as a bioengineer at Stanford University in California, Boahen is among a small band of researchers trying to create this kind of computing by reverse-engineering the brain. The brain is remarkably energy efficient and can carry out computations that challenge the world's largest supercomputers, even though it relies on decidedly imperfect components: neurons that are a slow, variable, organic mess. Comprehending language, conducting abstract reasoning, controlling movement — the brain does all this and more in a package that is smaller than a shoebox, consumes less power than a household light bulb, and contains nothing remotely like a central processor. To achieve similar feats in silicon, researchers are building systems of non-digital chips that function as much as possible like networks of real neurons. Just a few years ago, Boahen completed a device called Neurogrid that emulates a million neurons — about as many as there are in a honeybee's brain. And now, after a quarter-century of development, applications for 'neuromorphic technology' are finally in sight. © 2013 Nature Publishing Group
Related chapters from BP7e: Chapter 17: Learning and Memory; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 18891 - Posted: 11.08.2013
Most of us don’t think twice when we extend our arms to hug a friend or push a shopping cart—our limbs work together seamlessly to follow our mental commands. For researchers designing brain-controlled prosthetic limbs for people, however, this coordinated arm movement is a daunting technical challenge. A new study showing that monkeys can move two virtual limbs with only their brain activity is a major step toward achieving that goal, scientists say. The brain controls movement by sending electrical signals to our muscles through nerve cells. When limb-connecting nerve cells are damaged or a limb is amputated, the brain is still able to produce those motion-inducing signals, but the limb can't receive them or simply doesn’t exist. In recent years, scientists have worked to create devices called brain-machine interfaces (BMIs) that can pick up these interrupted electrical signals and control the movements of a computer cursor or a real or virtual prosthetic. So far, the success of BMIs in humans has been largely limited to moving single body parts, such as a hand or an arm. Last year, for example, a woman paralyzed from the neck down for 10 years commanded a robotic arm to pick up and lift a piece of chocolate to her mouth just by thinking about it. But, "no device will ever work for people unless it restores bimanual behaviors,” says neuroscientist Miguel Nicolelis at Duke University in Durham, North Carolina, senior author of the paper. "You need to use both arms and hands for the simplest tasks.” In 2011, Nicolelis made waves by announcing on The Daily Show that he is developing a robotic, thought-controlled "exoskeleton" that will allow paralyzed people to walk again. Further raising the stakes, he pledged that the robotic body suit will enable a paralyzed person to kick a soccer ball during the opening ceremony of the 2014 Brazil World Cup. (Nicolelis is Brazilian and his research is partly funded by the nation’s government.) 
© 2013 American Association for the Advancement of Science
by NPR Staff Soon you'll be able to direct the path of a cockroach with a smartphone and the swipe of your finger. Greg Gage and his colleagues at Backyard Brains have developed a device called the RoboRoach that lets you control the path of an insect. It may make you squirm, but Gage says the device could inspire a new generation of neuroscientists. "The sharpest kids amongst us are probably going into other fields right now. And so we're kind of in the dark ages when it comes to neuroscience," he tells NPR's Arun Rath. He wants to get kids interested in neuroscience early enough to guide them toward that career path. And a cyborg cockroach might be the inspiration. "The neurons in the insects are very, very similar to the neurons inside the human brain," Gage says. "It's a beautiful way to just really understand what's happening inside your brain by looking at these little insects." The idea was spawned by an earlier device the Backyard Brain-iacs developed, which is capable of amplifying the signals of real living neurons. Insert a small wire into a cockroach's antennae, and you can hear the sound of actual neurons. "Lining the inside of the cockroach are these neurons that are picking up touch or vibration sensing, chemical sensing," Gage says. "They use it like a nose or a large tongue, their antennas, and they use it to sort of navigate the world. "So when you put a small wire inside of there, you can actually pick up the information as it's being encoded and being sent to the brain." With the RoboRoach device and smartphone app, you can interact with the antennae to influence the insect's behavior. ©2013 NPR
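The control loop Gage describes reduces to a simple mapping: a swipe picks a side, and a brief pulse train to that antenna's nerve feels like an obstacle, steering the roach the other way. A hypothetical sketch of that mapping (the function name, pulse parameters, and site names are invented for illustration, not Backyard Brains' actual API):

```python
def swipe_to_command(swipe_direction, pulse_hz=55, duration_ms=300):
    """Map a touchscreen swipe to a stimulation command for one antenna.
    Stimulating the left antenna feels like an obstacle on the left,
    so the roach turns right, and vice versa."""
    antenna = {"right": "left_antenna", "left": "right_antenna"}[swipe_direction]
    return {"target": antenna, "pulse_hz": pulse_hz, "duration_ms": duration_ms}

print(swipe_to_command("left"))
# → {'target': 'right_antenna', 'pulse_hz': 55, 'duration_ms': 300}
```

The key point for students is that the stimulator speaks the antenna nerve's own language: brief electrical pulses standing in for the touch signals the neurons normally encode.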
Related chapters from BP7e: Chapter 11: Motor Control and Plasticity; Chapter 1: Biological Psychology: Scope and Outlook
Related chapters from MM:Chapter 5: The Sensorimotor System; Chapter 1: An Introduction to Brain and Behavior
Link ID: 18819 - Posted: 10.22.2013
Erika Check Hayden The power of thought alone is not enough to move inanimate objects — unless the object is a robotic leg wired to your brain, that is. A 32-year-old man whose knee and lower leg were amputated in 2009 after a motorcycle accident is apparently the first person with a missing lower limb to control a robotic leg with his mind. A team led by biomedical engineer Levi Hargrove at the Rehabilitation Institute of Chicago in Illinois reported the breakthrough last week in the New England Journal of Medicine, including a video that shows the man using the bionic leg to walk up stairs and down a ramp, and to kick a football. The major advance is that the man does not have to use a remote-control switch or exaggerated muscle movements to tell the robotic leg to switch between types of movements, and he does not have to reposition the leg with his hands when seated, Hargrove says. “To our knowledge, this is the first time that neural signals have been used to control both a motorized knee and ankle prosthesis,” he says. Scientists had previously shown that paralysed people could move robotic arms using their thoughts and that able-bodied people can walk using robotic legs controlled by their brains (see, for example, go.nature.com/dgtykw). The latest work goes a step further by using muscle signals to amplify messages sent by the brain when the person intends to move. © 2013 Nature Publishing Group
by Colin Barras A man missing his lower leg has gained precise control over a prosthetic limb, just by thinking about moving it – all because his unused nerves were preserved during the amputation and rerouted to his thigh where they can be used to communicate with a robotic leg. The man can now seamlessly switch from walking on level ground to climbing stairs and can even kick a football around. During a traditional limb amputation, the main sensory nerves are severed and lose their function. In 2006, Todd Kuiken and his colleagues at the Rehabilitation Institute of Chicago in Illinois realised they could preserve some of that functionality by carefully rerouting sensory nerves during an amputation and attaching them to another part of the body. They could then use the rerouted nerve signals to control a robotic limb, allowing a person to control their prosthesis with the same nerves they originally used to control their real limb. Kuiken's team first attempted the procedure – which is called targeted muscle reinnervation (TMR) – on people who were having their arm amputated. Now, Kuiken's team has performed TMR for the first time on a man with a leg amputation. First, the team rerouted the two main branches of the man's sciatic nerve to muscles in the thigh above the amputation. One branch controls the calf and some foot muscles, the other controls the muscle running down the outside leg and some more foot muscles. © Copyright Reed Business Information Ltd
By Ingrid Wickelgren An attractive blonde in a bright red blouse sits in a wheelchair before the assembled scientists, doctors, writers and members of the community. We are in a conference room at the Aspen Meadows Resort, the site of the 2013 Aspen Brain Forum. Amanda Boxtel recalls what life was like for her at 24. She had been a skier, a runner and a ballet dancer, she tells us. She liked to hike in the wilderness. Pictures of a beautiful young woman appear on a screen. In the photos, she’s standing. Then one day on a slope, the tips of Boxtel’s skis crossed. She did a somersault and shattered four vertebrae. “I also shattered illusions of my immortality. I was paralyzed from here”—she holds her hands at her hips—“down. No movement and no sensation.” That life changed radically for her right then is difficult to dispute. But Boxtel eventually embraced a road to recovery. “It took time to turn wounds into wisdom. It took guts. This is a cruel injury. It is so much more than not being able to walk,” she tells us. With the aid of adaptive technology, she got back on her skis. She took up waterskiing, rock climbing, kayaking and hang gliding. But still, she couldn’t bear weight on her legs or walk. Walking seems easy to most of us, because the action is built-in; it is automatic. In reality, however, walking is a highly complex motion involving many different muscles that must contract in a precisely timed sequence. Once the spinal cord can no longer orchestrate this motion, it is exceedingly hard to replicate. Walking, for Boxtel, was arguably a pipe dream. And so she sat for 21 years. © 2013 Scientific American
By Athena Andreadis Recently, two studies surfaced almost simultaneously that led to exclamations of “Vulcan mind meld!”, “Zombie armies!” and “Brains in jars!” One is the announcement by Rajesh Rao and Andrea Stocco of Washington U. that they “achieved the first human-to-human brain interface”. The other is the Nature paper by Madeline Lancaster et al about stem-cell-derived “organoids” that mimic early developmental aspects of the human cortex. My condensed evaluation: the latter is far more interesting and promising than the former, which doesn’t quite do what people (want to) think it’s doing. The purported result of brain interfacing hit many hot buttons that have been staples of science fiction and Stephen King novels: primarily telepathy, with its fictional potential for non-consensual control. Essentially, the sender’s EEG (electroencephalogram) output was linked to the receiver’s TMS (transcranial magnetic stimulation) input. What the experiment actually did is not send a thought but induce a muscle twitch; nothing novel, given the known properties of the two technologies. The conditions were severely constrained to produce the desired result and I suspect the outcome was independent of the stimulus details: the EEG simply recorded that a signal had been produced and the TMS apparatus was positioned so that a signal would elicit a movement of the right hand. Since both sender and receiver were poised over a keyboard operating a video game, the twitch was sufficient to press the space bar, programmed by the game to fire a cannon. © 2013 Scientific American
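Andreadis's point, that only "signal produced / no signal" crosses the link, can be made concrete. A minimal sketch of such a one-bit pipeline (the band-power threshold rule and all numbers here are invented for illustration, not the Rao–Stocco setup's actual parameters):

```python
def eeg_event_detected(band_power, baseline, ratio=1.5):
    """One-bit detector: did motor-imagery band power rise above baseline?"""
    return band_power > baseline * ratio

def run_trial(band_power, baseline=10.0):
    """If the EEG side fires, send a TMS trigger. Nothing about the
    content of the sender's thought crosses the link; the receiver's
    twitch is fixed by where the coil sits over the motor cortex."""
    if eeg_event_detected(band_power, baseline):
        return "TMS_trigger -> hand twitch -> space bar"
    return "no trigger"

print(run_trial(18.0))  # → TMS_trigger -> hand twitch -> space bar
print(run_trial(9.0))   # → no trigger
```

Seen this way, the channel carries one bit per trial, which is why the demonstration is better described as a remote trigger than as transmitted thought.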
Related chapters from BP7e: Chapter 11: Motor Control and Plasticity; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM:Chapter 5: The Sensorimotor System; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 18623 - Posted: 09.10.2013
American researchers say they’ve performed what they believe is the first ever human-to-human brain interface, where one person was able to send a brain signal to trigger the hand motions of another person. “It was both exciting and eerie to watch an imagined action from my brain get translated into actual action by another brain,” said Rajesh Rao, a professor of computer science and engineering at the University of Washington, in a statement. Previous studies have demonstrated brain-to-brain transmission between rats, and one between a human and a rat. Rao was able to send a brain signal through the internet – utilizing electrical brain recordings and a form of magnetic stimulation – to the other side of the university campus to his colleague Andrea Stocco, an assistant professor of psychology, triggering Stocco’s finger to move on a keyboard. “The internet was a way to connect computers, and now it can be a way to connect brains,” said Stocco. “We want to take the knowledge of a brain and transmit it directly from brain to brain.” On Aug. 12, Rao sat in his lab with a cap on his head. The cap had electrodes hooked up to an electroencephalography machine, which reads the brain’s electrical activity. Meanwhile, Stocco was at his lab across campus, wearing a similar cap which had a transcranial magnetic stimulation coil placed over his left motor cortex – the part of the brain that controls hand movement. Rao looked at a computer and in his mind, he played a video game. When he was supposed to fire a cannon at a target, he imagined moving his right hand, which stayed motionless. Stocco, almost instantaneously, moved his right index finger to push the space bar on the keyboard in front of him. Only simple brain signals, not thoughts, were transmitted. “This was basically a one-way flow of information from my brain to his,” said Rao. © CBC 2013
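The pipeline described above can be reduced to a very small amount of logic: an EEG classifier on the sender's side detects motor imagery, and a single trigger bit travels over the network to fire the receiver's TMS pulse. The sketch below is purely illustrative; the function names, the power-threshold detector, and the one-byte trigger encoding are all assumptions for the sake of the example, not the actual University of Washington setup.

```python
def band_power(samples):
    """Mean squared amplitude of an EEG window (a crude power estimate)."""
    return sum(s * s for s in samples) / len(samples)

def detect_imagined_movement(samples, threshold=0.5):
    """Return True when the EEG window's power crosses the threshold.

    Real motor-imagery detection uses band-specific filtering (e.g.
    mu-rhythm changes over motor cortex), but as both articles note, the
    only information conveyed is that *a* signal occurred -- one bit.
    """
    return band_power(samples) > threshold

def make_trigger(detected):
    """Encode the one-bit message that would drive the receiver's TMS pulse."""
    return b"\x01" if detected else b"\x00"

# Simulated windows: rest (low amplitude) vs. imagined movement (high amplitude)
rest = [0.1, -0.1, 0.05, -0.05]
imagery = [0.9, -1.1, 1.0, -0.8]

print(make_trigger(detect_imagined_movement(rest)))     # b'\x00' -> no pulse
print(make_trigger(detect_imagined_movement(imagery)))  # b'\x01' -> TMS fires
```

Framed this way, it is easy to see Andreadis's point above: the channel carries a trigger, not a thought, and the "meaning" of the twitch comes entirely from where the TMS coil was positioned and what the space bar was programmed to do.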
Related chapters from BP7e: Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 2: Cells and Structures: The Anatomy of the Nervous System; Chapter 5: The Sensorimotor System
Link ID: 18583 - Posted: 08.29.2013
by Douglas Heaven It's a cognitive leap forward. IBM can now program the experimental chip it unveiled two years ago. The chips, designed to mimic how our brains work, are set to power computers that handle many streams of input data at once – much like the sensory input we deal with all the time. IBM's TrueNorth computer chips contain memory, processors and communication channels wired up like the synapses, neurons and axons of a brain. A key idea is that the chips can be hooked up into vast grids with many thousands working together in parallel. For certain types of task, such as quickly responding to large amounts of input data from sensors, they are much faster and less power-hungry than standard chips. They could one day replace human reflexes in self-driving cars or power the sensory systems of a robot, for example. But because the chips rewrite the rulebook for how computers are normally put together, they are not easy to program. Dharmendra Modha and his colleagues at IBM Research in San Jose, California, learned this the hard way. The team's first attempts were full of errors: "The programs were very unintuitive and extremely difficult to debug," says Modha. "Things looked hopeless." So they designed a new way of programming. This involves telling the computer how to yoke together the many individual chips in play at once. The IBM team came up with a way to package the functionality of each chip inside blocks of code they call "corelets". © Copyright Reed Business Information Ltd.
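The "corelet" idea above is essentially encapsulation: hide the spike-level wiring of a neurosynaptic core behind a small interface, so larger systems are built by composing blocks rather than by connecting individual neurons. The toy sketch below illustrates only that compositional principle; the class and method names are invented for illustration, and IBM's actual Corelet language is a proprietary system far richer than this.

```python
class Corelet:
    """A composable block exposing an input and an output, with its
    internal spike-level behavior hidden from whoever composes it."""

    def __init__(self, name, fn):
        self.name = name
        self.fn = fn          # internal behavior, opaque to composers
        self.downstream = []  # corelets fed by this block's output

    def connect(self, other):
        """Wire this corelet's output to another corelet's input."""
        self.downstream.append(other)
        return other

    def fire(self, spikes):
        """Run this block on a spike vector and propagate downstream."""
        out = self.fn(spikes)
        for c in self.downstream:
            c.fire(out)
        return out

# Compose a tiny two-stage pipeline: threshold the inputs, then count hits.
results = []
threshold = Corelet("threshold", lambda s: [1 if x > 0.5 else 0 for x in s])
counter = Corelet("counter", lambda s: results.append(sum(s)) or sum(s))
threshold.connect(counter)

threshold.fire([0.9, 0.2, 0.7, 0.4])
print(results)  # [2] -- two input channels crossed the threshold
```

The payoff of this style, as Modha's team found, is debuggability: each block can be tested in isolation, and a grid of thousands of cores becomes a composition of named, reusable pieces instead of an unintuitive tangle of individual connections.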