Links for Keyword: Vision

Links 61 - 80 of 803

By Sandra G. Boodman - Amy Epstein Gluck remembers how relieved she felt when it seemed that the vision of her youngest child, 9-month-old Sam, might turn out to be normal. Months earlier, doctors had worried that he was blind, possibly as the result of an inherited disorder or a brain tumor. But subsequent tests and consultations with pediatric specialists in Washington and Baltimore instead suggested a temporary developmental delay. Epstein Gluck and her husband, Ira Gluck, were so thrilled with Sam’s progress that they threw a big party to celebrate the end of an arduous year and, they hoped, their son’s frightening problem. But two months later, on Sam’s first birthday in February 2006, the pediatric ophthalmologist who had been treating him delivered news that made it clear a celebration had been premature. “It was such a blow,” Epstein Gluck recalled. On the way to Johns Hopkins, the couple had discussed finding a specialist closer to their Bethesda home, assuming they no longer needed a neuro-ophthalmologist. The ride home was somber: “I was so upset I couldn’t even recount the conversation,” she said. “I had thought we were done.” Instead, they were struggling with the implications of an unexpected finding that, more than a year later, would culminate in a new diagnosis. In March 2005, when Sam was about 5 weeks old, his mother noticed that his eyes would periodically oscillate back and forth. Epstein Gluck, whose other children were then 3 and 5, called her pediatrician. © 1996-2013 The Washington Post

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18626 - Posted: 09.10.2013

By Neuroskeptic An intriguing new paper in the Journal of Neuroscience introduces a new optical illusion – and, potentially, a new way to see one’s own brain activity. The article is called “The Flickering Wheel Illusion: When α Rhythms Make a Static Wheel Flicker”, by Sokoliuk and VanRullen. Here’s the illusion: It’s a simple black and white “wheel” with 32 spokes. To see the illusion, get the wheel in your peripheral vision. Look around the edge of your screen and maybe a bit beyond – you should find a ‘sweet spot’ at which the center of the wheel starts to ‘flicker’ on and off like a strobe light. Remarkably, it even works as an afterimage. Find a ‘sweet spot’, stare at that spot for a minute, then look at a blank white wall. You should briefly see a (color-reversed) image of the wheel, and it flickers like the real one (I can confirm it works for me). By itself, this is just a cool illusion. There are lots of those around. What makes it neuroscientifically interesting is that – according to Sokoliuk and VanRullen – that flickering reflects brain alpha waves. First some background. Alpha (α) waves are rhythmical electrical fields generated in the brain. They cycle with a frequency of about 10 Hz (ten times per second) and are strongest when you have your eyes closed, but are still present whenever you’re awake. When Hans Berger invented the electroencephalograph (EEG) and hooked it up to the first subjects in 1924, these waves were the first thing he noticed – hence, “alpha”. They’re noticeable because they’re both strong and consistent. They’re buzzing through your brain right now. But this raises a mystery – why don’t we see them? Alpha waves are generated by rhythmical changes in neuronal activity, mainly centered on the occipital cortex. Occipital activity is what makes us see things. So why don’t we see something roughly 10 times every second?
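
If you want to try the illusion right away, the stimulus is easy to approximate in a few lines of code. The sketch below is only a rough stand-in for the published figure, not the authors' actual stimulus (whether "32 spokes" means 32 alternating sectors or 32 black sectors is an assumption here; adjust n_sectors to taste). It renders a high-contrast radial wheel that you can display full-screen and view out of the corner of your eye.

```python
# A minimal sketch (not the authors' exact stimulus): render a high-contrast
# radial "wheel" like the one described, then view it in peripheral vision.
import numpy as np
import matplotlib.pyplot as plt

size = 600                      # image size in pixels (arbitrary choice)
n_sectors = 32                  # "32 spokes" from the article (assumption)
y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
theta = np.arctan2(y, x)        # angle of each pixel around the center
radius = np.hypot(x, y)

# Alternate black/white every (2*pi / n_sectors) radians; blank outside the disc.
wheel = (np.floor(theta / (2 * np.pi / n_sectors)) % 2).astype(float)
wheel[radius > 1] = 1.0         # white background outside the wheel

plt.imshow(wheel, cmap="gray")
plt.axis("off")
plt.show()
```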

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18543 - Posted: 08.22.2013

The EnChroma Color Blindness Test measures the type and extent of color vision deficiency. The test takes 2 to 5 minutes to complete. Your test results may be anonymously recorded on our server for quality assurance purposes. This test is not a medical diagnosis. Please consult an eye care professional for more information regarding color vision deficiency. Copyright 2013 EnChroma, Inc.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18525 - Posted: 08.19.2013

A few weeks back, I wrote about special lenses that were developed to give doctors “a clearer view of veins and vasculature, bruising, cyanosis, pallor, rashes, erythema, and other variations in blood O2 level, and concentration,” especially in bright light. But these lenses turned out to have an unintended side effect: they “may cure red-green colorblindness.” I’m severely red-green colorblind, so I was eager to try these $300 lenses. Turns out they didn’t help me; the company said that my colorblindness is too severe. They have helped many others, though (their Amazon reviews make that clear). After my column appeared, I heard from another company that makes color-enhancing glasses — this time, specifically for red-green colorblind folks. The company’s called EnChroma, and the EnChroma Cx sunglasses are a heartbeat-skipping $600 a pair. “Our lenses are specifically designed to address color blindness,” the company wrote to me, “and utilize a 100+ layer dielectric coating we engineered for this precise purpose by keeping the physiology of the eyes of colorblind people in mind.” I asked to try out a pair. (You can, too: there’s a 30-day money-back guarantee.) To begin, you figure out which kind of colorblindness you have — Protan or Deutan — by taking the test at enchroma.com. Turns out I have something called Strong Protan. (“Protanomaly is a type of red-green color vision deficiency related to a genetic anomaly of the L-cone (i.e. the red cone).”) I’d never heard of it, but whatever. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18524 - Posted: 08.19.2013

by Sara Reardon It's a case of hear no object, see no object. Hearing the name of an object appears to influence whether or not we see it, suggesting that hearing and vision might be even more intertwined than previously thought. Studies of how the brain files away concepts suggest that words and images are tightly coupled. What is not clear, says Gary Lupyan of the University of Wisconsin in Madison, is whether language and vision work together to help you interpret what you're seeing, or whether words can actually change what you see. Lupyan and Emily Ward of Yale University used a technique called continuous flash suppression (CFS) on 20 volunteers to test whether a spoken prompt could make them detect an image that they were not consciously aware they were seeing. CFS works by displaying different images to the right and left eyes: one eye might be shown a simple shape or an animal, for example, while the other is shown visual "noise" in the form of bright, randomly flickering shapes. The noise monopolises the brain, leaving so little processing power for the other image that the person does not consciously register it, making it effectively invisible. In a series of CFS experiments, the researchers asked volunteers whether or not they could see a specific object, such as a dog. Sometimes it was displayed, sometimes not. When it was not displayed or when the image was of another animal such as a zebra or kangaroo, the volunteers typically reported seeing nothing. But when a dog was displayed and the question mentioned a dog, the volunteers were significantly more likely to become aware of it. "If you hear a word, that greases the wheels of perception," says Lupyan: the visual system becomes primed for anything to do with dogs. © Copyright Reed Business Information Ltd.
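
Continuous flash suppression needs genuinely dichoptic presentation (a mirror stereoscope or similar), which no code snippet can supply, but the structure of a trial is simple to sketch. The toy example below is not Lupyan and Ward's stimulus code; the frame size, the roughly 10 Hz refresh rate and the rectangle counts are assumptions chosen only to illustrate the idea of a rapidly refreshed high-contrast "Mondrian" stream for one eye and a single faint target for the other.

```python
# Toy sketch of the continuous flash suppression (CFS) frame structure described
# in the article -- illustration only, not the researchers' actual stimulus code.
import numpy as np

rng = np.random.default_rng(0)
size = 256          # frame size in pixels (arbitrary)
flash_hz = 10       # noise refresh rate; ~10 Hz is an assumption
trial_s = 3         # trial length in seconds (arbitrary)

def mondrian_frame():
    """One high-contrast noise frame: many random bright/dark rectangles."""
    frame = np.full((size, size), 0.5)
    for _ in range(60):
        w, h = rng.integers(10, 60, size=2)
        x, y = rng.integers(0, size - 60, size=2)
        frame[y:y + h, x:x + w] = rng.random()   # random gray level
    return frame

# Stream shown to the "suppressing" eye: a new noise frame every 1/flash_hz s.
noise_stream = [mondrian_frame() for _ in range(flash_hz * trial_s)]

# Image shown to the other eye: a faint (low-contrast) target on a gray field.
target = np.full((size, size), 0.5)
target[96:160, 96:160] += 0.05    # barely-above-background square "object"

print(len(noise_stream), "noise frames vs. 1 static low-contrast target frame")
```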

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 15: Language and Our Divided Brain
Link ID: 18506 - Posted: 08.14.2013

By Melinda Wenner Moyer Our world is determined by the limits of our five senses. We can’t hear pitches that are too high or low, nor can we see ultraviolet or infrared light—even though these phenomena are not fundamentally different from the sounds and sights that our ears and eyes can detect. But what if it were possible to widen our sensory boundaries beyond the physical limitations of our anatomy? In a study published recently in Nature Communications, scientists used brain implants to teach rats to “see” infrared light, which is normally invisible to them. The implications are tremendous: if the brain is so flexible it can learn to process novel sensory signals, people could one day feel touch through prosthetic limbs, see heat via infrared light or even develop a sixth sense for magnetic north. Miguel Nicolelis, a neurobiologist at Duke University, and his colleagues trained six rats to poke their nose inside a port when the LED light above it lit up. Then the researchers surgically attached infrared cameras to the rats’ heads and wired the cameras to electrodes they implanted into the rats’ primary somatosensory cortex, a brain region responsible for sensory processing. When the camera detected infrared light, it stimulated the animals’ whisker neurons. The stimulation became stronger the closer the rats got to the infrared light or the more they turned their head toward it, just as brain activation responds to light seen by the eyes. Then the scientists let the animals loose in their chambers, this time using infrared light instead of LEDs to signal the ports the rats should visit. At first, none of the rats used the infrared signals. But after about 26 days of practice, all six had learned how to use the once invisible light to find the right ports. © 2013 Scientific American
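
The article describes the camera-to-cortex mapping only qualitatively (stimulation grows stronger as the rat gets closer to the infrared source or turns its head toward it) without giving the actual scaling, so the function below is a purely hypothetical sketch of that idea; the linear distance term, the cosine heading term and the 1 m range are all assumptions, not values from the study.

```python
# Hypothetical sketch of the sensor-to-stimulation mapping described in the
# article: intensity rises with proximity and with how directly the head points
# at the infrared source. The exact scaling used in the study is not given, so
# this simple mapping is an assumption for illustration only.
import math

def stimulation_level(distance_m: float, head_angle_deg: float,
                      max_level: float = 1.0) -> float:
    """Return a stimulation intensity in [0, max_level].

    distance_m     -- distance from the infrared source (assumed 0-1 m range)
    head_angle_deg -- angle between head direction and the source (0 = facing it)
    """
    proximity = max(0.0, 1.0 - min(distance_m, 1.0))           # closer -> stronger
    facing = max(0.0, math.cos(math.radians(head_angle_deg)))  # facing -> stronger
    return max_level * proximity * facing

# Example: a rat 20 cm away with its head turned 30 degrees off the source.
print(round(stimulation_level(0.2, 30.0), 3))
```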

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 5: The Sensorimotor System
Link ID: 18494 - Posted: 08.13.2013

Here's what the Swedish artist Oscar Reutersvard did. In 1934, he got himself a pen and paper and drew cubes, like this. He called this final version "Impossible Triangle of Opus 1 No. 293aa." I don't know what the "293aa" is about, but he was right about "impossible." An arrangement like this cannot take place in the physical universe as we know it. You follow the bottom row along with your eyes, then add another row, but when the third row pops in, where are you? Nowhere you have ever been before. At some step in the process you've been tricked, but it's very, very hard to say where the trick is, because what's happening is your brain wants to see all these boxes as units of a single triangle and while the parts simply won't gel, your brain insists on seeing them as a whole. It's YOU who's playing the trick, and you can't un-be you. So you are your own prisoner. At first, this feels like a neurological trap, like a lie you can't not believe. But when you think about it for a bit, it's the opposite, it's a release. Twenty years later, the mathematician/physicist Roger Penrose (and his dad, psychologist Lionel Penrose) did it again. They hadn't seen Reutersvard's triangle. Theirs was drawn in perspective, which makes it even more challenging. Here's my version of their Penrose Triangle. What's cool about this? I'm going to paraphrase science writer John D. Barrow, in several places: We know that these drawings can't exist in the physical world. Even as we look at them, particularly when we look at them, we know they are impossible. And yet, we can imagine them anyway. Our brains, it turns out, are not prisoners of the world we live in; we can fly free! We can, any time we like, create the impossible. ©2013 NPR

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 14: Attention and Consciousness
Link ID: 18484 - Posted: 08.10.2013

Richard Johnston Scientists have mapped the dense interconnections and neuronal activity of mouse and fruitfly visual networks. The research teams, whose work is published in three separate studies today in Nature1–3, also created three-dimensional (3D) reconstructions, shown in the video above. All three studies interrogate parts of the central nervous system located in the eyes. In one, Moritz Helmstaedter, a neurobiologist at the Max Planck Institute of Neurobiology in Martinsried, Germany, and his collaborators created a complete 3D map of a 950-cell section of a mouse retina, including the interconnections among those neuronal cells. To do so, the team tapped into the help of more than 200 students, who collectively spent more than 20,000 hours processing the images1. The two other studies investigated how the retinas of the fruitfly (Drosophila melanogaster) detect motion. Shin-ya Takemura, a neuroscientist at the Howard Hughes Medical Institute in Ashburn, Virginia, and his collaborators mapped four neuronal circuits associated with motion perception and found that each is wired for detecting motion in a particular direction — up, down, left or right2. In the third study, Matthew Maisak, a computational biologist at the Max Planck Institute of Neurobiology, and his colleagues mapped the same four cellular networks and tagged the cells of each with protein markers that fluoresce in red, green, blue or yellow in response to stimulation with light3. © 2013 Nature Publishing Group,

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18479 - Posted: 08.08.2013

Researchers have achieved dynamic, atomic-scale views of a protein needed to maintain the transparency of the lens in the human eye. The work, funded in part by the National Institutes of Health, could lead to new insights and drugs for treating cataract and a variety of other health conditions. Aquaporin proteins form water channels between cells and are found in many tissues, but aquaporin zero (AQP0) is found only in the mammalian lens, which focuses light onto the retina, at the back of the eye. The lens is primarily made up of unique cells called lens fibers that contain little else besides water and proteins called crystallins. Tight packing of these fibers and of the crystallin proteins within them helps create a uniform medium that allows light to pass through the lens, almost as if it were glass. Abnormal development or age-related changes in the lens can lead to cataract — a clouding of the lens that causes vision loss. Besides age, other risk factors for cataract include smoking, diabetes, and genetic factors. Mutations in the AQP0 gene can cause congenital cataract and may increase the risk of age-related cataract. “The AQP0 channel is believed to play a vital role in maintaining the transparency of the lens and in regulating water volume in the lens fibers, so understanding the molecular details of how water flows through the channel could lead to a better understanding of cataract,” said Dr. Houmam Araj, who oversees programs on lens, cataract and oculomotor systems at NIH’s National Eye Institute (NEI), which helped fund the research.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18465 - Posted: 08.06.2013

By Susana Martinez-Conde and Stephen L. Macknik According to a legend that one of us (Martinez-Conde) heard growing up in Spain, anybody can see the Devil's face. All you need to do is to stare at your own face in the mirror at the stroke of midnight, call the Devil's name and the Prince of Darkness will look back at you. Needless to say, I was both fascinated and terrified by the possibility. And I knew this was an experiment I must try. I waited a day or two to gather my courage, then stayed awake until midnight, got up from my bed, and into the bathroom I went. I closed the door behind me so that my family would not hear me calling out loud for Satan, faced my wide-eyed reflection, made my invocation, and ... nothing happened. I was disenchanted (literally) but also quite relieved. Now, three decades later, a paper entitled “Strange-Face-in-the-Mirror Illusion,” by vision scientist Giovanni B. Caputo of the University of Urbino in Italy, may explain my lack of results. Caputo asked 50 subjects to gaze at their reflected faces in a mirror for a 10-minute session. After less than a minute, most observers began to perceive the “strange-face illusion.” The participants' descriptions included huge deformations of their own faces; seeing the faces of alive or deceased parents; archetypal faces such as an old woman, child or the portrait of an ancestor; animal faces such as a cat, pig or lion; and even fantastical and monstrous beings. All 50 participants reported feelings of “otherness” when confronted with a face that seemed suddenly unfamiliar. Some felt powerful emotions. After reading Caputo's article, I had to give “Satan” another try. I suspected that my failure to see anything other than my petrified self in the mirror 30 years ago had to do with suboptimal lighting conditions for the strange-face illusion to take place. © 2013 Scientific American

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18441 - Posted: 08.01.2013

If you look directly at the "spinning" ball in this illusion by Arthur Shapiro, it appears to fall straight down. But if you look to one side, the ball appears to curve to one side. The ball appears to swerve because our peripheral vision system cannot process all of its features independently. Instead, our brains combine the downward motion of the ball and its leftward spin to create the impression of a curve. Line-of-sight (or foveal) vision, on the other hand, can extract all the information from the ball's movement, which is why the curve disappears when you view the ball dead-on.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18419 - Posted: 07.29.2013

Steve Connor - The prospect of restoring the sight of blind people with stem-cell transplants has come a step closer with a study showing that it is possible to grow the light-sensitive cells of the eye in a dish with the help of an artificial retina, scientists said. For the first time, researchers have not only grown the photoreceptors of the eye in the laboratory from stem cells but transplanted them into eyes of blind mice where the cells have become fully integrated into the complex retinal tissue. So far the scientists have been unable to show any improvement in the vision of the blind mice – but they are confident that this will soon be possible in further experiments, which should enable them to move to the first clinical trials on patients within five years. Professor Robin Ali of University College London, who led the research at the Institute of Ophthalmology and Moorfields Eye Hospital, said that the technique could lead to stem cell transplants for improving the vision of thousands of people with degenerative eye disorders caused by the progressive loss of photosensitive cells. “The breakthrough here is that we’ve demonstrated we can transplant photoreceptors derived from embryonic stem cells into adult mice. It paves the way to a human clinical trial because now we have a clear route map of how to do it,” Professor Ali said. The loss of photosensitive cells, the rods and cones of the retina, is a leading cause of sight loss in a number of degenerative eye diseases, such as age-related macular degeneration, retinitis pigmentosa and diabetes-related blindness. © independent.co.uk

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18398 - Posted: 07.23.2013

The idea that dogs only see the world in black, white and shades of gray is a common misconception. What’s true, though, is that like most mammals, dogs only have two types of color receptors (commonly called “cones”) in their eyes, unlike humans, who have three. Each of these cones is sensitive to a different wavelength (i.e. color) of light. By detecting different quantities of each wavelength and combining them, our three cones can transmit various signals for all the hues of the color wheel, the same way the three primary colors can be mixed in different amounts to do the same. But because they only have two cones, dogs’ ability to see color is indeed quite limited compared to ours (a rough comparison would be the vision of humans with red-green colorblindness, since they, too, only have two cones). Whereas a human with full color vision sees red, orange, yellow, green, blue and violet along the spectrum of visible light, a dog sees grayish brown, dark yellow, light yellow, grayish yellow, light blue and dark blue, respectively—essentially, different combinations of the same two colors, yellow and blue. Consequently, researchers have long believed that dogs seldom rely on colors to discriminate between objects, instead looking solely at items’ darkness or brightness to do so. But a new experiment indicates that this idea, too, is a misconception. As described in a paper published yesterday in the Proceedings of the Royal Society B, a team of Russian researchers recently found that, at least among a small group of eight dogs, the animals were much more likely to recognize a piece of paper by its color than its brightness level—suggesting that your dog might be aware of some of the colors of everyday objects after all.
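
One crude way to build an intuition for the difference is to strip the red-green signal out of an ordinary photo. The sketch below rests on a strong simplifying assumption (that averaging the red and green channels roughly approximates what a two-cone, blue-yellow system preserves) and is not a calibrated model of canine vision or of human colorblindness; the filename photo.jpg is just a placeholder.

```python
# Rough illustration only: collapse red-green differences while keeping the
# blue-yellow axis, as a crude stand-in for dichromatic (two-cone) vision.
# This is NOT a calibrated simulation of dog vision or human colorblindness.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg"), dtype=float) / 255.0  # any RGB photo
yellowish = img[..., 0:2].mean(axis=-1)    # average of R and G ("yellow" signal)

dichromat = img.copy()
dichromat[..., 0] = yellowish              # red channel loses its independence
dichromat[..., 1] = yellowish              # green channel too; blue is untouched

Image.fromarray((dichromat * 255).astype(np.uint8)).save("photo_two_cone.jpg")
```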

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18385 - Posted: 07.18.2013

by Debora MacKenzie Starfish use the light-sensitive organs at the tips of their arms to form images, helping the animals find their way home if they stray from the reef. We have known about the sensors that starfish have at the ends of their arms for 200 years, but no one knew whether they are real eyes that form images or simply structures that detect changes in light intensity. We finally have an answer: they appear to act as real eyes. The discovery is another blow to creationist arguments that something as complex as a human eye could never evolve from simpler structures. The blue sea star (Linckia laevigata), which is widely sold as dried souvenirs, lives on shallow rock reefs in the Indian and Pacific oceans. It can detect light, preferring to come out at night to graze on algae. The light sensitivity has recently been found to be due to pigments called opsins, expressed in cells close to the animal's nerve net. What has not been clear, says Anders Garm at the University of Copenhagen in Denmark, is whether these cells simply tell the starfish about ambient light levels, as happens in more primitive light-sensitive animals, or whether they actually form spatial images. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18357 - Posted: 07.08.2013

Ransom Stephens - The video linked here shows how a team of UC Berkeley researchers (two neuroscientists, a bioengineer, two statisticians, and a psychologist) decoded images from brain scans of test subjects watching videos. Yes, by analyzing the scans, they reproduced the videos that the subjects watched. While the reproduced videos are hazy, the ability to reproduce images from the very thoughts of individuals is striking. Here’s how it works: fMRI (functional magnetic resonance imaging) scans light up pixels in three dimensions, 2 mm cubes called voxels. You’ve seen the images, color maps of the brain. The colors represent the volume of blood flow in each voxel. Since an fMRI scan takes about a second to record, the voxel colors represent the time-average blood flow during a given second. Three different subjects (each of whom was also an author of the paper) watched YouTube videos from within an fMRI scanner. Brain scans were taken as rapidly as possible as they watched a large number of 12-minute videos. Each video was watched one time. The resulting scans were used to “train” models. The models consisted of fits to the 3D scans and unique models were developed for each person. By fitting a subject’s model to the time-ordered series of scans and then optimizing the model over a large sample of known videos, the model translates between measured blood flow and features in the video like shapes, edges, and motion. © 2013 UBM Tech,
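
A stripped-down version of that identification scheme can be sketched in a few lines. The toy example below is not the Berkeley group's method (their model uses motion-energy features, hemodynamic delays and a Bayesian reconstruction over a large clip library); it only illustrates the core idea under simple assumptions: fit a linear encoding model from clip features to voxel responses, then identify a new scan by asking which known clip's predicted voxel pattern matches it best.

```python
# Toy sketch of encoding-model identification (not the published method).
# Assumptions: random "features" stand in for real video features, and a
# plain ridge-regression model stands in for the motion-energy model.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_vox = 200, 20, 50, 300

X_train = rng.normal(size=(n_train, n_feat))       # features of training clips
W_true = rng.normal(size=(n_feat, n_vox))          # unknown feature->voxel map
Y_train = X_train @ W_true + rng.normal(scale=2.0, size=(n_train, n_vox))

# Fit a ridge-regression encoding model: predicts voxel responses from features.
lam = 10.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)

# Identification: given a scan evoked by one of n_test known clips, pick the
# clip whose *predicted* voxel pattern correlates best with the observed scan.
X_test = rng.normal(size=(n_test, n_feat))
Y_test = X_test @ W_true + rng.normal(scale=2.0, size=(n_test, n_vox))
Y_pred = X_test @ W_hat

correct = 0
for i, scan in enumerate(Y_test):
    r = [np.corrcoef(scan, pred)[0, 1] for pred in Y_pred]
    correct += int(np.argmax(r) == i)
print(f"identified {correct}/{n_test} scans correctly")
```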

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 18348 - Posted: 07.04.2013

By JOHN MARKOFF JERUSALEM — Liat Negrin, an Israeli who has been visually impaired since childhood, walked into a grocery store here recently, picked up a can of vegetables and easily read its label using a simple and unobtrusive camera attached to her glasses. Ms. Negrin, who has coloboma, a birth defect that perforates a structure of the eye and afflicts about 1 in 10,000 people, is an employee at OrCam, an Israeli start-up that has developed a camera-based system intended to give the visually impaired the ability to both “read” easily and move freely. Until now reading aids for the visually impaired and the blind have been cumbersome devices that recognize text in restricted environments, or, more recently, have been software applications on smartphones that have limited capabilities. In contrast, the OrCam device is a small camera worn in the style of Google Glass, connected by a thin cable to a portable computer designed to fit in the wearer’s pocket. The system clips on to the wearer’s glasses with a small magnet and uses a bone-conduction speaker to offer clear speech as it reads aloud the words or object pointed to by the user. The system is designed to both recognize and speak “text in the wild,” a term used to describe newspaper articles as well as bus numbers, and objects as diverse as landmarks, traffic lights and the faces of friends. It currently recognizes English-language text and beginning this week will be sold through the company’s Web site for $2,500, about the cost of a midrange hearing aid. It is the only product, so far, of the privately held company, which is part of the high-tech boom in Israel. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18226 - Posted: 06.04.2013

by Andy Coghlan An experimental stem-cell treatment has restored the sight of a man blinded by the degeneration of his retinal cells. The man, who is taking part in a trial examining the safety of using human embryonic stem cells (hESCs) to reverse two common causes of blindness, can now see well enough to be allowed to drive. People undergoing treatment had reported modest improvements in vision earlier in the trial, which began in 2011, but this individual has made especially dramatic progress. The vision in his affected eye went from 20/400 – essentially blind – to 20/40, which is considered sighted. "There's a guy walking around who was blind, but now can see," says Gary Rabin, chief executive officer of Advanced Cell Technology, the company in Marlborough, Massachusetts that devised the treatment. "With that sort of vision, you can have a driver's licence." In all, the company has so far treated 22 patients who either have dry age-related macular degeneration, a common condition that leaves people with a black hole in the centre of their vision, or Stargardt's macular dystrophy, an inherited disease that leads to premature blindness. The company wouldn't tell New Scientist which of the two diseases the participant with the dramatic improvement has. In both diseases, people gradually lose retinal pigment epithelial (RPE) cells. These are essential for vision as they recycle protein and lipid debris that accumulates on the retina, and supply nutrients and energy to photoreceptors – the cells that capture light and transmit signals to the brain. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18180 - Posted: 05.21.2013

By SUSANA MARTINEZ-CONDE Your eyes are the sharks of the human body: they never stop moving. In the past minute alone, your eyes made as many as 240 quick movements called “saccades” (French for “jolts”). In your waking hours today, you will very likely make some 200,000 of them, give or take a few thousand. When you sleep, your eyes keep moving — though in different ways and at varying speeds, depending on the stage of sleep. A portion of our eye movements we do consciously and are at least aware of on some level: when we follow a moving bird or plane across the sky with our gaze, for instance. But most of these tiny back-and-forths and ups-and-downs — split-second moves that would make the Flying Karamazov Brothers weep with jealousy — are unconscious and nearly imperceptible to us. Our brain suppresses the feeling of our eye jumps, to avoid the sensation that the world is constantly quaking. Even when we think our gazes are petrified, in fact, we are still making eye motions, including tiny saccades called “microsaccades” — between 60 and 120 of them per minute. Just as we don’t notice most of our breathing, we are almost wholly unaware of this frenetic, nonstop ocular activity. Without it, though, we couldn’t see a thing. Humans are hardly unique in this way. Every known visual system depends on movement: we see things either because they move or because our eyes do. Some of the earliest clues to this came more than two centuries ago. Erasmus Darwin, a grandfather of Charles Darwin, observed in 1794 that staring at a small piece of scarlet silk on white paper for a long time — thereby minimizing (though not stopping) his eye movements — made it grow fainter in color, until it seemed to vanish. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18169 - Posted: 05.20.2013

by Paul Gabrielsen An insect's compound eye is an engineering marvel: high resolution, wide field of view, and incredible sensitivity to motion, all in a compact package. Now, a new digital camera provides the best-ever imitation of a bug's vision, using new optical materials and techniques. This technology could someday give patrolling surveillance drones the same exquisite vision as a dragonfly on the hunt. Human eyes and conventional cameras work about the same way. Light enters a single curved lens and resolves into an image on a retina or photosensitive chip. But a bug's eyes are covered with many individual lenses, each connected to light-detecting cells and an optic nerve. These units, called ommatidia, are essentially self-contained minieyes. Ants have a few hundred. Praying mantises have tens of thousands. The semicircular eyes sometimes take up most of an insect's head. While biologists continue to study compound eyes, materials scientists such as John Rogers try to mimic elements of their design. Many previous attempts to make compound eyes focused light from multiple lenses onto a flat chip, such as the charge-coupled device chips in digital cameras. While flat silicon chips have worked well for digital photography, in biology, "you never see that design," Rogers says. He thinks that a curved system of detectors better imitates biological eyes. In 2008, his lab created a camera designed like a mammal eye, with a concave electronic "retina" at the back. The curved surface enabled a wider field of view without the distortion typical of a wide-angle camera lens. Rogers then turned his attention to the compound eye. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 18110 - Posted: 05.02.2013

By Michelle Roberts, Health editor, BBC News online - Canadian doctors say they have found an inventive way to treat lazy eye - playing the Tetris video game. The McGill University team discovered the popular tile-matching puzzle could train both eyes to work together. In a small study in Current Biology with 18 adults, it worked better than conventional patching of the good eye to make the weak one work harder. The researchers now want to test if it would be a good way to treat children with the same condition. UK studies are already under way. An estimated one in 50 children has lazy eye, known medically as amblyopia. It happens when the vision in one eye does not develop properly, and is often accompanied by a squint - where the eyes do not look in the same direction. Without treatment it can lead to a permanent loss of vision in the weak eye, which is why doctors try to intervene early. Normally, the treatment is to cover the strong eye with a patch so that the child is forced to use their lazy eye. The child must wear the patch for much of the day over many months, which can be frustrating and unpleasant. BBC © 2013

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 13: Memory, Learning, and Development
Link ID: 18062 - Posted: 04.23.2013