Chapter 10. Vision: From Eye to Brain
By Susana Martinez-Conde and Stephen L. Macknik

According to a legend that one of us (Martinez-Conde) heard growing up in Spain, anybody can see the Devil's face. All you need to do is to stare at your own face in the mirror at the stroke of midnight, call the Devil's name and the Prince of Darkness will look back at you. Needless to say, I was both fascinated and terrified by the possibility. And I knew this was an experiment I must try.

I waited a day or two to gather my courage, then stayed awake until midnight, got up from my bed, and into the bathroom I went. I closed the door behind me so that my family would not hear me calling out loud for Satan, faced my wide-eyed reflection, made my invocation, and ... nothing happened. I was disenchanted (literally) but also quite relieved.

Now, three decades later, a paper entitled “Strange-Face-in-the-Mirror Illusion,” by vision scientist Giovanni B. Caputo of the University of Urbino in Italy, may explain my lack of results. Caputo asked 50 subjects to gaze at their reflected faces in a mirror for a 10-minute session. After less than a minute, most observers began to perceive the “strange-face illusion.” The participants' descriptions included huge deformations of their own faces; seeing the faces of alive or deceased parents; archetypal faces such as an old woman, child or the portrait of an ancestor; animal faces such as a cat, pig or lion; and even fantastical and monstrous beings. All 50 participants reported feelings of “otherness” when confronted with a face that seemed suddenly unfamiliar. Some felt powerful emotions.

After reading Caputo's article, I had to give “Satan” another try. I suspected that my failure to see anything other than my petrified self in the mirror 30 years ago had to do with suboptimal lighting conditions for the strange-face illusion to take place.

© 2013 Scientific American
Link ID: 18441 - Posted: 08.01.2013
If you look directly at the "spinning" ball in this illusion by Arthur Shapiro, it appears to fall straight down. But if you look off to one side, the ball appears to curve. It swerves because our peripheral visual system cannot process all of its features independently. Instead, our brains combine the downward motion of the ball and its leftward spin to create the impression of a curve. Line-of-sight (or foveal) vision, on the other hand, can extract all the information from the ball's movement, which is why the curve disappears when you view the ball dead-on.
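The blending described above can be sketched numerically: integrate a velocity that mixes the ball's true downward motion with the sideways drift its spin induces in peripheral viewing. The speeds and the blending weight below are illustrative stand-ins, not measured values from the illusion.

```python
# A minimal sketch of the curveball illusion: peripheral vision is assumed
# to blend the ball's global downward motion with the local drift produced
# by its internal spin, so the integrated (perceived) path bends sideways.

def perceived_path(steps=5, fall_speed=1.0, spin_drift=0.5, blend=1.0):
    """Integrate the blended velocity to get the perceived trajectory.

    blend=0 -> foveal viewing (spin ignored, ball falls straight)
    blend=1 -> peripheral viewing (spin drift fully mixed in)
    """
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        x += blend * spin_drift   # sideways component contributed by spin
        y -= fall_speed           # true downward motion
        path.append((x, y))
    return path

foveal = perceived_path(blend=0.0)      # straight drop: x stays at 0
peripheral = perceived_path(blend=1.0)  # drifting descent: x grows each step
```

With blend set to zero the path is a vertical line, matching what direct viewing reports; any nonzero blend produces the slanted descent that peripheral vision perceives.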
Link ID: 18419 - Posted: 07.29.2013
Sleepless night, the moon is bright. People sleep less soundly when there's a full moon, researchers discovered when they analyzed data from a past sleep study.

If you were tossing and turning and howling at your pillow this week, you’re not necessarily a lunatic, at least in the strictest sense of the word. The recent full moon might be to blame for your poor sleep. In the days close to a full moon, people take longer to doze off, sleep less deeply, and sleep for a shorter time, even if the moon isn’t shining in their window, a new study has found.

“A lot of people are going to say, ‘Yeah, I knew this already. I never sleep well during a full moon.’ But this is the first data that really confirms it,” says biologist Christian Cajochen of the University of Basel in Switzerland, lead author of the new work. “There had been numerous studies before, but many were very inconclusive.”

Anecdotal evidence has long suggested that people’s sleep patterns, moods, and even aggression are linked to moon cycles. But past studies of potential lunar effects have been tainted by statistical weaknesses, biases, or inconsistent methods, Cajochen says.

Between 2000 and 2003, he and his colleagues had collected detailed data on the sleep patterns of 33 healthy volunteers for an unrelated study on the effects of aging on sleep. Using electroencephalograms (EEG) that measure brain activity, they recorded how deep and how long each participant’s nightly sleep was in a controlled, laboratory setting. Years after the initial experiment, the scientists were drinking in a pub—during a full moon—and came up with the idea of going back to the data to test for correlations with moon cycles.

© 2012 American Association for the Advancement of Science.
By Susan Milius

When a peacock fans out the iridescent splendor of his train, more than half the time the peahen he’s displaying for isn’t even looking at him. That’s the finding of the first eye-tracking study of birds.

In more than 200 short clips recorded by eye-tracking cameras, four peahens spent less than one-third of the time actually looking directly at a displaying peacock, says evolutionary biologist Jessica Yorzinski of Purdue University in West Lafayette, Ind. When peahens did bother to watch the shimmering male, they mostly looked at the lower zone of his train feathers. The feathers’ upper zone of ornaments may intrigue human observers, but big eyespots there garnered less than 5 percent of the female’s time, Yorzinski and her colleagues report July 24 in the Journal of Experimental Biology.

These data come from a system that coauthor Jason Babcock of Positive Science, an eye-tracking company in New York City, engineered to fit peahens. Small plastic helmets hold two cameras that send information to a backpack of equipment, which wirelessly transmits information to a computer. One infrared head camera focuses on an eye, tracking pupil movements. A second camera points ahead, giving the broad bird’s-eye view.

The rig weighs about 25 grams and takes some getting used to. If a peahen with no experience of helmets gets the full rig, Yorzinski says, “she just droops her head to the ground.” Adding bits of technology gradually, however, let Yorzinski accustom peahens to walking around, and even mating, while cameraed up.

© Society for Science & the Public 2000 - 2013
By Steve Connor

The prospect of restoring the sight of blind people with stem-cell transplants has come a step closer with a study showing that it is possible to grow the light-sensitive cells of the eye in a dish with the help of an artificial retina, scientists said.

For the first time, researchers have not only grown the photoreceptors of the eye in the laboratory from stem cells but transplanted them into the eyes of blind mice, where the cells have become fully integrated into the complex retinal tissue. So far the scientists have been unable to show any improvement in the vision of the blind mice – but they are confident that this will soon be possible in further experiments, which should enable them to move to the first clinical trials on patients within five years.

Professor Robin Ali of University College London, who led the research at the Institute of Ophthalmology and Moorfields Eye Hospital, said that the technique could lead to stem cell transplants for improving the vision of thousands of people with degenerative eye disorders caused by the progressive loss of photosensitive cells.

“The breakthrough here is that we’ve demonstrated we can transplant photoreceptors derived from embryonic stem cells into adult mice. It paves the way to a human clinical trial because now we have a clear route map of how to do it,” Professor Ali said.

The loss of photosensitive cells, the rods and cones of the retina, is a leading cause of sight loss in a number of degenerative eye diseases, such as age-related macular degeneration, retinitis pigmentosa and diabetes-related blindness.

© independent.co.uk
The idea that dogs only see the world in black, white and shades of gray is a common misconception. What’s true, though, is that like most mammals, dogs have only two types of color receptors (commonly called “cones”) in their eyes, unlike humans, who have three.

Each of these cones is sensitive to a different wavelength (i.e. color) of light. By detecting different quantities of each wavelength and combining them, our three cones can transmit various signals for all the hues of the color wheel, much as the three primary colors can be mixed in different amounts to produce them. But because they have only two cones, dogs’ ability to see color is indeed quite limited compared to ours (a rough comparison would be the vision of humans with red-green colorblindness, since they, too, have only two functioning cone types). Whereas a human with full color vision sees red, orange, yellow, green, blue and violet along the spectrum of visible light, a dog sees grayish brown, dark yellow, light yellow, grayish yellow, light blue and dark blue, respectively—essentially, different combinations of the same two colors, yellow and blue.

Consequently, researchers have long believed that dogs seldom rely on colors to discriminate between objects, instead looking solely at items’ darkness or brightness to do so. But a new experiment indicates that this idea, too, is a misconception. As described in a paper published yesterday in the Proceedings of the Royal Society B, a team of Russian researchers recently found that, at least among a small group of eight dogs, the animals were much more likely to recognize a piece of paper by its color than by its brightness level—suggesting that your dog might be aware of some of the colors of everyday objects after all.
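The two-cone limitation can be illustrated with a toy projection: collapse the red and green channels of an RGB color into a single "yellow" signal and keep blue. This is a crude sketch of dichromacy, not a calibrated model of canine photopigments, and the function name is ours.

```python
def dog_view(r, g, b):
    """Crude dichromat projection: red and green merge into one
    'yellow' channel, so any two colors that differ only in their
    red/green balance become indistinguishable."""
    yellow = (r + g) / 2
    return (yellow, yellow, b)

# Pure red and pure green collapse to the same muddy yellow...
red_as_seen = dog_view(255, 0, 0)
green_as_seen = dog_view(0, 255, 0)
assert red_as_seen == green_as_seen  # indistinguishable after projection

# ...while blue survives the projection intact.
blue_as_seen = dog_view(0, 0, 255)
```

This is why the spectrum reduces, as the passage says, to combinations of just yellow and blue: the red-to-green axis is flattened away, and only the blue axis remains independent.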
by Debora MacKenzie Starfish use the light-sensitive organs at the tips of their arms to form images, helping the animals find their way home if they stray from the reef. We have known about the sensors that starfish have at the ends of their arms for 200 years, but no one knew whether they are real eyes that form images or simply structures that detect changes in light intensity. We finally have an answer: they appear to act as real eyes. The discovery is another blow to creationist arguments that something as complex as a human eye could never evolve from simpler structures. The blue sea star (Linckia laevigata), which is widely sold as dried souvenirs, lives on shallow rock reefs in the Indian and Pacific oceans. It can detect light, preferring to come out at night to graze on algae. The light sensitivity has recently been found to be due to pigments called opsins, expressed in cells close to the animal's nerve net. What has not been clear, says Anders Garm at the University of Copenhagen in Denmark, is whether these cells simply tell the starfish about ambient light levels, as happens in more primitive light-sensitive animals, or whether they actually form spatial images. © Copyright Reed Business Information Ltd.
By Ransom Stephens

The video linked here shows how a team of UC Berkeley researchers (two neuroscientists, a bioengineer, two statisticians, and a psychologist) decoded images from brain scans of test subjects watching videos. Yes, by analyzing the scans, they reproduced the videos that the subjects watched. While the reproduced videos are hazy, the ability to reproduce images from the very thoughts of individuals is striking.

Here’s how it works: fMRI (functional magnetic resonance imaging) scans light up pixels in three dimensions, 2 mm cubes called voxels. You’ve seen the images, color maps of the brain. The colors represent the volume of blood flow in each voxel. Since an fMRI scan takes about a second to record, the voxel colors represent the time-averaged blood flow during a given second.

Three different subjects (each of whom was also an author of the paper) watched YouTube videos from within an fMRI scanner. Brain scans were taken as rapidly as possible as they watched a large number of 12-minute videos. Each video was watched one time. The resulting scans were used to “train” models. The models consisted of fits to the 3D scans, and unique models were developed for each person. By fitting a subject’s model to the time-ordered series of scans and then optimizing the model over a large sample of known videos, the model translates between measured blood flow and features in the video like shapes, edges, and motion.

© 2013 UBM Tech
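The train-then-match pipeline described above can be caricatured in a few lines: fit a linear map from video features to voxel responses, then decode a new scan by finding the library clip whose predicted response matches best. All dimensions and data here are synthetic stand-ins, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: each clip is a vector of motion/edge features, and each
# voxel responds as a (noisy) linear mix of those features.
n_clips, n_features, n_voxels = 50, 8, 20
features = rng.normal(size=(n_clips, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
scans = features @ true_weights + 0.1 * rng.normal(size=(n_clips, n_voxels))

# "Train" the per-subject model: least-squares fit from features to voxels.
weights, *_ = np.linalg.lstsq(features, scans, rcond=None)

def decode(scan, library):
    """Return the index of the library clip whose predicted voxel
    response is closest to the measured scan."""
    predictions = library @ weights
    errors = ((predictions - scan) ** 2).sum(axis=1)
    return int(np.argmin(errors))

# Sanity check: a clean scan evoked by clip 7 should decode back to clip 7.
assert decode(features[7] @ true_weights, features) == 7
```

The real study used far richer motion-energy features and Bayesian reconstruction rather than a plain least-squares match, but the logic, predict a response for every candidate clip and keep the best match, is the same.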
By Felicity Muth

I recently came across an article entitled ‘Advantages in exploring a new environment with the left eye in lizards’ and I couldn’t help but read more. In this study, conducted in Italy, scientists caught 44 wall lizards and glued eye patches on to them (using a paper glue that is harmless to the lizards, as they can shed and renew their skin). Half the lizards had their left eye covered, and half had their right eye covered. The lizards were then let into a maze for 20 minutes to see how they fared with turning left and right.

The ones that were allowed to use just their left eye were much faster than those that could just use their right eye at turning both left and right. In addition to this, they made fewer stops, seeming to be less hesitant and indecisive than the right-eyed individuals. However, this was only the case when the lizard had to make a choice between turning left or right, not when they only had the choice to turn one way.

Why might this be the case? Well, like a lot of vertebrates, lizards have lateralized brains. This means that the brain is divided in two halves, and some functions are specialized to one half. The classic example of this in humans is Broca’s area (associated with speech), which is found in the left hemisphere of the brain in 95% of us. Similar to how humans on the whole prefer to use their right hand, it seems that lizards generally prefer to use their left eye. As with humans, lizard optic nerve fibres are crossed over, meaning that control of the left eye comes from the right hemisphere of the brain and vice versa. As these lizards predominantly use their left eye, this indicates that in this species, something in the right side of their brain is specialised in attending to spatial cues.

© 2013 Scientific American
By Sandra G. Boodman

Through repeated painful experience, Shannon Bream had learned to keep her eyedrops close at hand wherever she went — even in the shower. Although they did little to quell the near-constant thrum of pain, the lubricating drops were better than nothing. She clutched the bottle while working out at the gym and kept extras in her purse, car and desk. At night, she set her alarm clock to ring every few hours so she could use them; failing to do so, she had discovered, meant waking up in pain that felt “like someone was stabbing me in the eye,” she said.

“Daytime was okay, I could function, but nights had become an absolute nightmare,” said Bream, who covers the Supreme Court for Fox News. But a doctor’s suggestion that she was exaggerating her worsening misery, coupled with the bleak future presented on the Internet message boards she trolled night after night searching for help, plunged her into despair. “I didn’t think I could live like this for another 40 years,” she recalled thinking during her 18-month ordeal. Ironically, it was those same message boards that helped steer Bream to the doctor who provided a correct diagnosis and a satisfactory resolution.

In the middle of one night in February 2010, Bream, then 39, awoke suddenly with pain in her left eye “so searing it sat me straight up in bed.” She stumbled to the bathroom, where she frantically rummaged through the medicine cabinet and grabbed various eyedrops, hoping to dull the pain. Her eye was tearing profusely; after about three hours, both the pain and tearing subsided.

© 1996-2013 The Washington Post
By JOHN MARKOFF

JERUSALEM — Liat Negrin, an Israeli who has been visually impaired since childhood, walked into a grocery store here recently, picked up a can of vegetables and easily read its label using a simple and unobtrusive camera attached to her glasses.

Ms. Negrin, who has coloboma, a birth defect that perforates a structure of the eye and afflicts about 1 in 10,000 people, is an employee at OrCam, an Israeli start-up that has developed a camera-based system intended to give the visually impaired the ability to both “read” easily and move freely.

Until now reading aids for the visually impaired and the blind have been cumbersome devices that recognize text in restricted environments, or, more recently, have been software applications on smartphones that have limited capabilities. In contrast, the OrCam device is a small camera worn in the style of Google Glass, connected by a thin cable to a portable computer designed to fit in the wearer’s pocket. The system clips on to the wearer’s glasses with a small magnet and uses a bone-conduction speaker to offer clear speech as it reads aloud the words or object pointed to by the user.

The system is designed to both recognize and speak “text in the wild,” a term used to describe newspaper articles as well as bus numbers, and objects as diverse as landmarks, traffic lights and the faces of friends. It currently recognizes English-language text and beginning this week will be sold through the company’s Web site for $2,500, about the cost of a midrange hearing aid. It is the only product, so far, of the privately held company, which is part of the high-tech boom in Israel.

© 2013 The New York Times Company
by Andy Coghlan

An experimental stem-cell treatment has restored the sight of a man blinded by the degeneration of his retinal cells. The man, who is taking part in a trial examining the safety of using human embryonic stem cells (hESCs) to reverse two common causes of blindness, can now see well enough to be allowed to drive.

People undergoing treatment had reported modest improvements in vision earlier in the trial, which began in 2011, but this individual has made especially dramatic progress. The vision in his affected eye went from 20/400 – essentially blind – to 20/40, which is considered sighted.

"There's a guy walking around who was blind, but now can see," says Gary Rabin, chief executive officer of Advanced Cell Technology, the company in Marlborough, Massachusetts, that devised the treatment. "With that sort of vision, you can have a driver's licence."

In all, the company has so far treated 22 patients who either have dry age-related macular degeneration, a common condition that leaves people with a black hole in the centre of their vision, or Stargardt's macular dystrophy, an inherited disease that leads to premature blindness. The company wouldn't tell New Scientist which of the two diseases the participant with the dramatic improvement has.

In both diseases, people gradually lose retinal pigment epithelial (RPE) cells. These are essential for vision as they recycle protein and lipid debris that accumulates on the retina, and supply nutrients and energy to photoreceptors – the cells that capture light and transmit signals to the brain.

© Copyright Reed Business Information Ltd.
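The 20/400 and 20/40 figures are Snellen fractions: seeing at 20 feet what a typical eye can see at the denominator distance. A quick sketch of the conversion to decimal acuity (the function name is ours, not from the article):

```python
def decimal_acuity(snellen):
    """Convert a Snellen fraction like '20/400' to decimal acuity.

    The numerator is the test distance (20 ft); the denominator is the
    distance at which a typical eye could read the same line. Smaller
    decimal values mean worse vision; 1.0 is 'normal' 20/20.
    """
    numerator, denominator = snellen.split("/")
    return int(numerator) / int(denominator)

before = decimal_acuity("20/400")  # 0.05 -- the 'essentially blind' level
after = decimal_acuity("20/40")    # 0.5  -- the level quoted as sighted
```

So the reported improvement is a tenfold gain in decimal acuity, which is why it cleared the driving threshold.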
By SUSANA MARTINEZ-CONDE

YOUR eyes are the sharks of the human body: they never stop moving. In the past minute alone, your eyes made as many as 240 quick movements called “saccades” (French for “jolts”). In your waking hours today, you will very likely make some 200,000 of them, give or take a few thousand. When you sleep, your eyes keep moving — though in different ways and at varying speeds, depending on the stage of sleep.

A portion of our eye movements we do consciously and are at least aware of on some level: when we follow a moving bird or plane across the sky with our gaze, for instance. But most of these tiny back-and-forths and ups-and-downs — split-second moves that would make the Flying Karamazov Brothers weep with jealousy — are unconscious and nearly imperceptible to us. Our brain suppresses the feeling of our eye jumps, to avoid the sensation that the world is constantly quaking. Even when we think our gazes are petrified, in fact, we are still making eye motions, including tiny saccades called “microsaccades” — between 60 and 120 of them per minute. Just as we don’t notice most of our breathing, we are almost wholly unaware of this frenetic, nonstop ocular activity. Without it, though, we couldn’t see a thing.

Humans are hardly unique in this way. Every known visual system depends on movement: we see things either because they move or because our eyes do. Some of the earliest clues to this came more than two centuries ago. Erasmus Darwin, a grandfather of Charles Darwin, observed in 1794 that staring at a small piece of scarlet silk on white paper for a long time — thereby minimizing (though not stopping) his eye movements — made it grow fainter in color, until it seemed to vanish.

© 2013 The New York Times Company
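The 200,000-a-day figure is a back-of-envelope product of the per-minute rate and waking time; the 16-hour waking day below is our assumption, not the article's.

```python
saccades_per_minute = 240      # the article's upper per-minute figure
waking_minutes = 16 * 60       # assumed 16-hour waking day

per_day = saccades_per_minute * waking_minutes
# 230,400 at the upper rate -- the same order as the quoted "some 200,000";
# a rate nearer 210 per minute lands almost exactly on 200,000.
```

The quoted estimate is therefore consistent with a typical rate somewhat below the 240-per-minute maximum.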
Link ID: 18169 - Posted: 05.20.2013
by Paul Gabrielsen

An insect's compound eye is an engineering marvel: high resolution, wide field of view, and incredible sensitivity to motion, all in a compact package. Now, a new digital camera provides the best-ever imitation of a bug's vision, using new optical materials and techniques. This technology could someday give patrolling surveillance drones the same exquisite vision as a dragonfly on the hunt.

Human eyes and conventional cameras work about the same way. Light enters a single curved lens and resolves into an image on a retina or photosensitive chip. But a bug's eyes are covered with many individual lenses, each connected to light-detecting cells and an optic nerve. These units, called ommatidia, are essentially self-contained minieyes. Ants have a few hundred. Praying mantises have tens of thousands. The semicircular eyes sometimes take up most of an insect's head.

While biologists continue to study compound eyes, materials scientists such as John Rogers try to mimic elements of their design. Many previous attempts to make compound eyes focused light from multiple lenses onto a flat chip, such as the charge-coupled device chips in digital cameras. While flat silicon chips have worked well for digital photography, in biology, "you never see that design," Rogers says. He thinks that a curved system of detectors better imitates biological eyes. In 2008, his lab created a camera designed like a mammal eye, with a concave electronic "retina" at the back. The curved surface enabled a wider field of view without the distortion typical of a wide-angle camera lens. Rogers then turned his attention to the compound eye.

© 2010 American Association for the Advancement of Science.
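Why curvature buys field of view can be seen with a one-line geometric sketch: on a curved array each mini-eye's optical axis fans out a fixed angle from its neighbour's, so total coverage grows with the number of units. The numbers below are illustrative, not taken from the paper.

```python
def coverage_deg(n_units, spacing_deg):
    """Angular field spanned by n_units detectors whose optical axes
    fan out spacing_deg apart, as on a curved compound eye.

    A flat chip corresponds to spacing_deg = 0: every detector shares
    the same viewing direction no matter how many there are, so extra
    units add resolution but no field of view.
    """
    return (n_units - 1) * spacing_deg

flat = coverage_deg(181, 0.0)     # 0 degrees: all axes parallel
curved = coverage_deg(181, 1.0)   # 180 degrees: a full hemisphere of view
```

This is the essence of the design trade Rogers describes: a flat sensor needs a distorting wide-angle lens to widen its view, while a curved array gets the width geometrically.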
Link ID: 18110 - Posted: 05.02.2013
By Michelle Roberts, Health editor, BBC News online

Canadian doctors say they have found an inventive way to treat lazy eye - playing the Tetris video game. The McGill University team discovered the popular tile-matching puzzle could train both eyes to work together.

In a small study of 18 adults, published in Current Biology, it worked better than conventional patching of the good eye to make the weak one work harder. The researchers now want to test if it would be a good way to treat children with the same condition. UK studies are already under way.

An estimated one in 50 children has lazy eye, known medically as amblyopia. It happens when the vision in one eye does not develop properly, and is often accompanied by a squint - where the eyes do not look in the same direction. Without treatment it can lead to a permanent loss of vision in the weak eye, which is why doctors try to intervene early.

Normally, the treatment is to cover the strong eye with a patch so that the child is forced to use their lazy eye. The child must wear the patch for much of the day over many months, which can be frustrating and unpleasant.

BBC © 2013
By Breanna Draxler

When you lose something important—a child, your wallet, the keys—your brain kicks into overdrive to find the missing object. But that’s not just a matter of extra concentration. Researchers have found that in these intense search situations your brain actually rallies extra visual processing troops (and even some other non-visual parts of the brain) to get the job done.

It has to do with the way your brain processes images in the first place. When you see objects, your brain sorts them into broad categories—about 1,000 of them. The various elements we perceive trigger a pattern of different categorical areas in our brains. For example, if you see a woman carrying an umbrella while walking her dog in the park, your brain might catalog it as “people,” “tools” and “animals.”

But when you lose something, your brain reacts a little differently. It expands the category of the object you’re looking for to include related categories and turns down the perception of other, non-related categories, to allow you to focus more intently on the object of interest.

To see what this altered categorization looked like during a search, researchers at UC Berkeley used functional magnetic resonance imaging (fMRI) to record changes in five people’s brain activity as they looked for objects in movies. The objects they sought were categorized broadly, paralleling how our brains separate items into generalized groups like “vehicles” and “people.” During hour-long search sessions, the researchers found that regardless of whether the participants found the objects they were looking for, their brains cast a wider visual net than they would if they were watching passively.
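The expand-and-suppress behaviour described above can be sketched as a reweighting of category responses during search. The neighbour sets and scaling factors below are illustrative inventions, not values from the study.

```python
# Hypothetical semantic neighbour sets: searching for a target is assumed
# to boost the target category and its neighbours, and suppress the rest.
RELATED = {
    "people": {"people", "animals"},
    "vehicles": {"vehicles", "tools"},
}

def tuned_response(baseline, target):
    """Scale each category's baseline response up if it is semantically
    related to the search target, and down otherwise -- a toy version
    of the 'wider visual net' cast during active search."""
    neighbours = RELATED.get(target, {target})
    return {
        category: response * (1.5 if category in neighbours else 0.7)
        for category, response in baseline.items()
    }

passive = {"people": 1.0, "animals": 1.0, "vehicles": 1.0}
searching = tuned_response(passive, "people")
# "people" and "animals" are boosted; "vehicles" is turned down.
```

The point of the sketch is only the shape of the effect: the same stimulus evokes different category responses depending on what the observer is hunting for.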
By ERIC R. KANDEL

THIS month, President Obama unveiled a breathtakingly ambitious initiative to map the human brain, the ultimate goal of which is to understand the workings of the human mind in biological terms. Many of the insights that have brought us to this point arose from the merger over the past 50 years of cognitive psychology, the science of mind, and neuroscience, the science of the brain. The discipline that has emerged now seeks to understand the human mind as a set of functions carried out by the brain.

This new approach to the science of mind not only promises to offer a deeper understanding of what makes us who we are, but also opens dialogues with other areas of study — conversations that may help make science part of our common cultural experience.

Consider what we can learn about the mind by examining how we view figurative art. In a recently published book, I tried to explore this question by focusing on portraiture, because we are now beginning to understand how our brains respond to the facial expressions and bodily postures of others.

The portraiture that flourished in Vienna at the turn of the 20th century is a good place to start. Not only does this modernist school hold a prominent place in the history of art, it consists of just three major artists — Gustav Klimt, Oskar Kokoschka and Egon Schiele — which makes it easier to study in depth. As a group, these artists sought to depict the unconscious, instinctual strivings of the people in their portraits, but each painter developed a distinctive way of using facial expressions and hand and body gestures to communicate those mental processes.

© 2013 The New York Times Company
By James Gallagher, Health and science reporter, BBC News

Eye drops designed to lower cholesterol may be able to prevent one of the most common forms of blindness, according to US researchers. They showed how high cholesterol levels could affect the immune system and lead to macular degeneration. Tests on mice and humans, published in the journal Cell Metabolism, showed that immune cells became destructive when they were clogged with fats. Others cautioned that the research was still at an early stage.

The macula is the sweet spot in the eye which is responsible for fine detail. It is essential for reading, driving and recognising people's faces. Macular degeneration is more common in old age. It starts in a "dry" form in which the light-sensing cells in the eye become damaged, but can progress into the far more threatening "wet" version, when newly formed blood vessels can rapidly cause blindness.

Doctors at the Washington University School of Medicine investigated the role of macrophages, a part of the immune system, in the transition from the dry to the wet form of the disease. One of the researchers, Dr Rajendra Apte, said the role of macrophages changed and they triggered the production of new blood vessels. "Instead of being protective, they accelerate the disease, but we didn't understand why they switched to become the bad cells," he told the BBC.

Normally the cells can "eat" fatty deposits and send them back into the blood. However, their research showed that older macrophages struggle. They could still eat the fats, but they could not expel them. So they became "bloated", causing inflammation which in turn led to the creation of new blood vessels.

BBC © 2013
Link ID: 17985 - Posted: 04.03.2013
By DOUGLAS QUENQUA

A new study suggests that primates’ ability to see in three colors may not have evolved as a result of daytime living, as has long been thought. The findings, published in the journal Proceedings of the Royal Society B, are based on a genetic examination of tarsiers, the nocturnal, saucer-eyed primates that long ago branched off from monkeys, apes and humans.

By analyzing the genes that encode photopigments in the eyes of modern tarsiers, the researchers concluded that the last ancestor that all tarsiers had in common had highly acute three-color vision, much like that of modern-day primates. Such vision would normally indicate a daytime lifestyle. But fossils show that the tarsier ancestor was also nocturnal, strongly suggesting that the ability to see in three colors somehow predated the shift to daytime living. The coexistence of the two normally incompatible traits suggests that primates were able to function during twilight or bright moonlight for a time before making the transition to a fully diurnal existence.

“Today there is no mammal we know of that has trichromatic vision that lives during night,” said an author of the study, Nathaniel J. Dominy, associate professor of anthropology at Dartmouth. “And if there’s a pattern that exists today, the safest thing to do is assume the same pattern existed in the past.

“We think that tarsiers may have been active under relatively bright light conditions at dark times of the day,” he added. “Very bright moonlight is bright enough for your cones to operate.”

© 2013 The New York Times Company
By C. CLAIBORNE RAY

Q. Can cataracts grow back after they have been removed?

A. “Once a cataract is removed, it cannot grow back,” said Dr. Jessica B. Ciralsky, an ophthalmologist at NewYork-Presbyterian Hospital/Weill Cornell Medical Center. Blurred vision may develop after cataract surgery, mimicking the symptoms of the original cataract. This is not a recurrence of the cataract and is from a condition that is easily treated, said Dr. Ciralsky, who is a cornea and cataract specialist.

Cataracts, which affect about 22 million Americans over 40, are a clouding of the eye’s naturally clear crystalline lens. Besides blurred vision, the symptoms include glare and difficulty driving at night. In cataract surgery, the entire cataract is removed and an artificial lens is implanted in its place; the capsule that held the cataract is left intact to provide support for the new lens.

After surgery, patients may develop a condition called posterior capsular opacification, which is often referred to as a secondary cataract. “This is a misnomer,” Dr. Ciralsky said. “The cataract has not actually grown back.” Instead, she explained, in about 20 percent of patients, the capsule that once supported the cataract has become cloudy, or opacified. A simple laser procedure done in the office can treat the problem effectively.

© 2013 The New York Times Company
Link ID: 17979 - Posted: 04.02.2013