Chapter 10. Vision: From Eye to Brain


By Dwayne Godwin, Jorge Cham The brain processes a wealth of visual information in parallel so that we perceive the world around us in the blink of an eye. Dwayne Godwin is a neuroscientist at the Wake Forest University School of Medicine. Jorge Cham draws the comic strip Piled Higher and Deeper at www.phdcomics.com. © 2016 Scientific American

Keyword: ADHD
Link ID: 22689 - Posted: 09.24.2016

Alva Noë Eaters and cooks know that flavor, in the jargon of neuroscientists, is multi-modal. Taste is all important, to be sure. But so is the look of food and its feel in the mouth — not to mention its odor and the noisy crunch, or juicy squelch, that it may or may not make as we bite into it. The perception of flavor demands that we exercise a suite of not only gustatory, but also visual, olfactory, tactile and auditory sensitivities. Neuroscientists are now beginning to grasp some of the ways the brain enables our impressive perceptual power when it comes to food. Traditionally, scientists represent the brain's sensory function in a map where distinct cortical areas are thought of as serving the different senses. But it is increasingly appreciated that brain activity can't quite be segregated in this way. Cells in visual cortex may be activated by tactile stimuli. This is the case, for example, when Braille readers use their fingers to read. These blind readers aren't seeing with their fingers; rather, they are deploying their visual brains to perceive with their hands. And, in a famous series of studies that had a great influence on my thinking on these matters, Mriganka Sur at MIT showed that animals whose retinas were surgically rewired to feed directly into auditory cortex do not hear lights and other visible objects presented to the eyes; rather, they see with their auditory brains. The brain is plastic, and different sensory modalities compete continuously for control over populations of cells. An exciting new paper on the gustatory cortex from the laboratory of Alfredo Fontanini at Stony Brook University shows that there are visual-, auditory-, olfactory- and touch-sensitive cells in the gustatory cortex of rats. There are even some cells that respond to stimuli in more than one modality. But what is more remarkable is that when rats learn to associate non-taste qualities — tones, flashes of light, etc. — with food (sucrose in their study), there is a marked transformation in the gustatory cortex. © 2016 npr

Keyword: Chemical Senses (Smell & Taste); Vision
Link ID: 22675 - Posted: 09.21.2016

By Colin Barras Subtract 8 from 52. Did you see the calculation in your head? While a leading theory suggests our visual experiences are linked to our understanding of numbers, a study of people who have been blind from birth suggests otherwise. The link between vision and number processing is strong. Sighted people can estimate the number of people in a crowd just by looking, for instance, while children who can mentally rotate an object and correctly imagine how it might look from a different angle often develop better mathematical skills. “It’s actually hard to think of a situation when you might process numbers through any modality other than vision,” says Shipra Kanjlia at Johns Hopkins University in Baltimore, Maryland. But blind people can do maths too. To understand how they might compensate for their lack of visual experience, Kanjlia and her colleagues asked 36 volunteers – 17 of whom had been blind from birth – to do simple mental arithmetic inside an fMRI scanner. To level the playing field, the sighted participants wore blindfolds. We know that a region of the brain called the intraparietal sulcus (IPS) is involved in number processing in sighted people, and brain scans revealed that the same area is similarly active in blind people too. “It’s really surprising,” says Kanjlia. “It turns out brain activity is remarkably similar, at least in terms of classic number processing.” This may mean we have a deep understanding of how to handle numbers that is entirely independent of visual experience, suggesting we are all born with a natural understanding of numbers – an idea many researchers find difficult to accept. © Copyright Reed Business Information Ltd.

Keyword: Vision; Attention
Link ID: 22664 - Posted: 09.17.2016

Tina Hesman Saey Color vision may actually work like a colorized version of a black-and-white movie, a new study suggests. Cone cells, which sense red, green or blue light, detect white more often than colors, researchers report September 14 in Science Advances. The textbook-rewriting discovery could change scientists’ thinking about how color vision works. For decades, researchers have known that three types of cone cells in the retina are responsible for color vision. Those cone cells were thought to send “red,” “green” and “blue” signals to the brain. The brain supposedly combines the colors, much the way a color printer does, to create a rainbow-hued picture of the world (including black and white). But the new findings indicate that “the retina is doing more of the work, and it’s doing it in a more simpleminded way,” says Jay Neitz, a color vision scientist at the University of Washington in Seattle who was not involved in the study. Red and green cone cells each come in two types: One type signals “white”; another signals color, vision researcher Ramkumar Sabesan and colleagues at the University of California, Berkeley, discovered. The large number of cells that detect white (and black — the absence of white) create a high-resolution black-and-white picture of a person’s surroundings, picking out edges and fine details. Red- and green-signaling cells fill in low-resolution color information. The process works much like filling in a coloring book or adding color to a black-and-white film, says Sabesan, who is now at the University of Washington. |© Society for Science & the Public 2000 - 2016

Keyword: Vision
Link ID: 22660 - Posted: 09.15.2016
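
As an aside for readers who want to play with the idea: the "colorized black-and-white movie" description in the item above is closely analogous to chroma subsampling in digital imaging. The sketch below is only an analogy, not the study's method; "photo.jpg" is a placeholder filename. It keeps luminance at full resolution, throws away most of the color resolution, and the recombined image still looks surprisingly natural.

```python
# A rough illustration (not the study's method): combine full-resolution
# luminance with heavily downsampled color, analogous to colorizing a
# black-and-white movie. Requires Pillow; "photo.jpg" is a placeholder.
from PIL import Image

FACTOR = 8  # how coarsely to sample the color information (assumption)

img = Image.open("photo.jpg").convert("YCbCr")
y, cb, cr = img.split()

# Keep luminance (the "black-and-white movie") at full resolution, but store
# the color channels at 1/FACTOR resolution and stretch them back up.
small = (img.width // FACTOR, img.height // FACTOR)
cb_low = cb.resize(small).resize(img.size)
cr_low = cr.resize(small).resize(img.size)

recombined = Image.merge("YCbCr", (y, cb_low, cr_low)).convert("RGB")
recombined.save("low_res_color.jpg")
```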

By Rachel Becker Optical illusions have a way of breaking the internet, and the latest visual trick looks like it’s well on its way. On Sunday afternoon, game developer Will Kerslake tweeted a picture of intersecting gray lines on a white background. Twelve black dots blink in and out of existence where the gray lines meet. In the six hours since he posted the photo to Twitter, it’s been shared more than 6,000 times, with commenters demanding to know why they can’t see all 12 dots at the same time. The optical illusion was first posted to Facebook about a day ago by Japanese psychology professor Akiyoshi Kitaoka, and it has been shared more than 4,600 times so far. But the origin of this bit of visual trickery is a scientific paper published in the journal Perception in 2000. To be clear, there really are 12 black dots in the image. But (most) people can’t see all 12 dots at the same time, which is driving people nuts. "They think, 'It’s an existential crisis,'" says Derek Arnold, a vision scientist at the University of Queensland in Australia. "'How can I ever know what the truth is?'" But, he adds, scientists who study the visual system know that perception doesn’t always equal reality. In this optical illusion, the black dot in the center of your vision should always appear. But the black dots around it seem to appear and disappear. That’s because humans have pretty bad peripheral vision. If you focus on a word in the center of this line you’ll probably see it clearly. But if you try to read the words at either end without moving your eyes, they most likely look blurry. As a result, the brain has to make its best guess about what’s most likely to be going on in the fuzzy periphery — and fill in the mental image accordingly. © 2016 Vox Media, Inc.

Keyword: Vision
Link ID: 22652 - Posted: 09.15.2016

Chris Chambers One of the most compelling impressions in everyday life is that wherever we look, we “see” everything that is happening in front of us – much like a camera. But this impression is deceiving. In reality our senses are bombarded by continual waves of stimuli, triggering an avalanche of sensations that far exceed the brain’s capacity. To make sense of the world, the brain needs to determine which sensations are the most important for our current goals, focusing resources on the ones that matter and throwing away the rest. These computations are astonishingly complex, and what makes attention even more remarkable is just how effortless it is. The mammalian attention system is perhaps the most efficient and precisely tuned junk filter we know of, refined through millions of years of annoying siblings (and some evolution). Attention is amazing but no system is ever perfect. Our brain’s computational reserves are large but not infinite, and under the right conditions we can “break it” and peek behind the curtain. This isn’t just a fun trick – understanding these limits can yield important insights into psychology and neurobiology, helping us to diagnose and treat impairments that follow brain injury and disease. Thanks to over a hundred years of psychology research, it’s relatively easy to reveal attention in action. One way is through the phenomenon of change blindness. Try it yourself by following the instructions in the short video below (no sound). When we think of the term “blindness” we tend to assume a loss of vision caused by damage to the eye or optic nerves. But as you saw in the video, change blindness is completely normal and is caused by maxing out your attentional capacity. © 2016 Guardian News and Media Limited

Keyword: Attention; Vision
Link ID: 22633 - Posted: 09.06.2016

A new study by investigators at Brigham and Women's Hospital, in collaboration with researchers at the Universities of York and Leeds in the UK and MD Anderson Cancer Center in Texas, puts to the test anecdotes about experienced radiologists' ability to sense when a mammogram is abnormal. In a paper published August 29 in the Proceedings of the National Academy of Sciences, visual attention researchers showed radiologists mammograms for half a second and found that they could identify abnormal mammograms at better than chance levels. They further tested this ability through a series of experiments to explore what signal may alert radiologists to the presence of a possible abnormality, in the hopes of using these insights to improve breast cancer screening and early detection. "Radiologists can have 'hunches' after a first look at a mammogram. We found that these hunches are based on something real in the images. It's really striking that in the blink of an eye, an expert can pick up on something about that mammogram that indicates abnormality," said Jeremy Wolfe, PhD, senior author of the study and director of the Visual Attention Laboratory at BWH. "Not only that, but they can detect something abnormal in the other breast, the breast that does not contain a lesion." In the clinic, radiologists carefully evaluate mammograms and may use computer automated systems to help screen the images. Although they would never assess an image in half a second in the clinic, the ability of experts to extract the "gist" of an image quickly suggests that there may be detectable signs of breast cancer that radiologists are rapidly picking up. Copyright 2016 ScienceDaily

Keyword: Attention; Vision
Link ID: 22627 - Posted: 09.05.2016
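
For readers curious how "better than chance" is typically quantified in studies like the one above: visual-search researchers usually report a signal-detection measure such as d', where zero means chance performance. A minimal sketch with made-up hit and false-alarm rates (not the study's numbers):

```python
# Signal-detection arithmetic for "better than chance": d' of zero is chance.
# The hit/false-alarm rates below are hypothetical, not the study's results.
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical: observers call 65% of abnormal mammograms "abnormal" after a
# half-second glance, but also flag 40% of normal ones.
print(f"d' = {d_prime(0.65, 0.40):.2f}")  # ~0.64; modestly above chance (0)
```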

Laura Sanders Despite its name, the newly identified GluMI cell (pronounced “gloomy”) is no downer. It’s a nerve cell, spied in a mouse retina, that looks like one type of cell but behaves like another. Like neighboring retina nerve cells that subdue, or deaden, activity of other nerve cells, GluMI cells have a single arm extending from their body. But unlike those cells, GluMI cells actually seem to ramp up activity of nearby cells in a way that could aid vision. GluMIs don’t seem to detect light firsthand, but they respond to it, Luca Della Santina of the University of Washington in Seattle and colleagues found. GluMIs are among a growing list of unexpected and mysterious cells found in the retinas of vertebrates, the researchers write August 8 in Current Biology. Citation: L. Della Santina et al. Glutamatergic monopolar interneurons provide a novel pathway of excitation in the mouse retina. Current Biology. Vol. 26, August 8, 2016. doi:10.1016/j.cub.2016.06.016. |© Society for Science & the Public 2000 - 2016

Keyword: Vision
Link ID: 22610 - Posted: 08.30.2016

By Anna Azvolinsky Sets of neurons in the brain that behave together—firing synchronously in response to sensory or motor stimuli—are thought to be functionally and physiologically connected. These naturally occurring ensembles of neurons are one of the ways memories may be programmed in the brain. Now, in a paper published today (August 11) in Science, researchers at Columbia University and their colleagues show that it is possible to stimulate visual cortex neurons in living, awake mice and induce a new ensemble of neurons that behave as a group and maintain their concerted firing for several days. “This work takes the concept of correlated [neuronal] firing patterns in a new and important causal direction,” David Kleinfeld, a neurophysicist at the University of California, San Diego, who was not involved in the work, told The Scientist. “In a sense, [the researchers] created a memory for a visual feature that does not exist in the physical world as a proof of principle of how real visual memories are formed.” “Researchers have previously related optogenetic stimulation to behavior [in animals], but this study breaks new ground by investigating the dynamics of neural activity in relation to the ensemble to which these neurons belong,” said Sebastian Seung, a computational neuroscientist at the Princeton Neuroscience Institute in New Jersey who also was not involved in the study. Columbia’s Rafael Yuste and colleagues stimulated randomly selected sets of individual neurons in the visual cortices of living mice using two-photon stimulation while the animals ran on a treadmill. © 1986-2016 The Scientist

Keyword: Learning & Memory; Vision
Link ID: 22558 - Posted: 08.13.2016

Amy McDermott You’ve got to see it to be it. A heightened sense of red color vision arose in ancient reptiles before bright red skin, scales and feathers, a new study suggests. The finding bolsters evidence that dinosaurs probably saw red and perhaps displayed red color. The new finding, published in the Aug. 17 Proceedings of the Royal Society B, rests on the discovery that birds and turtles share a gene used both for red vision and red coloration. More bird and turtle species use the gene, called CYP2J19, for vision than for coloration, however, suggesting that its first job was in sight. “We have this single gene that has two very different functions,” says evolutionary biologist Nicholas Mundy of the University of Cambridge. Mundy’s team wondered which function came first: the red vision or the ornamentation. In evolution, what an animal can see is often linked with what others can display, says paleontologist Martin Sander of the University of Bonn in Germany, who did not work on the new study. “We’re always getting at color from these two sides,” he says, because the point of seeing a strong color is often reading visual signals. Scientists already knew that birds use CYP2J19 for vision and color. In bird eyes, the gene contains instructions for making bright red oil droplets that filter red light. Other forms of red color vision evolved earlier in other animals, but this form allows birds to see more shades of red than humans can. Elsewhere in the body, the same gene can code for pigments that stain feathers red. Turtles are the only other land vertebrates with bright red oil droplets in their eyes. But scientists weren’t sure if the same gene was responsible, Mundy says. |© Society for Science & the Public 2000 - 2016

Keyword: Vision; Evolution
Link ID: 22535 - Posted: 08.10.2016

Davide Castelvecchi People can detect flashes of light as feeble as a single photon, an experiment has demonstrated — a finding that seems to conclude a 70-year quest to test the limits of human vision. The study, published in Nature Communications on 19 July, “finally answers a long-standing question about whether humans can see single photons — they can!” says Paul Kwiat, a quantum optics researcher at the University of Illinois at Urbana–Champaign. The techniques used in the study also open up ways of testing how quantum properties — such as the ability of photons to be in two places at the same time — affect biology, he adds. “The most amazing thing is that it’s not like seeing light. It’s almost a feeling, at the threshold of imagination,” says Alipasha Vaziri, a physicist at the Rockefeller University in New York City, who led the work and tried out the experience himself. Experiments on cells from frogs have shown that sensitive light-detecting cells in vertebrate eyes, called rod cells, do fire in response to single photons. But, in part because the retina processes its information to reduce ‘noise’ from false alarms, researchers hadn’t been able to confirm whether the firing of one rod cell would trigger a signal that would be transmitted all the way to the brain. Nor was it clear whether people would be able to consciously sense such a signal if it did reach the brain. Experiments to test the limits of human vision have also had to wait for the arrival of quantum-optics technologies that can reliably produce one photon of light at a time. © 2016 Macmillan Publishers Limited

Keyword: Vision
Link ID: 22461 - Posted: 07.20.2016
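
Part of why the experiment above took 70 years is statistical: photon delivery and absorption are Poisson processes, so even a nominally single-photon flash often deposits nothing in a rod. A back-of-the-envelope sketch, using an entirely hypothetical absorption efficiency rather than a figure from the paper:

```python
# Poisson arithmetic for very dim flashes (illustrative numbers only).
from math import exp, factorial

def p_at_least(mean, k=1):
    """P(Poisson(mean) >= k)."""
    return 1.0 - sum(exp(-mean) * mean**i / factorial(i) for i in range(k))

# Hypothetical: a flash averaging 3 photons at the cornea, with ~10% of them
# reaching and being absorbed by rod photopigment (assumed efficiency).
mean_absorbed = 3 * 0.10
print(f"P(at least one absorption) = {p_at_least(mean_absorbed):.2f}")  # ~0.26
# So on most trials nothing is absorbed at all, which is why the behavioral
# statistics, not just the optics, make this a hard measurement.
```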

Jon Hamilton Letting mice watch Orson Welles movies may help scientists explain human consciousness. At least that's one premise of the Allen Brain Observatory, which launched Wednesday and lets anyone with an Internet connection study a mouse brain as it responds to visual information. "Think of it as a telescope, but a telescope that is looking at the brain," says Christof Koch, chief scientific officer of the Allen Institute for Brain Science, which created the observatory. The hope is that thousands of scientists and would-be scientists will look through that telescope and help solve one of the great mysteries of human consciousness, Koch says. "You look out at the world and there's a picture in your head," he says. "You see faces, you see your wife, you see something on TV." But how does the brain create those images from the chaotic stream of visual information it receives? "That's the mystery," Koch says. There's no easy way to study a person's brain as it makes sense of visual information. So the observatory has been gathering huge amounts of data on mice, which have a visual system that is very similar to the one found in people. The data come from mice that run on a wheel as still images and movies appear on a screen in front of them. For the mice, it's a lot like watching TV on a treadmill at the gym. But these mice have been genetically altered in a way that allows a computer to monitor the activity of about 18,000 neurons as they respond to different images. "We can look at those neurons and from that decode literally what goes through the mind of the mouse," Koch says. Those neurons were pretty active when the mice watched the first few minutes of Orson Welles' film noir classic Touch of Evil. The film is good for mouse experiments because "It's black and white and it has nice contrasts and it has a long shot without having many interruptions," Koch says. © 2016 npr

Keyword: Vision; Consciousness
Link ID: 22438 - Posted: 07.14.2016
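
To make the "decode what goes through the mind of the mouse" idea above concrete, here is a toy sketch of population decoding on simulated data: a linear classifier guesses which stimulus was shown from the responses of a few hundred model neurons. Everything below is synthetic and assumes scikit-learn; it is not the Allen Institute's pipeline or data format.

```python
# Toy population decoding: recover stimulus identity from simulated responses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_stimuli = 600, 200, 6

# Each stimulus evokes a slightly different mean response across the recorded
# population (tuning), plus trial-to-trial noise.
stimulus = rng.integers(n_stimuli, size=n_trials)
tuning = rng.normal(size=(n_stimuli, n_neurons))
responses = tuning[stimulus] + rng.normal(scale=2.0, size=(n_trials, n_neurons))

# A linear decoder reads out the stimulus from the population response.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, responses, stimulus, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1/n_stimuli:.2f})")
```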

By Karen Weintraub Researchers at Stanford University have coaxed brain cells involved in vision to regrow and make functional connections—helping to upend the conventional dogma that mammalian brain cells, once damaged, can never be restored. The work was carried out in visually impaired mice but suggests that human maladies including glaucoma, Alzheimer’s disease and spinal cord injuries might be more repairable than has long been believed. Frogs, fish and chickens are known to regrow brain cells, and previous research has offered clues that it might be possible in mammals. The Stanford scientists say their new study confirms this and shows that, although fewer than 5 percent of the damaged retinal ganglion cells grew back, it was still enough to make a difference in the mice’s vision. “The brain is very good at coping with deprived inputs,” says Andrew Huberman, the Stanford neurobiologist who led the work. “The study also supports the idea that we may not need to regenerate every neuron in a system to get meaningful recovery.” Other researchers praised the study, published Monday in Nature Neuroscience. “I think it’s a significant step forward toward getting to the point where we really can regenerate optic nerves,” says Don Zack, a professor of ophthalmology at Johns Hopkins University who was not involved in the research. He calls it “one more indication that it may be possible to bring that ability back in humans.” © 2016 Scientific American

Keyword: Vision; Regeneration
Link ID: 22428 - Posted: 07.12.2016

By Shayla Love In 2005, astronaut John Phillips took a break from his work on the International Space Station and looked out the window at Earth. He was about halfway through a mission that had begun in April and would end in October. When he gazed down at the planet, the Earth was blurry. He couldn’t focus on it clearly. That was strange — his vision had always been 20/20. He wondered: Was his eyesight getting worse? “I’m not sure if I reported that to the ground,” he said. “I think I didn’t. I thought it would be something that would just go away, and fix itself when I got to Earth.” It didn’t go away. During Phillips’ post-flight physical, NASA found that his vision had gone from 20/20 to 20/100 in six months. Rigorous testing followed. Phillips got MRIs, retinal scans, neurological tests and a spinal tap. The tests showed that not only had his vision changed, but his eyes had changed as well. The backs of his eyes had gotten flatter, pushing his retinas forward. He had choroidal folds, which are like stretch marks. His optic nerves were inflamed. Phillips’ case became the first widely recognized instance of a mysterious syndrome that affects 80 percent of astronauts on long-duration missions in space. The syndrome could interfere with plans for future crewed space missions, including any trips to Mars.

Keyword: Vision
Link ID: 22422 - Posted: 07.11.2016

By Patrick Monahan Animals like cuttlefish and octopuses can rapidly change color to blend into the background and dazzle prospective mates. But there’s only one problem: As far as we know, they can’t see in color. Unlike our eyes, the eyes of cephalopods—cuttlefish, octopuses, and their relatives—contain just one kind of color-sensitive protein, apparently restricting them to a black-and-white view of the world. But a new study shows how they might make do. By rapidly focusing their eyes at different depths, cephalopods could be taking advantage of a lensing property called “chromatic blur.” Each color of light has a different wavelength—and because lenses bend some wavelengths more than others, one color of light shining through a lens can be in focus while another is still blurry. So with the right kind of eye, a quick sweep of focus would let the viewer figure out the actual color of an object based on when it blurs. The off-center pupils of many cephalopods—including the w-shaped pupils of cuttlefish—make this blurring effect more extreme, according to a study published this week in the Proceedings of the National Academy of Sciences. In that study, scientists built a computer model of an octopus eye and showed that—for an object at least one body length away—it could determine the object’s color just by changing focus. Because this is all still theoretical, the next step is testing whether live cephalopods actually see color this way—and whether any other “colorblind” animals might, too. © 2016 American Association for the Advancement of Science.

Keyword: Vision; Evolution
Link ID: 22402 - Posted: 07.07.2016
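
The "chromatic blur" argument above comes down to simple lens optics: refractive index, and therefore focal length, depends on wavelength. A rough numerical sketch, using generic glass-like Cauchy coefficients and a made-up lens radius rather than measured octopus optics:

```python
# Rough numbers illustrating chromatic blur: a lens whose refractive index
# depends on wavelength focuses different colors at different depths.
# Coefficients and radius are generic assumptions, not octopus measurements.

def refractive_index(wavelength_nm, A=1.50, B=5000.0):
    """Cauchy approximation n(lambda) = A + B / lambda^2 (lambda in nm)."""
    return A + B / wavelength_nm**2

def focal_length_mm(wavelength_nm, radius_mm=10.0):
    """Thin symmetric biconvex lens: 1/f = 2 (n - 1) / R."""
    n = refractive_index(wavelength_nm)
    return radius_mm / (2.0 * (n - 1.0))

for color, wl in [("blue", 450), ("green", 550), ("red", 650)]:
    print(f"{color:>5} ({wl} nm): focal length = {focal_length_mm(wl):.2f} mm")
# Blue light focuses closer to the lens than red light, so sweeping focus
# through these planes could, in principle, reveal an object's color.
```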

By Jessica Hamzelou Imagine if each of the words in this article had its own taste, or if the music you’re listening to played out as a visual scene in your mind. For synaesthetes – a small proportion of people whose senses intertwine – this is the stuff of the everyday. “Most people describe it as a gift,” says Jamie Ward, a neuroscientist at the University of Sussex in the UK. Now, he and his colleagues have found a new form of synaesthesia – one that moves beyond written language to sign language. It is the first time the phenomenon has been observed. “People with synaesthesia experience the ordinary world in extraordinary ways,” says Ward. In theory, any two senses can overlap. Some synaesthetes connect textures with words, while others can taste them. More commonly, written letters seem to have corresponding colours. An individual synaesthete may always associate the letter A with the colour pink, for instance. This type of synaesthesia has been found across many written languages, prompting Ward’s team to wonder if it can also apply to sign language. They recruited 50 volunteers with the type of synaesthesia that means they experience colours with letters, around half of whom were fluent in sign language too. All the participants watched a video of sign language and were asked if it triggered any colours. © Copyright Reed Business Information Ltd.

Keyword: Vision
Link ID: 22390 - Posted: 07.02.2016

When you walk into a room, your eyes process your surroundings immediately: refrigerator, sink, table, chairs. "This is the kitchen," you realize. Your brain has taken data and come to a clear conclusion about the world around you, in an instant. But how does this actually happen? Elissa Aminoff, a research scientist in the Department of Psychology and the Center for the Neural Basis of Cognition at Carnegie Mellon University, shares her insights on what computer modeling can tell us about human vision and memory. What do you do? What interests me is how the brain and the mind understand our visual environment. The visual world is really rich with information, and it’s extremely complex. So we have to find ways to break visual data down. What specific parts of our [visual] world is the brain using to give us what we see? In order to answer that question, we’re collaborating with computer scientists and using computer vision algorithms. The goal is to compare these digital methods with the brain. Perhaps they can help us find out what types of data the brain is working with. Does that mean that our brains function like a computer? That’s something you hear a lot about these days. No, I wouldn’t say that. It’s that computers are giving us the closest thing that we have right now to an analogous mechanism. The brain is really, really complex. It deals with massive amounts of data. We need help in organizing these data and computers can do that. Right now, there are algorithms that can identify an object as a phone or as a mug, just like the brain. But are they doing the same thing? Probably not. © 2016 Scientific American,

Keyword: Vision
Link ID: 22379 - Posted: 06.30.2016

By Aviva Rutkin Machine minds are often described as black boxes, their decision-making processes all but inscrutable. But in the case of machine intelligence, researchers are cracking that black box open and peering inside. What they find is that humans and machines don’t pay attention to the same things when they look at pictures – not at all. Researchers at Facebook and Virginia Tech in Blacksburg got humans and machines to look at pictures and answer simple questions – a task that neural-network-based artificial intelligence can handle. But the researchers weren’t interested in the answers. They wanted to map human and AI attention, in order to shed a little light on the differences between us and them. “These attention maps are something we can measure in both humans and machines, which is pretty rare,” says Lawrence Zitnick at Facebook AI Research. Comparing the two could provide insight “into whether computers are looking in the right place”. First, Zitnick and his colleagues asked human workers on Amazon Mechanical Turk to answer simple questions about a set of pictures, such as “What is the man doing?” or “What number of cats are lying on the bed?” Each picture was blurred, and the worker would have to click around to sharpen it. A map of those clicks served as a guide to what part of the picture they were paying attention to. © Copyright Reed Business Information Ltd.

Keyword: Vision
Link ID: 22378 - Posted: 06.30.2016
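
For a sense of how "a map of those clicks" in the item above becomes an attention map, here is a minimal sketch: accumulate click locations on a pixel grid and smooth them into a soft density map. The image size, click coordinates and smoothing width are all made up; the study aggregated over many workers, images and questions.

```python
# Turn "click to sharpen" locations into a smoothed attention map.
import numpy as np
from scipy.ndimage import gaussian_filter

HEIGHT, WIDTH = 240, 320  # image size in pixels (assumed)

# Hypothetical click locations from one worker, as (row, col) pairs.
clicks = [(120, 160), (118, 170), (125, 158), (60, 300), (122, 163)]

counts = np.zeros((HEIGHT, WIDTH))
for row, col in clicks:
    counts[row, col] += 1

# Smooth the click counts so isolated clicks become a soft "spotlight",
# then normalize so maps are comparable across images and observers.
attention = gaussian_filter(counts, sigma=15)
attention /= attention.max()

print(attention.shape, attention.max())  # (240, 320) 1.0
```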

Worldwide voting for the BEST ILLUSION OF THE YEAR will take place online from 4pm EST on June 29th to 4pm EST on June 30th. The winning illusions will receive a $3,000 award for 1st place, a $2,000 award for 2nd place, and a $1,000 award for 3rd place. Anybody with an internet connection (that means YOU!) can vote to pick the Top 3 Winners from the current Top 10 List! The Best Illusion of the Year Contest is a celebration of the ingenuity and creativity of the world’s premier illusion research community. Contestants from all around the world submitted novel illusions (unpublished, or published no earlier than 2015), and an international panel of judges rated them and narrowed them down to the TOP TEN.

Keyword: Vision
Link ID: 22375 - Posted: 06.29.2016

By Vinicius Donisete Goulart The “new world” monkeys of South and Central America range from large muriquis to tiny pygmy marmosets. Some are cute and furry, others bald and bright red, and one even has an extraordinary moustache. Yet, with the exception of owl and howler monkeys, the 130 or so remaining species have one thing in common: a good chunk of the females, and all of the males, are colorblind. This is quite different from “old world” primates, including us Homo sapiens, who are routinely able to see the world in what we humans imagine as full color. In evolutionary terms, colorblindness sounds like a disadvantage, one which should really have been eliminated by natural selection long ago. So how can we explain a continent of colorblind monkeys? I have long wondered what makes primates in the region colorblind and visually diverse, and how evolutionary forces are acting to maintain this variation. We don’t yet know exactly what kept these seemingly disadvantaged monkeys alive and flourishing—but what is becoming clear is that colorblindness is an adaptation, not a defect. The first thing to understand is that what we humans consider “color” is only a small portion of the spectrum. Our “trichromatic” vision is superior to that of most mammals, which typically share the “dichromatic” vision of new world monkeys and colorblind humans, yet fish, amphibians, reptiles, birds, and even insects are able to see a wider range, even into the UV spectrum. There is a whole world of color out there that humans and our primate cousins are unaware of.

Keyword: Vision; Evolution
Link ID: 22345 - Posted: 06.22.2016