Chapter 7. Vision: From Eye to Brain
By Yasemin Saplakoglu

Two years ago, Sarah Shomstein realized she didn’t have a mind’s eye. The vision scientist was sitting in a seminar room, listening to a scientific talk, when the presenter asked the audience to imagine an apple. Shomstein closed her eyes and did so. Then, the presenter asked the crowd to open their eyes and rate how vividly they saw the apple in their mind.

Saw the apple? Shomstein was confused. She didn’t actually see an apple. She could think about an apple: its taste, its shape, its color, the way light might hit it. But she didn’t see it. Behind her eyes, “it was completely black,” Shomstein recalled. And yet, “I imagined an apple.”

Most of her colleagues reacted differently. They reported actually seeing an apple, some vividly and some faintly, floating like a hologram in front of them. In that moment, Shomstein, who’s spent years researching perception at George Washington University, realized she experienced the world differently than others. She is part of a subset of people — thought to be about 1% to 4% of the general population — who lack mental imagery, a phenomenon known as aphantasia.

Though it was described more than 140 years ago, the term “aphantasia” was coined only in 2015. It immediately drew the attention of anyone interested in how the imagination works. That included neuroscientists. So far, they’re finding that aphantasia is not a disorder — it’s a different way of experiencing the world.

Early studies have suggested that differences in the connections between brain regions involved in vision, memory and decision-making could explain variations in people’s ability to form mental images. Because many people with aphantasia dream in images and can recognize objects and faces, it seems likely that their minds store visual information — they just can’t access it voluntarily or can’t use it to generate the experience of imagery.

That’s just one explanation for aphantasia. In reality, people’s subjective experiences vary dramatically, and it’s possible that different subsets of aphantasics have their own neural explanations. Aphantasia and hyperphantasia, the opposite phenomenon in which people report mental imagery as vivid as reality, are in fact two ends of a spectrum, sandwiching an infinite range of internal experiences between them.

© 2024 the Simons Foundation.
Keyword: Attention; Vision
Link ID: 29417 - Posted: 08.02.2024
By Abdullahi Tsanni

Time takes its toll on the eyes. Now a funky, Hitchcockian video of 64 eyeballs, all rolling and blinking in different directions, is providing a novel visual of one way in which eyes age. The video display, captured using eye trackers, helped researchers compare the size of younger and older study participants’ pupils under differing light conditions, confirming that aging affects our eyes.

Lab studies have previously shown that the eye’s pupil shrinks as people get older, making it less responsive to light. A new study that rigged volunteers up with eye trackers and GoPro videos and sent them traipsing around a university campus has confirmed that what happens in the lab happens in real life, too. While pupils remain sensitive to changing light conditions, pupil size can decrease by up to about 0.4 millimeters per decade, researchers report June 19 in Royal Society Open Science.

“We see a big age effect,” says Manuel Spitschan, a neuroscientist at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. The change helps explain why it can become increasingly hard for people to see in dim light as they age. Light travels through the dark pupil in the center of the eye to the retina, a layer of cells in the back of the eye that converts the light into images. The pupil’s size can vary from 2 to 8 millimeters in diameter depending on light conditions, getting smaller in bright light and larger in dim light. “With a small pupil, less light enters the eye,” Spitschan says.

© Society for Science & the Public 2000–2024.
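The article’s numbers invite a quick back-of-the-envelope check. The sketch below is a minimal illustration, assuming a linear decline of roughly 0.4 mm per decade from a hypothetical 6 mm dim-light pupil at age 20; the function names and baseline values are placeholders, not the study’s fitted model. Because light intake scales with pupil area, even a modest loss of diameter compounds:

```python
# Illustrative only: assumed linear decline of dim-light pupil diameter
# with age, and the resulting drop in light reaching the retina.

def pupil_diameter_mm(age_years, baseline_mm=6.0, baseline_age=20,
                      decline_per_decade=0.4):
    """Assumed linear model: ~0.4 mm lost per decade of adulthood."""
    decades = (age_years - baseline_age) / 10
    diameter = baseline_mm - decline_per_decade * decades
    # Clamp to the 2-8 mm physiological range mentioned in the article.
    return max(2.0, min(8.0, diameter))

def relative_light_intake(age_years, reference_age=20):
    """Light entering the eye scales with pupil area (diameter squared)."""
    return (pupil_diameter_mm(age_years) / pupil_diameter_mm(reference_age)) ** 2

for age in (20, 40, 60, 80):
    print(f"age {age}: {pupil_diameter_mm(age):.1f} mm pupil, "
          f"{relative_light_intake(age):.0%} of a 20-year-old's light intake")
```

Under these assumed numbers, an 80-year-old’s pupil admits only about a third as much light as a 20-year-old’s in the same dim scene, consistent with the article’s point about night vision.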
Keyword: Vision; Development of the Brain
Link ID: 29375 - Posted: 07.03.2024
By Elie Dolgin

The COVID-19 pandemic didn’t just reshape how children learn and see the world. It transformed the shape of their eyeballs.

As real-life classrooms and playgrounds gave way to virtual meetings and digital devices, the time that children spent focusing on screens and other nearby objects surged — and the time they spent outdoors dropped precipitously. This shift led to a notable change in children’s anatomy: their eyeballs lengthened to better accommodate short-vision tasks. Study after study, in regions ranging from Europe to Asia, documented this change. One analysis from Hong Kong even reported a near doubling in the incidence of pathologically stretched eyeballs among six-year-olds compared with pre-pandemic levels.

This elongation improves the clarity of close-up images on the retina, the light-sensitive layer at the back of the eye. But it also makes far-away objects appear blurry, leading to a condition known as myopia, or short-sightedness. And although corrective eyewear can usually address the issue — allowing children to, for example, see a blackboard or read from a distance — severe myopia can lead to more-serious complications, such as retinal detachment, macular degeneration, glaucoma and even permanent blindness.

Rates of myopia were booming well before the COVID-19 pandemic. Widely cited projections in the mid-2010s suggested that myopia would affect half of the world’s population by mid-century, which would effectively double the incidence rate in less than four decades. Now, those alarming predictions seem much too modest, says Neelam Pawar, a paediatric ophthalmologist at the Aravind Eye Hospital in Tirunelveli, India. “I don’t think it will double,” she says. “It will triple.”

© 2024 Springer Nature Limited
Keyword: Vision; Development of the Brain
Link ID: 29329 - Posted: 05.29.2024
By Angie Voyles Askham

Each time we blink, our visual world is obscured for 100 to 300 milliseconds. It’s a necessary action that also, researchers long presumed, presents the brain with a problem: how to cobble together a cohesive picture of the before and after. “No one really thought about blinks as an act of looking or vision to begin with,” says Martin Rolfs, professor of experimental psychology at Humboldt University of Berlin.

But blinking may be a more important component of vision than previously thought, according to a study published last month in the Proceedings of the National Academy of Sciences. Participants performed better on a visual task when they blinked while looking at the visual stimulus than when they blinked before it appeared. The blink, the team found, caused a change in visual input that improved participants’ perception. The finding suggests that blinking is a feature of seeing rather than a bug, says Rolfs, who was not involved with the study but wrote a commentary about it. And it could explain why adults blink more frequently than is seemingly necessary, the researchers say.

“The brain capitalizes on things that are changing in the visual world — whether it’s blinks or eye movements, or any type of ocular-motor dynamics,” says Patrick Mayo, a neuroscientist in the ophthalmology department at the University of Pittsburgh, who was also not involved in the work. “That is … a point that’s still not well appreciated in visual neuroscience, generally.”

The researchers started their investigation by simulating a blink. In the computational model they devised, a person staring at black and white stripes would suddenly see a dark, uniform gray before once again viewing the high-contrast pattern. The interruption would cause a brief change in the stimulus input to neurons in the retina, which in turn could increase the cells’ sensitivity to stimuli right after a blink, they hypothesized.

© 2024 Simons Foundation
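The simulation described in the final paragraph can be sketched in a few lines. The following is a hedged toy version, not the authors’ code: all parameters (blink duration, adaptation time constant) are assumptions chosen only to show the qualitative effect of a gray interruption on an adapting visual unit.

```python
import numpy as np

# Toy blink simulation: a high-contrast stimulus is interrupted by
# ~150 ms of uniform gray, and a simple adapting unit responds most
# strongly to the change. All parameter values are assumptions.

dt = 0.001                          # 1 ms time steps
t = np.arange(0.0, 1.0, dt)         # one second of viewing
contrast = np.ones_like(t)          # full-contrast striped stimulus
blink = (t >= 0.40) & (t < 0.55)    # simulated blink: uniform gray
contrast[blink] = 0.0

# Retinal neurons adapt to sustained input; model the adapted level as
# a leaky integrator and the neural drive as deviation from that level.
tau = 0.05                          # 50 ms adaptation constant (assumed)
adapted = np.zeros_like(t)
for i in range(1, len(t)):
    adapted[i] = adapted[i - 1] + (dt / tau) * (contrast[i] - adapted[i - 1])
drive = np.abs(contrast - adapted)

end = int(0.55 / dt)
print("steady-state drive just before blink:", round(float(drive[int(0.39 / dt)]), 3))
print("peak drive just after blink ends:    ", round(float(drive[end:end + 100].max()), 3))
```

The transient right after the gray period ends dwarfs the steady-state response, which is the hypothesized post-blink boost in sensitivity.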
Keyword: Vision; Attention
Link ID: 29303 - Posted: 05.14.2024
By Emily Cooke & LiveScience

Optical illusions play on the brain’s biases, tricking it into perceiving images differently than how they really are. And now, in mice, scientists have harnessed an optical illusion to reveal hidden insights into how the brain processes visual information.

The research focused on the neon-color-spreading illusion, which incorporates patterns of thin lines on a solid background. Parts of these lines are a different color — such as lime green, in a typical version of the illusion — and the brain perceives these lines as part of a solid shape with a distinct border — a circle, in this case. The closed shape also appears brighter than the lines surrounding it. It’s well established that this illusion causes the human brain to falsely fill in and perceive a nonexistent outline and brightness — but there’s been ongoing debate about what’s going on in the brain when it happens.

Now, for the first time, scientists have demonstrated that the illusion works on mice, and this allowed them to peer into the rodents’ brains to see what’s going on. Specifically, they zoomed in on a part of the brain called the visual cortex. When light hits our eyes, electrical signals are sent via nerves to the visual cortex. This region processes that visual data and sends it on to other areas of the brain, allowing us to perceive the world around us.

The visual cortex is divided into a hierarchy of areas numbered V1, V2, V3 and so on. Each area processes different features of the images that hit the eyes: V1 neurons handle the first and most basic stage of the data, while the later areas belong to the “higher visual areas,” whose neurons are responsible for more complex visual processing than V1 neurons.

© 2024 SCIENTIFIC AMERICAN
Keyword: Vision; Consciousness
Link ID: 29298 - Posted: 05.09.2024
By Lilly Tozer

How the brain processes visual information — and its perception of time — is heavily influenced by what we’re looking at, a study has found. In the experiment, participants perceived the amount of time they had spent looking at an image differently depending on how large, cluttered or memorable the contents of the picture were. They were also more likely to remember images that they thought they had viewed for longer.

The findings, published on 22 April in Nature Human Behaviour, could offer fresh insights into how people experience and keep track of time. “For over 50 years, we’ve known that objectively longer-presented things on a screen are better remembered,” says study co-author Martin Wiener, a cognitive neuroscientist at George Mason University in Fairfax, Virginia. “This is showing for the first time, a subjectively experienced longer interval is also better remembered.”

Research has shown that humans’ perception of time is intrinsically linked to our senses. “Because we do not have a sensory organ dedicated to encoding time, all sensory organs are in fact conveying temporal information,” says Virginie van Wassenhove, a cognitive neuroscientist at the University of Paris–Saclay in Essonne, France.

Previous studies found that basic features of an image, such as its colours and contrast, can alter people’s perceptions of time spent viewing the image. In the latest study, researchers set out to investigate whether higher-level semantic features, such as memorability, can have the same effect.

© 2024 Springer Nature Limited
Keyword: Attention; Vision
Link ID: 29269 - Posted: 04.24.2024
By Meghan Willcoxon

In the summer of 1991, the neuroscientist Vittorio Gallese was studying how movement is represented in the brain when he noticed something odd. He and his research adviser, Giacomo Rizzolatti, at the University of Parma were tracking which neurons became active when monkeys interacted with certain objects. As the scientists had observed before, the same neurons fired when the monkeys either noticed the objects or picked them up. But then the neurons did something the researchers didn’t expect.

Before the formal start of the experiment, Gallese grasped the objects to show them to a monkey. At that moment, the activity spiked in the same neurons that had fired when the monkey grasped the objects. It was the first time anyone had observed neurons encode information for both an action and another individual performing that action.

Those neurons reminded the researchers of a mirror: Actions the monkeys observed were reflected in their brains through these peculiar motor cells. In 1992, Gallese and Rizzolatti first described the cells in the journal Experimental Brain Research and then in 1996 named them “mirror neurons” in Brain.

The researchers knew they had found something interesting, but nothing could have prepared them for how the rest of the world would respond. Within 10 years of the discovery, the idea of a mirror neuron had become the rare neuroscience concept to capture the public imagination. From 2002 to 2009, scientists across disciplines joined science popularizers in sensationalizing these cells, attributing more properties to them to explain such complex human behaviors as empathy, altruism, learning, imitation, autism and speech.

Then, nearly as quickly as mirror neurons caught on, scientific doubts about their explanatory power crept in. Within a few years, these celebrity cells were filed away in the drawer of over-promised, under-delivered discoveries.
Keyword: Attention; Vision
Link ID: 29242 - Posted: 04.04.2024
By Linda Geddes, Science correspondent

If you have wondered why your partner always beats you at tennis or one child always crushes the other at Fortnite, it seems there is more to it than pure physical ability. Some people are effectively able to see more “images per second” than others, research suggests, meaning they’re innately better at spotting or tracking fast-moving objects such as tennis balls.

The rate at which our brains can discriminate between different visual signals is known as temporal resolution, and it influences the speed at which we are able to respond to changes in our environment. Previous studies have suggested that animals with high visual temporal resolution tend to be species with fast-paced lives, such as predators. Human research has also suggested that this trait tends to decrease as we get older, and dips temporarily after intense exercise. However, it was not clear how much it varies between people of similar ages.

One way of measuring this trait is to identify the point at which someone stops perceiving a flickering light to flicker, and sees it as a constant or still light instead. Clinton Haarlem, a PhD candidate at Trinity College Dublin, and his colleagues tested this in 80 men and women between the ages of 18 and 35, and found wide variability in the threshold at which this happened. The research, published in PLOS ONE, found that some people reported a light source as constant when it was in fact flashing about 35 times a second, while others could still detect flashes at rates of greater than 60 times a second.

© 2024 Guardian News & Media Limited
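The flicker-fusion measurement lends itself to a concrete sketch. Below is a minimal adaptive-staircase procedure of the kind commonly used for such thresholds; it is an assumed illustration, not the Trinity College protocol, and `respond_sees_flicker` simulates an observer with a personal threshold near 47 Hz rather than querying real hardware.

```python
import random

def respond_sees_flicker(rate_hz, true_threshold_hz=47.0, noise_hz=2.0):
    """Simulated observer: flicker is visible below a noisy personal threshold."""
    return rate_hz < true_threshold_hz + random.gauss(0, noise_hz)

def flicker_fusion_staircase(start_hz=30.0, step_hz=4.0, reversals_needed=8):
    """Raise the flicker rate while flicker is seen, lower it when it is not;
    the threshold estimate is the mean of the reversal points."""
    rate, direction = start_hz, +1
    reversal_points = []
    while len(reversal_points) < reversals_needed:
        new_direction = +1 if respond_sees_flicker(rate) else -1
        if new_direction != direction:           # observer's report flipped
            reversal_points.append(rate)
            step_hz = max(step_hz / 2, 0.5)      # home in with smaller steps
        direction = new_direction
        rate += direction * step_hz
    return sum(reversal_points) / len(reversal_points)

print(f"estimated flicker-fusion threshold: {flicker_fusion_staircase():.1f} Hz")
```

The wide between-person spread the study reports, from about 35 Hz to above 60 Hz, would show up here simply as different values of the assumed `true_threshold_hz`.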
Keyword: Vision
Link ID: 29233 - Posted: 04.02.2024
By Viviane Callier

Biologists have often wondered what would happen if they could rewind the tape of life’s history and let evolution play out all over again. Would lineages of organisms evolve in radically different ways if given that opportunity? Or would they tend to evolve the same kinds of eyes, wings, and other adaptive traits because their previous evolutionary histories had already sent them down certain developmental pathways?

A new paper published in Science this February describes a rare and important test case for that question, which is fundamental to understanding how evolution and development interact. A team of researchers at the University of California, Santa Barbara happened upon it while studying the evolution of vision in an obscure group of mollusks called chitons.

In that group of animals, the researchers discovered that two types of eyes — eyespots and shell eyes — each evolved twice independently. A given lineage could evolve one type of eye or the other, but never both. Intriguingly, the type of eye that a lineage had was determined by a seemingly unrelated older feature: the number of slits in the chiton’s shell armor. This represents a real-world example of “path-dependent evolution,” in which a lineage’s history irrevocably shapes its future evolutionary trajectory. Critical junctures in a lineage act like one-way doors, opening up some possibilities while closing off other options for good.

“This is one of the first cases [where] we’ve actually been able to see path-dependent evolution,” said Rebecca Varney, a postdoctoral fellow in Todd Oakley’s lab at UCSB and the lead author of the new paper. Although path-dependent evolution has been observed in some bacteria grown in labs, “showing that in a natural system was a really exciting thing to be able to do.”

© 2024 NautilusNext Inc.
Keyword: Vision; Evolution
Link ID: 29203 - Posted: 03.21.2024
By Saima Sidik

Eye diseases long thought to be purely genetic might be caused in part by bacteria that escape the gut and travel to the retina, research suggests. Eyes are typically thought to be protected by a layer of tissue that bacteria can’t penetrate, so the results are “unexpected”, says Martin Kriegel, a microbiome researcher at the University of Münster in Germany, who was not involved in the work. “It’s going to be a big paradigm shift,” he adds. The study was published on 26 February in Cell.

Inherited retinal diseases, such as retinitis pigmentosa, affect about 5.5 million people worldwide. Mutations in the gene Crumbs homolog 1 (CRB1) are a leading cause of these conditions, some of which cause blindness. Previous work suggested that bacteria are not as rare in the eyes as ophthalmologists had previously thought, leading the study’s authors to wonder whether bacteria cause retinal disease, says co-author Richard Lee, an ophthalmologist then at University College London.

CRB1 mutations weaken linkages between cells lining the colon, Lee and his colleagues found, in addition to their long-observed role in weakening the protective barrier around the eye. This motivated study co-author Lai Wei, an ophthalmologist at Guangzhou Medical University in China, to produce Crb1-mutant mice with depleted levels of bacteria. These mice did not show evidence of distorted cell layers in the retina, unlike their counterparts with typical gut flora. Furthermore, treating the mutant mice with antibiotics reduced the damage to their eyes, suggesting that people with CRB1 mutations could benefit from antibiotics or from anti-inflammatory drugs that reduce the effects of bacteria.

“If this is a novel mechanism that is treatable, it will transform the lives of many families,” Lee says.

© 2024 Springer Nature Limited
Keyword: Vision
Link ID: 29167 - Posted: 02.27.2024
By Angie Voyles Askham

The primary visual cortex carries, well, visual information — or so scientists thought until early 2010. That’s when a team at the University of California, San Francisco first described vagabond activity in the brain area, called V1, in mice. When the animals started to run on a treadmill, some neurons more than doubled their firing rate.

The finding “was kind of mysterious,” because V1 was thought to represent only visual signals transmitted from the retina, says Anne Churchland, professor of neurobiology at the University of California, Los Angeles, who was not involved in that work. “The idea that running modulated neural activity suggested that maybe those visual signals were corrupted in a way that, at the time, felt like it would be really problematic.”

The mystery grew over the next decade, as a flurry of mouse studies from Churchland and others built on the 2010 results. Both arousal and locomotion could shape the firing of primary visual neurons, those newer findings showed, and even subtle movements such as nose scratches contribute to variance in population activity, all without compromising the sensory information. A consensus started to form around the idea that sensory cortical regions encode broader information about an animal’s physiological state than previously thought.

At least until last year, when two studies threw a wrench into that storyline: Neither marmosets nor macaque monkeys show any movement-related increase in V1 signaling. Instead, running seems to slightly suppress V1 activity in marmosets, and spontaneous movements have no effect on the same cells in macaques. The apparent differences across species raise new questions about whether mice are a suitable model to study the primate visual system, says Michael Stryker, professor of physiology at the University of California, San Francisco, who led the 2010 work. “Maybe the primate’s V1 is not working the same as in the mouse,” he says. “As I see it, it’s still a big unanswered question.”

© 2024 Simons Foundation
Keyword: Vision
Link ID: 29153 - Posted: 02.20.2024
By Kevin Mitchell

It is often said that “the mind is what the brain does.” Modern neuroscience has indeed shown us that mental goings-on rely on and are in some sense entailed by neural goings-on. But the truth is that we have a poor handle on the nature of that relationship. One way to bridge that divide is to try to define the relationship between neural and mental representations.

The basic premise of neuroscience is that patterns of neural activity carry some information — they are about something. But not all such patterns need be thought of as representations; many of them are just signals. Simple circuits such as the muscle stretch reflex or the eye-blink reflex, for example, are configured to respond to stimuli such as the lengthening of a muscle or a sudden bright light. But they don’t need to internally represent this information — or make that information available to other parts of the nervous system. They just need to respond to it.

More complex information processing, by contrast, such as in our image-forming visual system, requires internal neural representation. By integrating signals from multiple photoreceptors, retinal ganglion cells carry information about patterns of light in the visual stimulus — particularly edges where the illumination changes from light to dark. This information is then made available to the thalamus and the cortical hierarchy, where additional processing goes on to extract higher- and higher-order features of the entire visual scene.

Scientists have elucidated the logic of these hierarchical systems by studying the types of stimuli to which neurons are most sensitively tuned, known as “receptive fields.” If some neuron in an early cortical area responds selectively to, say, a vertical line in a certain part of the visual field, the inference is that when such a neuron is active, that is the information that it is representing. In this case, it is making that information available to the next level of the visual system — itself just a subsystem of the brain.

© 2024 Simons Foundation
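The receptive-field logic in this passage can be made concrete with a toy computation. The sketch below is an assumed illustration, not anything from the essay: a grid of identical units, each tuned to a vertical light-dark edge, is applied across a small image, and only the units whose patch straddles the edge respond.

```python
import numpy as np

# A tiny image containing one vertical light-dark edge at column 4.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A crude vertical-edge "receptive field": negative weights on the left,
# positive on the right, so uniform patches sum to zero.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

def response_map(img, k):
    """Valid cross-correlation: each output value is one unit's activation
    for the patch of the image falling inside its receptive field."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

resp = response_map(image, kernel)
print("peak response:", resp.max(),
      "at columns", np.unique(np.argwhere(resp == resp.max())[:, 1]))
# Units over uniform regions stay silent; units straddling the edge fire,
# which is the sense in which their activity "represents" the edge.
```

When such a unit is active, downstream areas can treat that activity as carrying the message "vertical edge here," which is the inference the essay describes.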
Keyword: Consciousness; Vision
Link ID: 29148 - Posted: 02.13.2024
By Shruti Ravindran

When preparing to become a butterfly, the Eastern Black Swallowtail caterpillar wraps its bright striped body within a leaf. This leaf is its sanctuary, where it will weave its chrysalis. So when the leaf is disturbed by a would-be predator — a bird or insect — the caterpillar stirs into motion, briefly darting out a pair of fleshy, smelly horns. To humans, these horns might appear yellow — a color known to attract birds and many insects — but from a predator’s-eye view, they appear a livid, almost neon violet, a color of warning and poison for some birds and insects. “It’s like a jump scare,” says Daniel Hanley, an assistant professor of biology at George Mason University. “Startle them enough, and all you need is a second to get away.”

Hanley is part of a team that has developed a new technique to depict on video how the natural world looks to non-human species. The method is meant to capture how animals use color in unique — and often fleeting — behaviors like the caterpillar’s anti-predator display. Most animals, birds, and insects possess their own ways of seeing, shaped by the light receptors in their eyes. Human retinas, for example, are sensitive to three wavelengths of light — blue, green, and red — which enables us to see approximately 1 million different hues in our environment. By contrast, many mammals, including dogs, cats, and cows, sense only two wavelengths. But birds, fish, amphibians, and some insects and reptiles typically can sense four — including ultraviolet light. Their worlds are drenched in a kaleidoscope of color — they can often see 100 times as many shades as humans do.

Hanley’s team, which includes not just biologists but multiple mathematicians, a physicist, an engineer, and a filmmaker, claims that their method can translate the colors and gradations of light perceived by hundreds of animals to a range of frequencies that human eyes can comprehend with an accuracy of roughly 90 percent. That is, they can simulate the way a scene in a natural environment might look to a particular species of animal, what shifting shapes and objects might stand out most. The team uses commercially available cameras to record video in four color channels — blue, green, red, and ultraviolet — and then applies open source software to translate the picture according to the mix of light receptor sensitivities a given animal may have.

© 2024 NautilusNext Inc.
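The four-channel pipeline described above boils down to a linear change of basis. Here is a hedged sketch of the general idea, not the team’s actual software: the matrices are made-up placeholders, whereas real coefficients would come from measured camera and photoreceptor sensitivity curves.

```python
import numpy as np

# One pixel per row: camera values [UV, B, G, R], scaled to [0, 1].
pixels = np.array([
    [0.8, 0.1, 0.2, 0.1],   # strongly UV-reflective patch
    [0.0, 0.2, 0.7, 0.3],   # green foliage
])

# Camera channels -> receptor "catches" for a bee-like trichromat with
# UV, blue, and green receptors. Placeholder coefficients only.
cam_to_animal = np.array([
    [0.9, 0.1, 0.0, 0.0],   # UV receptor
    [0.1, 0.8, 0.1, 0.0],   # blue receptor
    [0.0, 0.1, 0.8, 0.1],   # green receptor
])

animal_catches = pixels @ cam_to_animal.T        # shape: (n_pixels, 3)

# False-color rendering for human eyes: show the animal's UV signal as
# blue, its blue as green, and its green as red (a common convention).
human_rgb = np.clip(animal_catches[:, ::-1], 0.0, 1.0)
print(human_rgb)
```

The UV-reflective patch, nearly invisible in an ordinary RGB photo, dominates the blue channel of the rendered image, which is exactly the kind of hidden signal the method is designed to expose.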
Keyword: Vision; Evolution
Link ID: 29133 - Posted: 02.06.2024
By Mark Johnson

There had been early clues, but it was a family game of dominoes around Christmas 2021 that convinced Susan Stewart that something was wrong with her husband. Charlie Stewart, then 75 and retired, struggled to match the dots on different domino tiles. Susan assumed it was a vision problem. Charlie’s memory was fine, and he had no family history of dementia. But months later the Marin County, Calif., couple were shocked to learn that his domino confusion was a sign he had a lesser-known variant of Alzheimer’s disease.

For patients with this variant, called posterior cortical atrophy, the disease begins with problems affecting vision rather than memory. The unusual early symptoms mean that thousands of people may go years before receiving the correct diagnosis, experts said. That may change with the first large-scale international study of the condition, published Monday in the journal Lancet Neurology.

An international team led by researchers at the University of California at San Francisco studied records of 1,092 PCA patients from 16 countries and found that, on average, the syndrome begins affecting patients at age 59 — about five to six years earlier than most patients with the more common form of Alzheimer’s. Although the number of patients with PCA has not been established, researchers say that the variant may account for as many as 10 percent of all Alzheimer’s cases; that would put the number of Americans with the condition close to 700,000.

“We have a lot of work to do to raise awareness about the syndrome,” said Gil D. Rabinovici, one of the study’s authors and director of the UCSF Alzheimer’s Disease Research Center. “One thing that we found in our large study is that by the time people are diagnosed, they’ve had [the disease] for quite a few years.” The study authors said they hope greater awareness of the syndrome will help doctors diagnose it earlier and will encourage researchers to include patients with PCA in future Alzheimer’s clinical trials.
Keyword: Alzheimers; Vision
Link ID: 29107 - Posted: 01.23.2024
By Jaimie Seaton

It’s not uncommon for Veronica Smith to be looking at her partner’s face when suddenly she sees his features changing — his eyes moving closer together and then farther apart, his jawline getting wider and narrower, and his skin moving and shimmering. Smith, age 32, has experienced this phenomenon when looking at faces since she was four or five years old, and while it’s intermittent when she’s viewing another person’s face, it’s more constant when she views her own. “I almost always experience it when I look at my own face in the mirror, which makes it really hard to get ready because I’ll think that I look weird,” Smith explains. “I can more easily tell that I’m experiencing distortions when I’m looking at other people because I know what they look like.”

Smith has a rare condition called prosopometamorphopsia (PMO), in which faces appear distorted in shape, texture, position or color. (PMO is related to Alice in Wonderland syndrome, or AIWS, which distorts the size perception of objects or one’s own body.) PMO has fascinated many scientists. The late neurologist and writer Oliver Sacks co-wrote a paper on the condition that was published in 2014, the year before he died. Brad Duchaine, a professor of psychological and brain sciences at Dartmouth College, explains that some people with it see distortions that affect the whole face (bilateral PMO) while others see only the left or right half of a face as distorted (hemi-PMO).

“Not surprisingly, people with PMO find the distortions extremely distressing. Over the last century, approximately 75 cases have been reported in the literature. However, little is known about the condition because cases with face distortions have usually been documented by neurologists who don’t have expertise in visual neuroscience or the time to study the cases in depth,” Duchaine says.

For 25 years Duchaine’s work has focused on prosopagnosia (face blindness), but after co-authoring a study on hemi-PMO that was published in 2020, Duchaine shifted much of his lab’s work to PMO.

© 2023 SCIENTIFIC AMERICAN
Keyword: Attention; Vision
Link ID: 29051 - Posted: 12.16.2023
By Roberta McLain

Dreams have fascinated people for millennia, yet we struggle to understand their purpose. Some theories suggest dreams help us deal with emotions, solve problems or manage hidden desires. Others postulate that they clean up brain waste, make memories stronger or deduce the meaning of random brain activity. A more recent theory suggests nighttime dreams protect visual areas of the brain from being co-opted during sleep by other sensory functions, such as hearing or touch.

David Eagleman, a neuroscientist at Stanford University, has proposed the idea that dreaming is necessary to safeguard the visual cortex — the part of the brain responsible for processing vision. Eagleman’s theory takes into account that the human brain is highly adaptive, with certain areas able to take on new tasks, an ability called neuroplasticity. He argues that neurons compete for survival. The brain, Eagleman explains, distributes its resources by “implementing a do-or-die competition” for brain territory in which sensory areas “gain or lose neural territory when inputs slow, stop or shift.” Experiences over a lifetime reshape the map of the brain. “Just like neighboring nations, neurons stake out their territory and chronically defend them,” he says.

Eagleman points to children who have had half their brain removed because of severe health problems and then regain normal function. The remaining brain reorganizes itself and takes over the roles of the missing sections. Similarly, people who lose sight or hearing show heightened sensitivity in the remaining senses because the region of the brain normally used by the lost sense is taken over by other senses.

Reorganization can happen fast. Studies published in 2007 and 2008 by Lotfi Merabet of Harvard Medical School and his colleagues showed just how quickly this takeover can happen. The 2008 study, in which subjects were blindfolded, revealed that the seizing of an idle area by other senses begins in as little as 90 minutes. And other studies found that this can occur within 45 minutes. When we sleep, we can smell, hear and feel, but visual information is absent — except during REM sleep.

© 2023 SCIENTIFIC AMERICAN
By Meghan Rosen

In endurance athletes, some brain power may come from an unexpected source. Marathon runners appear to rely on myelin, the fatty tissue bundled around nerve fibers, for energy during a race, scientists report October 10 in a paper posted at bioRxiv.org. In the day or two following a marathon, this tissue seems to dwindle drastically, brain scans of runners reveal. Two weeks after the race, the brain fat bounces back to nearly prerace levels.

The find suggests that the athletes burn so much energy running that they need to tap into a new fuel supply to keep the brain operating smoothly. “This is definitely an intriguing observation,” says Mustapha Bouhrara, a neuroimaging scientist at the National Institute on Aging in Baltimore. “It is quite plausible that myelin lipids are used as fuel in extended exercise.” If what the study authors are seeing is real, he says, the work could have therapeutic implications. Understanding how runners’ myelin recovers so rapidly might offer clues for developing potential treatments — like for people who’ve lost myelin due to aging or neurodegenerative disease.

Much of the human brain contains myelin, tissue that sheathes nerve fibers and acts as an insulator, like rubber coating an electrical wire. That insulation lets electrical messages zip from nerve cell to nerve cell, allowing high-speed communication that’s crucial for brain function. The fatty tissue seems to be a straightforward material with a straightforward job, but there’s likely more to it than that, says Klaus-Armin Nave, a neurobiologist at the Max Planck Institute for Multidisciplinary Sciences in Göttingen, Germany.

“For the longest time, it was thought that myelin sheaths were assembled, inert structures of insulation that don’t change much after they’re made,” he says. Today, there’s evidence that myelin is a dynamic structure, growing and shrinking in size and abundance depending on cellular conditions. The idea is called myelin plasticity. “It’s hotly researched,” Nave says.

© Society for Science & the Public 2000–2023.
Keyword: Glia; Multiple Sclerosis
Link ID: 28983 - Posted: 11.01.2023
By Jacqueline Howard and Deidre McPhillips

Many families of children with autism face long wait times to have their child diagnosed with the disorder, and once a diagnosis is made, it sometimes may not be definitive. But now, two studies released Tuesday suggest that a recently developed eye-tracking tool could help clinicians diagnose children as young as 16 months with autism — and with more certainty.

“This is not a tool to replace expert clinicians,” said Warren Jones, director of research at the Marcus Autism Center at Children’s Healthcare of Atlanta and Nien Distinguished Chair in Autism at Emory University School of Medicine, who was an author on both studies. Rather, he said, the hope with this eye-tracking technology is that “by providing objective measurements that objectively measure the same thing in each child,” it can help inform the diagnostic process.

The tool, called EarliPoint Evaluation, is cleared by the US Food and Drug Administration to help clinicians diagnose and assess autism, according to the researchers. Traditionally, children are diagnosed with autism based on a clinician’s assessment of their developmental history, behaviors and parents’ reports. Evaluations can take hours, and some subtle behaviors associated with autism may be missed, especially among younger children.

“Typically, the way we diagnose autism is by rating our impressions,” said Whitney Guthrie, a clinical psychologist and scientist at the Children’s Hospital of Philadelphia’s Center for Autism Research. She was not involved in the new studies, but her research focuses on early diagnosis of autism.
Keyword: Autism; Schizophrenia
Link ID: 28904 - Posted: 09.13.2023
By Jean Bennett

Gene therapy is a set of techniques that harness DNA or RNA to treat or prevent disease. Gene therapy treats disease in three primary ways: by substituting a disease-causing gene with a healthy new or modified copy of that gene; by turning genes on or off; and by injecting a new or modified gene into the body.

How has gene therapy changed how doctors treat genetic eye diseases and blindness?

In the past, many doctors did not think it necessary to identify the genetic basis of eye disease because treatment was not yet available. However, a few specialists, including me and my collaborators, identified these defects in our research, convinced that someday treatment would be made possible. Over time, we were able to create a treatment designed for individuals with particular gene defects that lead to congenital blindness.

This development of gene therapy for inherited disease has inspired other groups around the world to initiate clinical trials targeting other genetic forms of blindness, such as choroideremia, achromatopsia, retinitis pigmentosa and even age-related macular degeneration, all of which lead to vision loss. There are at least 40 clinical trials enrolling patients with other genetic forms of blinding disease.

Gene therapy is even being used to restore vision to people whose photoreceptors — the cells in the retina that respond to light — have completely degenerated. This approach uses optogenetic therapy, which aims to revive those degenerated photoreceptors by adding light-sensing molecules to cells, thereby drastically improving a person’s vision.

© 2010–2023, The Conversation US, Inc.
Keyword: Vision; Genes & Behavior
Link ID: 28781 - Posted: 05.13.2023
A National Institutes of Health team has identified a compound, already approved by the U.S. Food and Drug Administration, that keeps light-sensitive photoreceptors alive in three models of Leber congenital amaurosis type 10 (LCA 10), an inherited retinal ciliopathy that often results in severe visual impairment or blindness in early childhood.

LCA 10 is caused by mutations of the cilia-centrosomal gene CEP290. Such mutations account for 20% to 25% of all LCA – more than any other gene. In addition to LCA, CEP290 mutations can cause multiple syndromic diseases involving a range of organ systems.

Using a mouse model of LCA 10 and two types of lab-created tissues derived from stem cells, known as organoids, the team screened more than 6,000 FDA-approved compounds to identify ones that promoted survival of photoreceptors, the type of cell that dies in LCA, leading to vision loss. The high-throughput screen identified five potential drug candidates, including reserpine, an old medication previously used to treat high blood pressure.

Observation of the LCA models treated with reserpine shed light on the underlying biology of retinal ciliopathies, suggesting new targets for future exploration. Specifically, the models showed a dysregulation of autophagy, the process by which cells break down old or abnormal proteins, which in this case resulted in abnormal primary cilia, the microtubule-based organelle that protrudes from the surface of most cell types. In LCA 10, CEP290 mutations cause dysfunction of the primary cilium in retinal cells. Reserpine appeared to partially restore autophagy, resulting in improved primary cilium assembly.
Keyword: Vision
Link ID: 28720 - Posted: 03.29.2023