CMU-CS-15-128
Computer Science Department, School of Computer Science, Carnegie Mellon University

Cortical Inference of Visual Input
Esha Uboweja
August 2015
M.S. Thesis
Vision is one of the most vital and complex functions of the brain. Photoreceptors in the eye encode light intensities, and ganglion cells in the retina generate spike trains that convey this information to the visual cortex. These spike trains are sparse and stochastic, so the brain can recover a mental image of the visual scene only by integrating the spikes over time. However, the left and right eyes jitter incessantly and independently, millisecond by millisecond, which rules out a simple integration process. The brain has to infer the image representation of the visual scene while simultaneously estimating the net eye-jitter trajectory, a difficult chicken-and-egg problem. It is intriguing that the visual system can use information from sparse and corrupted spike trains to infer what the eyes see, clearly and in intricate detail. Burak et al. demonstrate the inference of binary images of static scenes in the presence of eye jitter from retinal spike-train input via a Factorized Bayesian Decoder. But how does the visual cortex infer details of objects moving in a scene? Since objects move continuously through space over time, the spikes emitted by the retina from any one object location are even sparser. We seek to generalize this Bayesian factorization framework to handle (1) a dynamic scene with an object moving relative to a background, and (2) gray-level, more naturalistic images. The system reconstructs and infers details of the moving object and separates it from the recovered image of the background. We demonstrate the feasibility of this framework in addressing the puzzle of how the brain can construct a stabilized image of dynamic visual scenes in the presence of incessant eye movements.
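The abstract does not give the decoder's equations, but the flavor of a factorized Bayesian decoder can be illustrated with a small toy simulation. The sketch below is a 1D, discrete-time illustration under assumed parameters (the image size, jitter range, and spike probabilities P_ON/P_OFF are invented for the example); it is not the thesis's implementation. It keeps a belief over the current eye offset and an independent per-pixel probability that each image pixel is "on", and alternately updates both from sparse binary "spikes".

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy world (all settings are illustrative assumptions, not the thesis's) ---
N = 32                     # number of image pixels (1D for simplicity)
SHIFTS = np.arange(-3, 4)  # possible eye-jitter offsets
P_ON, P_OFF = 0.20, 0.02   # per-step spike probability for "on" / "off" pixels
T = 2000                   # number of time steps

image = (rng.random(N) < 0.3).astype(float)   # hidden binary scene

# --- Decoder state: factorized posterior ---
pos_prob = np.full(len(SHIFTS), 1.0 / len(SHIFTS))  # belief over eye offset
pix_prob = np.full(N, 0.5)                          # per-pixel P(pixel is on)

def shifted(vec, s):
    """Circularly shift a vector by s (toy stand-in for eye displacement)."""
    return np.roll(vec, s)

true_pos = 0
for t in range(T):
    # Eye jitter: a small random walk over the allowed offsets
    true_pos = int(np.clip(true_pos + rng.integers(-1, 2),
                           SHIFTS[0], SHIFTS[-1]))
    # Retina sees the image shifted by the current eye offset; emit sparse spikes
    rates = np.where(shifted(image, true_pos) > 0, P_ON, P_OFF)
    spikes = (rng.random(N) < rates).astype(float)

    # Predict: the eye may have drifted since the last step, so smear the
    # position belief slightly (simple diffusion)
    pos_prob = np.convolve(pos_prob, [0.25, 0.5, 0.25], mode="same")
    pos_prob /= pos_prob.sum()

    # 1) Update the position belief: likelihood of the spikes under each
    #    candidate shift, using the current factorized pixel marginals
    like = np.empty(len(SHIFTS))
    for k, s in enumerate(SHIFTS):
        m = shifted(pix_prob, s)
        p_spike = m * P_ON + (1 - m) * P_OFF
        like[k] = np.prod(np.where(spikes > 0, p_spike, 1 - p_spike))
    pos_prob *= like
    pos_prob /= pos_prob.sum()

    # 2) Update the pixel marginals, averaging the per-pixel Bayes update
    #    over the position belief
    new_pix = np.zeros(N)
    for k, s in enumerate(SHIFTS):
        r = shifted(spikes, -s)              # spikes mapped back to image coords
        p_on = np.where(r > 0, P_ON, 1 - P_ON)
        p_off = np.where(r > 0, P_OFF, 1 - P_OFF)
        post = pix_prob * p_on / (pix_prob * p_on + (1 - pix_prob) * p_off)
        new_pix += pos_prob[k] * post
    pix_prob = new_pix

recovered = (pix_prob > 0.5).astype(float)
print("pixel accuracy:", (recovered == image).mean())
```

The structure of the sketch mirrors the chicken-and-egg problem described above: the eye-position update relies on the current image estimate, and the image update marginalizes over the current position belief, so neither quantity is ever inferred in isolation.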
75 pages
Frank Pfenning, Head, Computer Science Department