A Unifying Model of Orientation Crowding in Peripheral Vision.
ABSTRACT: Peripheral vision is fundamentally limited not by the visibility of features, but by the spacing between them [1]. When too close together, visual features can become "crowded" and perceptually indistinguishable. Crowding interferes with basic tasks such as letter and face identification and thus informs our understanding of how object recognition breaks down in peripheral vision [2]. Multiple proposals have been offered to explain crowding [3], each supported by compelling psychophysical and neuroimaging data [4-6] that are incompatible with competing proposals. These perceptual failures have variously been attributed to the averaging of nearby visual signals [7-10], confusion between target and distractor elements [11, 12], and the limited resolution of visual spatial attention [13]. Here we introduce a psychophysical paradigm that allows systematic study of crowded perception within the orientation domain, and we present a unifying computational model of crowding phenomena that reconciles these conflicting explanations. Our results show that this single paradigm produces the variety of perceptual errors reported across the crowding literature. Critically, a simple model of the responses of populations of orientation-selective visual neurons accurately predicts all of these perceptual errors. We thus provide a unifying mechanistic explanation for orientation crowding in peripheral vision. Our simple model accounts for several perceptual phenomena produced by orientation crowding and raises the possibility that multiple classes of object recognition failure in peripheral vision arise from a single mechanism.
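The abstract describes a model based on the pooled responses of populations of orientation-selective neurons. The snippet below is a minimal illustrative sketch of that general idea, not the authors' published model: it assumes von Mises-like tuning curves, a hypothetical flanker pooling weight, and population-vector decoding, and shows how pooling a target with a nearby flanker can shift the decoded orientation toward an averaging-type error.

```python
# Minimal sketch (illustrative assumptions throughout): simulate a population of
# orientation-tuned neurons, pool target and flanker signals, and decode the
# perceived orientation with a population vector.

import numpy as np

def population_response(theta_deg, preferred_deg, kappa=4.0):
    """Circular (von Mises-like) tuning over the 0-180 deg orientation domain."""
    delta = np.deg2rad(2.0 * (theta_deg - preferred_deg))  # double angles: 180 deg period -> full circle
    return np.exp(kappa * (np.cos(delta) - 1.0))

def decode_orientation(response, preferred_deg):
    """Population-vector readout, mapped back to 0-180 deg."""
    angles = np.deg2rad(2.0 * preferred_deg)
    vector = np.sum(response * np.exp(1j * angles))
    return (np.rad2deg(np.angle(vector)) / 2.0) % 180.0

preferred = np.arange(0.0, 180.0, 1.0)   # preferred orientations of the population
target, flanker = 80.0, 110.0            # example target and flanker orientations

# Crowding modeled here as pooling of target and flanker signals; the flanker
# weight is an assumed free parameter controlling how strongly it intrudes.
for w_flanker in (0.0, 0.5, 1.0):
    pooled = (population_response(target, preferred)
              + w_flanker * population_response(flanker, preferred))
    print(f"flanker weight {w_flanker:.1f}: decoded ~{decode_orientation(pooled, preferred):.1f} deg")
```

With no flanker weight the decoded orientation matches the target; as the weight grows, the readout drifts toward the average of target and flanker, one of the error patterns discussed in the crowding literature.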
SUBMITTER: Harrison WJ
PROVIDER: S-EPMC4792514 | biostudies-literature | 2015 Dec
REPOSITORIES: biostudies-literature