Invariant texture perception is harder with synthetic textures: Implications for models of texture processing.
ABSTRACT: Texture synthesis models have become a popular tool for studying the representations supporting texture processing in human vision. In particular, the summary statistics implemented in the Portilla-Simoncelli (P-S) model support high-quality synthesis of natural textures, account for performance in crowding and search tasks, and may account for the response properties of V2 neurons. We investigated whether these summary statistics are also sufficient to support texture discrimination in a task that required illumination invariance. Our observers performed a match-to-sample task using natural textures photographed under either diffuse overhead lighting or lighting from the side. Following a briefly presented sample texture, participants identified which of two test images depicted the same texture. In the illumination-change condition, illumination differed between the sample and the matching test image; in the no-change condition, sample textures and matching test images were identical. Critically, we also generated synthetic versions of these images using the P-S model and tested participants with them. If the statistics in the P-S model are sufficient for invariant texture perception, performance with synthetic images should not differ from performance in the original task. Instead, we found a significant cost of texture synthesis in both lighting conditions. This effect persisted when power spectra were matched across images (Experiment 2) and when sample and test images were drawn from distinct locations in the parent textures to minimize the contribution of image-based processing (Experiment 3). Invariant texture processing therefore depends on measurements not implemented in the P-S algorithm.
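The power-spectrum matching mentioned for Experiment 2 is commonly done by giving one image the Fourier amplitude spectrum of another while preserving its own phase. The sketch below is a generic illustration of that standard technique, not the authors' code; the function name and the assumption of equal-sized grayscale arrays are ours.

```python
import numpy as np

def match_power_spectrum(image, target):
    """Return a version of `image` whose Fourier amplitude spectrum
    matches that of `target`, keeping `image`'s phase.
    Both inputs are 2-D grayscale arrays of identical shape (assumption)."""
    f_img = np.fft.fft2(image)        # spectrum of the image to modify
    f_tgt = np.fft.fft2(target)       # spectrum supplying the amplitudes
    phase = np.angle(f_img)           # keep original phase structure
    amplitude = np.abs(f_tgt)         # borrow target's power spectrum
    matched = np.fft.ifft2(amplitude * np.exp(1j * phase))
    return np.real(matched)           # discard tiny imaginary residue
```

In practice one would match all stimuli to a common (e.g., average) amplitude spectrum so that no image retains a spectral cue that distinguishes it from the others.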
SUBMITTER: Balas B
PROVIDER: S-EPMC4529380 | biostudies-literature | 2015 Oct
REPOSITORIES: biostudies-literature