Dataset Information

Feature saliency and feedback information interactively impact visual category learning.


ABSTRACT: Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to an object's features most relevant for categorization while 'filtering out' irrelevant features. When the features relevant for categorization are not salient, VCL also relies on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks) and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information, non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that the increased attentional load associated with processing moderately ambiguous feedback has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative, participants were ultimately able to attain the same performance as in the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency, mid-information feedback task. We suggest that such low-saliency, mid-information learning scenarios are characterized by a 'cognitive loop paradox' in which two interdependent learning processes have to take place simultaneously.

SUBMITTER: Hammer R 

PROVIDER: S-EPMC4333777 | biostudies-literature | 2015

REPOSITORIES: biostudies-literature

Publications

Feature saliency and feedback information interactively impact visual category learning.

Rubi Hammer, Vladimir Sloutsky, Kalanit Grill-Spector

Frontiers in Psychology, 2015-02-19

Similar Datasets

| S-EPMC5239721 | biostudies-literature
| S-EPMC5216849 | biostudies-literature
| S-EPMC10237412 | biostudies-literature
| S-EPMC6380202 | biostudies-literature
| S-EPMC6482054 | biostudies-literature
| S-EPMC3946254 | biostudies-literature
| S-EPMC11293709 | biostudies-literature
| S-EPMC8807758 | biostudies-literature
| S-EPMC3349628 | biostudies-literature
| S-EPMC5760293 | biostudies-literature