Humans can recognize objects under conditions of severe blur, despite the vast loss of high spatial frequency content. Our previous work suggested that the pervasive occurrence of blur in everyday vision may contribute to robust recognition behavior: convolutional neural networks (CNNs) optimized to recognize blurry objects better predict neural responses to objects across diverse viewing conditions in both macaques and humans (Jang & Tong, VSS, 2022). These findings led us to two questions. Can humans further improve their ability to recognize blurry objects with supervised training? And do CNNs trained on blurry objects learn robust properties that better approximate human performance, or do they instead learn alternative shortcut strategies? We devised a novel training protocol in which an initially blurred object gradually became less blurry over time, allowing participants to decide when they were ready to classify the object's identity. After each classification response, participants used a mouse pointer to demarcate the diagnostic regions of the blurred image that contributed to their decision, yielding a feature saliency map. Participants showed significant improvement in their blur recognition thresholds after only four sessions of perceptual training. Moreover, CNNs optimized to recognize blurry objects provided a better match to participants' blur recognition thresholds across individual object images. The feature saliency maps of blur-trained CNNs also suggested somewhat more holistic processing of visual objects, providing a better correspondence with the spatially diagnostic regions annotated by human observers.
Taken together, our study shows that human robustness to blur is highly adaptable, improves with modest training experience, and likely benefits from the frequent encounters that people have with blur in everyday life.
Acknowledgements: Supported by NIH grant R01EY029278 to Frank Tong.