Comparing Human Object Learning with Deep Neural Networks

Poster Presentation 56.405: Tuesday, May 23, 2023, 2:45 – 6:45 pm, Pavilion
Session: Object Recognition: Categories


Yinuo Peng1, Zhen Zhu1, Derek Hoiem1, Ranxiao Frances Wang1; 1University of Illinois at Urbana-Champaign

One challenge in machine learning is that networks readily forget previously learned information as they encounter new information. For example, a network pre-trained on ImageNet acquires features useful for classification tasks, but training it on additional images from other sources, whether to improve its performance or to perform new tasks, leads to catastrophic forgetting of the previously learned tasks. This study examined whether this type of catastrophic forgetting is universal to human and machine category learning by testing both under exactly the same conditions. Human participants and a pre-trained convolutional neural network (CNN) learned images of novel objects called greebles under mixed-batch, by-category, and full-set conditions. In the mixed-batch condition, the training images were learned in blocks, with each block containing a mixture of greebles from different classes. In the by-category condition, each block contained mostly greebles from the same class. In the full-set condition, all images were presented in a single block. Human participants learned the classification by judging or guessing class labels and receiving feedback; the CNN was trained on exactly the same images under the same conditions, with the same number of iterations. After learning, both were tested on the learned greebles, novel greebles from the same viewpoint, and novel greebles from a different viewpoint, with performance evaluated by classification accuracy. Our results showed that humans do not exhibit catastrophic forgetting like CNNs: there were no significant effects of learning scheme or test item type. In contrast, the CNN showed significant impairment in the by-category condition (catastrophic forgetting) and on novel objects and novel viewpoints.
This study indicates that humans, compared with machines, might rely on different mechanisms in object category learning that avoid catastrophic forgetting, suggesting potential directions for continual learning in neural networks.
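To make the three learning schemes concrete, the batching logic they describe can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the function name, batch size, and the use of image-id strings are all assumptions for illustration; the abstract does not specify these details.

```python
import random

def make_batches(images_by_class, scheme, batch_size=8, seed=0):
    """Build a training-batch schedule under one of three schemes.

    images_by_class: dict mapping class label -> list of image ids.
    scheme: 'mixed', 'by_category', or 'full_set'.
    (All names and the batch size are illustrative assumptions.)
    """
    rng = random.Random(seed)
    if scheme == "full_set":
        # All images presented in a single block.
        all_imgs = [img for imgs in images_by_class.values() for img in imgs]
        rng.shuffle(all_imgs)
        return [all_imgs]
    if scheme == "mixed":
        # Each block is a mixture of greebles from different classes.
        all_imgs = [img for imgs in images_by_class.values() for img in imgs]
        rng.shuffle(all_imgs)
        return [all_imgs[i:i + batch_size]
                for i in range(0, len(all_imgs), batch_size)]
    if scheme == "by_category":
        # Blocks drawn class by class -- the sequential regime under
        # which CNNs are prone to catastrophic forgetting.
        batches = []
        for imgs in images_by_class.values():
            imgs = imgs[:]
            rng.shuffle(imgs)
            batches.extend(imgs[i:i + batch_size]
                           for i in range(0, len(imgs), batch_size))
        return batches
    raise ValueError(f"unknown scheme: {scheme}")
```

Under this sketch, the only difference between the conditions is the order and composition of the blocks; the images themselves are identical, matching the abstract's statement that the CNN and the human participants saw exactly the same images under each condition.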