Learning to Recognize Faces by How They Talk
56.337, Tuesday, 20-May, 2:45 pm - 6:45 pm, Jacaranda Hall
Dominique C. Simmons1, Josh J. Dorsi1, James W. Dias1, Theresa C. Cook1, Lawrence D. Rosenblum1; 1University of California Riverside
Seeing speech articulations can facilitate face recognition when other visual information is degraded (Lander & Chuang, 2005). Furthermore, observers can identify familiar faces when the visual information is reduced to visible articulation (Rosenblum et al., 2007). The question at hand is whether observers can learn to recognize unfamiliar faces from visible articulatory information alone. We investigated this question using point-light displays of 10 articulating faces, created by placing fluorescent dots on each speaker’s face, mouth, teeth, tongue, and lips. Nine different point-light configurations were used for each speaker so that subjects could not use point-pattern information for recognition. The 10 speakers were then filmed against a black background saying the sentence, “The football game is over.” Eighteen undergraduates were first shown a single clip of each speaker and told the speaker’s name. During training, subjects saw 4 clips of each speaker, presented in random order, and attempted to identify the speakers by pressing a button labeled with the speakers’ names; subjects received immediate feedback after each trial. During the test phase, subjects were presented with the remaining 4 video clips of each speaker and attempted to identify the same 10 speakers without feedback. Results showed that subjects learned to recognize all of the speakers at better-than-chance levels, t(17) = 8.70, p < .001. Initial tests using single static frames of the point-light videos indicate that identification accuracy declines substantially when faces must be recognized from static point-light images. The results suggest that observers can learn to use talker-specific articulatory movements for face recognition.
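The reported statistic, t(17) = 8.70, is a one-sample t-test of mean identification accuracy against chance (1/10 with 10 speakers, giving 17 degrees of freedom for 18 subjects). The following is an illustrative sketch of that test; the per-subject accuracies below are hypothetical placeholders, not the study's data.

```python
import math

def one_sample_t(scores, chance):
    """One-sample t-test of mean accuracy against a fixed chance level."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    t = (mean - chance) / math.sqrt(var / n)              # t statistic
    return t, n - 1                                       # t, degrees of freedom

# Hypothetical accuracies for 18 subjects; chance = 1/10 with 10 speakers.
scores = [0.30, 0.45, 0.25, 0.40, 0.35, 0.50, 0.20, 0.38, 0.42,
          0.33, 0.28, 0.47, 0.36, 0.31, 0.44, 0.26, 0.39, 0.41]
t, df = one_sample_t(scores, chance=0.10)
```

With 18 subjects the test has df = 17, matching the reported statistic; a mean accuracy well above 0.10 yields a large positive t, as in the study.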