
Representational geometry measures predict categorisation speed for particular visual objects

52.23, Tuesday, 20-May, 10:45 am - 12:30 pm, Talk Room 2
Session: Object recognition: Neural mechanisms 2

Ian Charest1, Thomas A. Carlson2, Nikolaus Kriegeskorte1; 1Medical Research Council – Cognition and Brain Sciences Unit, Cambridge, UK, 2Department of Cognitive Sciences, Macquarie University, Sydney, Australia

Choice reaction time reflects the rate of accumulation of sensory evidence. For categorisation, Carlson et al. (2013) showed that an animate object can be recognised as such more rapidly when it lies further from the animate-inanimate boundary in human-inferior-temporal (hIT) representational space (as reflected in the fMRI data of Kriegeskorte et al., 2008). Here, we extend these results by considering multiple category dichotomies, multiple measures of representational geometry, and multiple ventral-stream brain regions. Subjects categorised objects according to four dichotomies: animate vs. inanimate; face vs. body; human vs. animal face; natural vs. artificial inanimate object. We examined a range of representational-geometry measures, including each object’s representational centrality (i.e. its average distance to other members of its own category), its distinctness from the other category (i.e. its average distance to members of the other category), and the proportion of category peers within its local neighbourhood. We assessed whether these measures predict subjects’ reaction times for particular objects using Spearman’s rank correlation (ρ, computed within each subject and then averaged across subjects) combined with non-parametric inference and control of the false-discovery rate. For most tasks and ventral-stream brain regions, an object’s within-category representational centrality was the best predictor of reaction time among the tested representational-geometry measures. Three example results are: (1) hIT centrality predicted reaction time for animate vs. inanimate categorisation (ρ = 0.2; p < 0.0001). (2) Centrality in the fusiform face area predicted reaction time for human vs. animal face categorisation (ρ = 0.2; p < 0.01). (3) Centrality in the parahippocampal place area predicted reaction time for natural vs. artificial inanimate categorisation (ρ = 0.29; p < 0.001). Our results suggest that representational geometry, as reflected in human fMRI, can explain aspects of the computational processes leading to a decision about a particular visual object.
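
Below is a minimal Python sketch of how the three representational-geometry measures defined above, and their per-subject Spearman correlation with reaction times, could be computed. It assumes an object-by-object representational dissimilarity matrix (RDM), binary category labels, and per-object reaction times as inputs; the neighbourhood size k, the synthetic example data, and the use of NumPy/SciPy are assumptions for illustration, not the authors' analysis code.

import numpy as np
from scipy.stats import spearmanr

def geometry_measures(rdm, labels, k=10):
    # rdm: (n, n) symmetric representational dissimilarity matrix
    # labels: (n,) binary category labels (e.g. 1 = animate, 0 = inanimate)
    # k: local-neighbourhood size (an assumption; the abstract does not specify it)
    n = rdm.shape[0]
    centrality = np.empty(n)    # average distance to other members of the object's own category
    distinctness = np.empty(n)  # average distance to members of the other category
    peer_prop = np.empty(n)     # proportion of category peers among the k nearest neighbours
    for i in range(n):
        same = labels == labels[i]
        same[i] = False                            # exclude the object itself
        other = labels != labels[i]
        centrality[i] = rdm[i, same].mean()
        distinctness[i] = rdm[i, other].mean()
        neighbours = np.argsort(rdm[i])[1:k + 1]   # k nearest neighbours, skipping self (distance 0)
        peer_prop[i] = np.mean(labels[neighbours] == labels[i])
    return centrality, distinctness, peer_prop

if __name__ == "__main__":
    # Synthetic stand-in data: 92 objects, as in the Kriegeskorte et al. (2008) image set.
    rng = np.random.default_rng(0)
    points = rng.normal(size=(92, 20))
    rdm = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    labels = rng.integers(0, 2, size=92)           # e.g. animate vs. inanimate
    rts = rng.normal(600.0, 50.0, size=92)         # hypothetical per-object mean reaction times (ms)
    centrality, distinctness, peer_prop = geometry_measures(rdm, labels)
    # Per-subject Spearman correlation; the abstract averages rho across subjects and
    # uses non-parametric inference with false-discovery-rate control (not shown here).
    rho, p = spearmanr(centrality, rts)
    print(f"Spearman rho (centrality vs. RT): {rho:.2f}")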
