What we don’t see shapes what we see: peripheral word semantics gates visual awareness

Poster Presentation 26.402: Saturday, May 18, 2024, 2:45 – 6:45 pm, Pavilion
Session: Multisensory Processing: Audiovisual behavior

Shao-Min (Sean) Hung1,2, Sotaro Taniguchi2, Akira Sarodo2, Katsumi Watanabe2; 1Waseda Institute for Advanced Study, Waseda University, 2Faculty of Science and Engineering, Waseda University

Empirical evidence from vision science indicates that language constrains our perception; in particular, semantic categories help construct our visual experience. In the periphery, however, visual acuity drops dramatically, making it inevitably difficult to extract semantic information through word recognition. The current study directly addressed this issue by examining whether peripheral word semantics can influence our vision. We leveraged a peripheral sound-induced flash illusion, in which the perceived number of visual flashes is often dominated by the number of auditory beeps delivered. In each trial, two or three Mandarin characters were flashed briefly from left to right in the periphery, accompanied by number-congruent or number-incongruent beeps. We first successfully replicated the original illusions: incongruent audiovisual presentations led to auditory dominance. For example, when three characters were presented together with two beeps, observers often reported perceiving only two characters; conversely, an additional beep induced an illusory visual percept. Crucially, we found that when the three characters formed a word, the lack of a concurrent beep (i.e., three characters with two beeps) suppressed the awareness of an existing character to a greater extent. Intriguingly, participants' successful recognition of the words was not necessary: a separate experiment replicated the effect in participants who were unable to recognize the words, corroborating the implicit nature of the effect. When the conventional reading direction was disrupted by reversing the presentation order, the effect disappeared. Furthermore, using Japanese, a language with both logographic (kanji) and phonetic (hiragana and katakana) writing systems, we showed that this effect was specific to the logographic system.
These findings demonstrate the capacity of our visual system to extract peripheral semantic information without word recognition, which in turn regulates our visual awareness.

Acknowledgements: This work was supported by a sub-award under the Aligning Consciousness Research with US Funding Mechanisms program of the Templeton World Charity Foundation (TWCF: 0495) and by Waseda University Grants for Special Research Projects.