2021 Reunion: Visual Neuroscience From Spikes to Awareness

Monday, May 24, 2021, 8:45 – 10:45 AM EDT
Tuesday, May 25, 2021, 2:30 – 4:30 PM EDT

Organizers: Arash Akbarinia, Vivian Paulun, Guido Maiello, and Kate Storrs, University of Giessen

Since 2004, the European Summer School, Visual Neuroscience: From Spikes to Awareness, has trained many neuroscientists from a broad range of backgrounds. This event aims to reunite alumni and trainees by presenting a number of exciting projects that originated at Rauischholzhausen Castle. We also encourage the participation of prospective attendees who would like to learn about this Summer School, the various opportunities it offers, and the synergistic community it fosters. Alumni from all generations are invited to present their multidisciplinary, more-or-less scientific final projects. We hope there will be at least one contribution from every year of the summer school. This could be the final fun project or anything else you come up with, such as your favorite pictures from the summer school or a ‘How It Started … How It’s Going’ of the attendees; be creative! The bottom line is to meet and catch up, so please do join us.

If you’ve got any questions, send an email to .

2021 Canadian Vision Science Social: Hosted by Vision: Science to Applications (VISTA)

Friday, May 21, 2021, 8:00 – 10:00 PM EDT

Organizers: Caitlin Mullin, VISTA; Doug Crawford, York University
Speakers: Caitlin Mullin, VISTA; Doug Crawford, York University

This social event is open to any VSS member who is, knows, or would like to meet a Canadian Vision Scientist! Join us for casual discussions with students and faculty from several Canadian institutions, or just to satisfy your curiosity as to why we in the North are so polite and good-natured, eh? So grab your toques and your double-doubles and come connect with your favourite Canucks. This year-long lockdown is sure to make for some great hockey hair!

VISTA is the sponsor of the Undergraduate Just-In-Time Poster sessions.

2021 Conversations on Open Science

Friday, May 21, 2021, 5:00 – 7:00 PM EDT

Organizer: VSS Student-Postdoc Advisory Committee
Moderator: Björn Jörges, York University
Speakers: Geoffrey Aguirre, Janine Bijsterbosch, Christopher Donkin, Alex Holcombe, and Russell A. Poldrack

Open Science has become an important part of the scientific landscape. Researchers are adopting open practices such as preregistration, registered reports, open access, and open-source software; journals are making data and code sharing an increasingly desired or even required feature of research publications; and funders are increasingly evaluating applicants’ open science track records alongside their scientific proposals. It is therefore more important than ever for all scientists, and particularly for Early Career Researchers, to be able to navigate the Open Science space. For this reason, the VSS Student-Postdoc Advisory Committee has organized Conversations on Open Science as a means to introduce the VSS community to the basics of Open Science and some current debates.

Conversations on Open Science will start with a short overview of the most important open practices. The speakers will then delve deeper into two topics: preregistration, and code and data sharing. We have invited two speakers for each topic: one argues in favor, while the other argues against, provides nuance, or points out limitations. Both parties will first explain their respective perspectives, followed by a joint presentation in which they work toward some synthesis or common ground.

Geoffrey Aguirre

University of Pennsylvania

Geoffrey Aguirre is an Associate Professor of Neurology at the University of Pennsylvania. He has studied the human visual system using functional MRI for nearly twenty-five years, often combining brain imaging with complementary measures of perception and retinal structure. During his career he has contributed to the analytic and inferential foundation of neuroimaging studies. In recent years he has worked to adopt and advocate for open-science tools, principally as a means to improve his own research. Contact Geoffrey at .

Janine Bijsterbosch

Washington University School of Medicine

Janine Bijsterbosch has worked in brain imaging since 2007. She is currently Assistant Professor in the Computational Imaging section of the Department of Radiology at Washington University in St Louis. The Personomics Lab headed by Dr. Bijsterbosch aims to understand how brain connectivity patterns differ from one person to the next, by studying the “personalized connectome”. Using big data resources such as the Human Connectome Project and UK Biobank, the Personomics Lab adopts cutting edge analysis techniques to study functional connectivity networks and their role in behavior, performance, mental health, disease risk, treatment response, and physiology. Dr. Bijsterbosch is Chair-Elect of the Open Science special interest group as part of the Organization for Human Brain Mapping. In addition, Dr. Bijsterbosch wrote a textbook on functional connectivity analyses, which was published by Oxford University Press in 2017. Contact Janine at .

Christopher Donkin

UNSW Sydney

Christopher Donkin is a cognitive psychologist at UNSW Sydney. His work tends to rely on a mix of computational modelling and experiments. He is interested in decision-making, memory, models, and metascience. While he agrees that open science is of utmost importance, a long series of conversations with Aba Szollosi about how knowledge is created has led him to disagree about the purported benefits of preregistration. Though the content of the talk will be specific to preregistration, the background knowledge underlying these arguments is laid out more carefully here. Contact Chris at .

Alex Holcombe

University of Sydney

Alex Holcombe studies how humans perceive and process visual signals over time, in domains such as motion, position perception, and attentional tracking. Outside of the lab, he has been active in various open science initiatives. He is an associate editor at the journal Meta-Psychology; he co-founded the Registered Replication Report article format at Perspectives on Psychological Science in 2014 and the Association for Psychological Science journal Advances in Methods and Practices in Psychological Science in 2018; and he served on the founding advisory boards of the preprint server PsyArXiv and the journal PLOS ONE. Contact Alex at .

Russell A. Poldrack

Stanford University

Russell A. Poldrack is the Albert Ray Lang Professor in the Department of Psychology and Professor (by courtesy) of Computer Science at Stanford University, and Director of the Stanford Center for Reproducible Neuroscience. His research uses neuroimaging to understand the brain systems underlying decision making and executive function. His lab is also engaged in the development of neuroinformatics tools to help improve the reproducibility and transparency of neuroscience, including the Openneuro.org and Neurovault.org data sharing projects and the Cognitive Atlas ontology. Contact Russ at .

Björn Jörges

York University

Björn Jörges studies the role of prediction for visual perception, as well as visuo-vestibular integration for the perception of object motion and self-motion. Beyond these topics, he also aspires to make science better, i.e., more diverse, more transparent, and more robust. After finishing his PhD in Barcelona on the role of a strong Earth-gravity prior for perception and action, he started a Postdoc in the Multisensory Integration Lab at York University, where he currently investigates how the perception of self-motion changes in response to microgravity. Contact Björn at .

2021 phiVIS: Philosophy of Vision Science Workshop

Sunday, May 23, 2021, 3:30 – 5:30 PM EDT

Organizers: Kevin Lande, York University; Chaz Firestone, Johns Hopkins University
Speakers: Ned Block, Silver Professor of Philosophy, Psychology and Neural Science, NYU; Jessie Munton, Lecturer in Philosophy, University of Cambridge; E.J. Green, Assistant Professor and Class of 1948 Career Development Chair in the Department of Linguistics and Philosophy, MIT; and a slate of invited vision scientists who will facilitate the discussion.

The past decade has seen a resurgence of interest in the intersection between vision science and the philosophy of perception. But opportunities for conversation between vision scientists and philosophers are still hard to come by. The phiVIS workshop is a forum for promoting and expanding this interdisciplinary dialogue. Philosophers of perception can capitalize on the experimental knowledge of working vision scientists, while vision scientists can take advantage of the opportunity to connect their research to long-standing philosophical questions. Short talks by philosophers of perception that engage with the latest research in vision science will be followed by discussion with a slate of vision scientists, on topics such as probabilistic representation in perception, perceptual constancy, amodal completion, multisensory perception, visual adaptation, and much else. This event is supported by York University’s Vision: Science to Applications (VISTA) program and Centre for Vision Research, as well as the Johns Hopkins University Vision Sciences Group.

To register and to learn more about our speakers and our mission, visit: www.phivis.org.

2021 Virtual VPixx Hardware with the LabMaestro Simulator

Tuesday, May 25, 2021, 12:00 – 1:00 PM EDT

Organizers: Dr. Lindsey Fraser, VPixx Technologies; Dr. Sophie Kenny, VPixx Technologies
Speaker: Dr. Lindsey Fraser, VPixx Technologies

Over the past year, VPixx Technologies has developed the LabMaestro Simulator, a software tool that simulates VPixx’s data acquisition hardware. The Simulator can record button presses from a virtual button box, simulate incoming triggers and analog signals to the virtual data acquisition system, and mimic timestamps for a virtual display. The LabMaestro Simulator allows researchers to develop and test experimental protocols without a connection to in-demand hardware or limited-access research sites, such as MRI suites. Little to no modification of code is required to switch between virtual and physical VPixx devices.

The goal of this satellite is to introduce the LabMaestro Simulator and provide an overview of its functionality. We will start with a review of the register-based architecture shared by all of our hardware, and the benefits this architecture offers for signal timing and synchronization. Principles such as writing to hardware registers, as well as locking triggers and data acquisition to visual events, will be discussed. We will show how the simulator replicates this architecture via a virtual server, and highlight the differences between the behaviour of virtual and physical devices, where such differences exist.

The satellite will end with a demonstration of some of the utilities available through our different licensing options. VPixx staff scientists will be available for questions about the Simulator at the end of the satellite, and throughout the remainder of the conference.

We look forward to seeing you there!

2021 Run MATLAB/Psychtoolbox Experiments Online with Pack & Go

Friday, May 21, 2021, 4:00 – 5:00 PM EDT
Sunday, May 23, 2021, 8:00 – 9:00 AM EDT

Organizers: Dr. Sophie Kenny, VPixx Technologies; Dr. Lindsey Fraser, VPixx Technologies
Moderator: Dr. Lindsey Fraser, Staff Scientist at VPixx Technologies
Speaker: Dr. Sophie Kenny, Staff Scientist at VPixx Technologies

Pack&Go is a remote experiment testing and data collection solution under development by VPixx Technologies. Pack&Go runs MATLAB/Psychtoolbox experiments developed by the vision and psychology research communities. The Pack&Go solution provides a high-performance computer architecture for executing Psychtoolbox code remotely. A vetted participant equipped with the correct links and credentials can access the experiment online and stream it to their browser on demand. The participant’s technological requirements are relatively low: the participant will not need to download files to their device or meet specific hardware requirements aside from having a stable internet connection. Pack&Go records data files generated during the execution of the scripts programmed by the researcher, including formats such as .csv and .mat. The data files are stored on a secure server alongside anonymized participant information and information about the network’s quality during the data collection session. When one or more participants have completed the online study, the experiment manager can download the data locally and analyze it, much in the same way as if the researcher had run the experiment on a local computer.

VPixx Technologies has worked since 2001 developing innovative hardware and software solutions to meet the needs of vision scientists and the extended research community. Pack&Go’s development emerged from our long tradition of developing products based on continuous discussions with our customers and in conjunction with early-adopting labs willing to serve as guides for our development. Work on Pack&Go began in 2020 in collaboration with Dr. Caroline Blais and Dr. Daniel Fiset from the University of Quebec in Outaouais (UQO).

With Pack&Go, VPixx Technologies will enable researchers who use Psychtoolbox to retain the ability to design their complex experiments and stimuli and run them online, maintaining similarity with the experiments they usually run in their laboratories.

The satellite session’s objective is to demonstrate the project’s current state in a live demo and obtain early feedback from the community. To help us plan this event, please send an email signalling your interest to . We hope to see you at the satellite session!

2021 An introduction to TELLab 2.0 – A new-and-improved version of The Experiential Learning LABoratory, a web-based platform for educators

Monday, May 24, 2021, 8:00 – 9:00 PM EDT
Wednesday, May 26, 2021, 2:30 – 3:30 PM EDT

Organizers: Jeff Mulligan, Independent contractor to UC Berkeley; Jeremy Wilmer, Wellesley College
Speakers: Ken Nakayama, Jeremy Wilmer, Justin Junge, Jeff Mulligan, Sarah Kerns

This satellite event will provide a tutorial overview of the new-and-improved version of The Experiential Learning Lab (TELLab2), a web-based system that allows students to create and run their own psychology experiments, either by copying and modifying one of the existing experiments or by creating a new one entirely from scratch.  The TELLab project was started a number of years ago by Ken Nakayama and others at Harvard University, and continues today under Ken’s leadership from his new position as adjunct professor at UC Berkeley.  TELLab2 is still in development, but is targeted to be ready for production use in fall classes this year.  This satellite will give a sneak preview of some of the new features not available in the original TELLab, and provide an opportunity for the potential user community to request the additional features that would be most useful in their own teaching.

After a short introduction, TELLab2 gurus will provide a live demonstration of some of the new capabilities.  Complete details can be found on TELLab’s satellite information website:  http://vss.tellab.org.  Potential attendees are welcome to visit the beta version of the site at http://lab2.tellab.org, with the caveat that the site is still in flux and not all of the advertised features are fully functional as of this writing.

Hope to see you there.  Happy experimenting!

V-VSS 2021 Graphics Competition Winner

Each year VSS solicits its membership to submit creative visual images related to the field of vision science, the Society, or the VSS meeting. Traditionally, the winning images are featured on the program, abstracts book, signage, and t-shirts. Due to the online format this year, the winning image appears as the banner throughout the VSS 2021 website.

The Vision Sciences Society is pleased to recognize Susanne Stoll as the winner of the V-VSS 2021 Graphics Competition. Her image, shown above and below, is entitled Global Vision.

Global Vision

Beauty is in the eye of the beholder, and so is the interpretation of Global Vision. However, as with most things in life, there is no end product without a mission. As such, Global Vision attempts to unify three facets of this year’s VSS meeting.

The first facet relates to what we are all striving for, namely understanding vision and how we perceive the ever-changing world around us visually. The second facet is meant to reflect the increased accessibility of this year’s gathering due to its virtual nature, with us being distributed all over the globe. The third facet relates to the multi-focal character of the VSS and thus its broad scope, bringing together expertise from various subdomains, including visual psychophysics, visual neuroscience, computational vision, visual cognition, and bordering fields.

Global Vision attempts to feature these facets by dynamically projecting a map of the world onto the right eye of an unknown other standing right in front of you. A static circular searchlight takes snapshots of the map, generating a globe that doubles as the right iris of the unknown other. The different snapshots can be interpreted as echoing an ever-changing world, the different regions in which the VSS schedules events (broadly), and the subdisciplines the VSS unites. The circular VSS logo hosts a pupil and is meant to represent the left iris. The wavy lines (or sinusoids) demarcate the overall shapes of the right and left eyes, but can also be seen as a decorative element encapsulating the different facets.

By looking you right in the eyes, Global Vision is also meant to ask you quite candidly what your global vision is.

Special thanks go to my colleagues and friends in London and Auckland, the Board of Directors, and the VSS organization team for providing constructive feedback on my design idea. I wish everybody a superb and insightful V-VSS 2021.

About Susanne Stoll

Susanne Stoll completed her undergraduate studies in Psychology at the University of Tübingen, followed by an MSc in Mind and Brain at Humboldt University of Berlin. Currently, she is a final year PhD student under the supervision of Dr. Sam Schwarzkopf and Dr. John Greenwood at University College London. Her research uses functional magnetic resonance imaging and population receptive field (pRF) modeling to investigate how perceptual grouping and spatial attention modulate the visual brain’s representation of visual information. Susanne also has a keen interest in relating pRF properties to behavior as well as counteracting regression fallacies and probing the validity of analysis procedures in visual neuroimaging and beyond.

2021 Young Investigator Award – Martina Poletti

The Vision Sciences Society is honored to present Martina Poletti with the 2021 Elsevier/VSS Young Investigator Award.

The Young Investigator Award is given to an early-stage researcher who has already made a significant contribution to our field. The award is sponsored by Elsevier, and the awardee is invited to submit a review paper to Vision Research highlighting this contribution.

Martina Poletti

Assistant Professor
Department of Brain and Cognitive Sciences
University of Rochester

The 2021 Elsevier/VSS Young Investigator Award goes to Dr. Martina Poletti for fundamental contributions to our understanding of eye movements, microsaccades, and the nature of visual-motor function and attention within the foveola. Dr. Poletti is an Assistant Professor in the Department of Brain and Cognitive Sciences at the University of Rochester. She received her Bachelor’s degree and Master’s degree at the University of Padova and completed her doctoral and postdoctoral work at Boston University.

Dr. Poletti’s research addresses core questions regarding the interplay of attention and eye movements at the foveal scale. Her scholarly contributions will help revise textbook descriptions of the central fovea as a region of uniformly high acuity, and of microsaccades as involuntary eye movements whose purpose is merely to refresh the retinal image during fixation. Dr. Poletti’s experiments have capitalized on high-resolution eye tracking and gaze-contingent display to demonstrate that microsaccades are not random but purposeful, serving to bring task-relevant items to the preferred region within the foveola. Her work has revealed that fine spatial vision within the 1-deg foveola is non-uniform and selectively modulated by attention. Within this microcosm of visual space, covert and overt shifts of attention can still be observed operating with remarkably high precision, guiding microsaccades in an active exploration of details. Dr. Poletti’s research exemplifies creative experimentation, cutting-edge methodology, and rigorous evaluation of longstanding theories in vision science.

Elsevier/Vision Research Article

The interplay of attention and eye movements at the foveal scale

Dr. Poletti will speak during the Awards session,
Sunday, May 23, 2021, 2:30 – 3:30 pm EDT.

Human vision relies on a tiny region of the retina, the foveola, to achieve high spatial resolution. Foveal vision is of paramount importance in daily activities, yet its study is challenging, as eye movements incessantly displace stimuli across this region. Building on recent advances in eye-tracking and gaze-contingent display, we have examined how attention and eye movements operate at the foveal level. We have shown that exploration of fine spatial detail unfolds following visuomotor strategies reminiscent of those occurring at larger scales. Together with highly precise control of attention, this motor activity is linked to non-homogenous processing within the foveola and selectively modulates sensitivity both in space and time. Therefore, high acuity vision is not the mere consequence of placing a stimulus at the center of gaze: it is the outcome of a synergy of motor, cognitive, and attentional processes, all finely tuned and dynamically orchestrated.

2021 Sponsors

The Vision Sciences Society thanks the following sponsors for their support of our 2021 meeting.


Awards Sponsor

Elsevier/Vision Research

Elsevier is proud to sponsor the 2021 Young Investigator Award and the V-VSS 2021 Elsevier/Vision Research Travel Awards.

Elsevier is a global information analytics business that helps institutions and professionals advance healthcare, open science and improve performance for the benefit of humanity.

We help researchers make new discoveries, collaborate with their colleagues, and give them the knowledge they need to find funding. We help governments and universities evaluate and improve their research strategies. We help doctors save lives, providing insight for physicians to find the right clinical answers, and we support nurses and other healthcare professionals throughout their careers. Our goal is to expand the boundaries of knowledge for the benefit of humanity.


Gold Sponsor

VPixx Technologies

VPixx Technologies welcomes the vision science community to V-VSS 2021.  This year VPixx celebrates our 20th anniversary, and we are marking this special occasion with the launch of two new tools for your research: the LabMaestro Pack&Go Remote Testing Tool and the LabMaestro Hardware Simulator.

Over the past year, the need for remote data collection platforms has become clear. The VPixx team has created LabMaestro Pack&Go, a tool for remote data collection with MATLAB/Psychtoolbox-based experiment protocols.  With Pack&Go, researchers can deploy MATLAB/Psychtoolbox experiments to remote participants on a local or global scale, while monitoring communication performance to ensure data quality. This tool allows researchers to test participants using the subject’s own personal computer, with no MATLAB/Psychtoolbox installation required. Consult the VSS Satellite Events if you would like to learn more, or to participate in your first Psychtoolbox Pack&Go experiment!

VPixx Technologies is known for our innovative hardware for vision research.  The PROPixx DLP LED video projector, supporting refresh rates up to 1440 Hz, has become a standard for neuroimaging, neurophysiology, and behavioral vision research applications.  The TRACKPixx3 2 kHz binocular eye tracker and the DATAPixx I/O hub offer microsecond-precise data acquisition synchronized to stimulus presentation.  This year we launch the LabMaestro Hardware Simulator, a software tool that simulates VPixx hardware, allowing researchers to develop and test experiment protocols while the physical instruments are unavailable or in use. Consult the VSS Satellite Events if you would like to learn more!

Peter April, Jean-Francois Hamelin, and the entire VPixx Team wish you well.

Vision: Science to Applications (VISTA)

Vision: Science to Applications (VISTA) is a collaborative program funded by the Canada First Research Excellence Fund (CFREF). VISTA’s central ‘vision’ is to create a novel transdisciplinary program that expands and integrates York University’s unique strengths in biological and computational vision and translates this research into real-world applications. Our interdisciplinary approach, spanning visual neuroscience to computer vision and beyond, will create impact through strategic collaboration with our partners from around the globe. VISTA also provides important graduate, post-doctoral, and researcher funding opportunities to enable cutting-edge research, and will create knowledge and technologies that will help people live healthier, safer, and more productive lives. You can learn more about VISTA’s highlights and accomplishments over the past 3.5 years in this soft copy version of our Impact Report.


Silver Sponsors

Rogue Research Inc.

Rogue Research has been your partner in neuroscience research for over 20 years. As developers of the Brainsight® family of neuronavigation systems for non-invasive brain stimulation, we have helped make transcranial magnetic stimulation more accurate and more reproducible while keeping it simple and effective. 20 years and 600 laboratories later, Brainsight® continues to evolve to meet the needs of non-invasive brain stimulation and has expanded into functional brain imaging. Brainsight NIRS combines the power of neuronavigation, ensuring accurate placement of NIRS optodes, with our NIRS hardware, which incorporates low-profile, TMS-, MRI-, and MEG-compatible optodes.

Rogue Research has expanded beyond navigation to develop our own, next-generation, TMS device: Elevate™ TMS. Elevate™ TMS offers control over the pulse shape to ensure more reproducible excitatory or inhibitory effects on the targeted network. While Brainsight® ensures accurate targeting and Elevate™ TMS ensures reliable circuit interaction, Rogue Research is actively developing a robotic positioner to ensure that the plan is accurately and efficiently carried out. The unique design will ensure reachability and simplicity.

Rogue Research also offers our Brainsight® Vet line of neurosurgical navigation tools including our microsurgical robot. We also offer custom and MRI compatible implants, a line of MRI coils and testing chairs.

SR Research Ltd.

SR Research produces the EyeLink family of high-speed eye trackers and has been enabling scientists to perform cutting-edge research since the early 1990s. EyeLink systems are renowned for their outstanding technical specifications, temporal precision, and superb accuracy. The EyeLink 1000 Plus has the world’s lowest spatial noise and can be used in the laboratory and in EEG/MEG/MRI environments. The EyeLink Portable Duo offers the same high levels of data quality in a small, portable package. SR Research also provides sophisticated experiment delivery and analysis software, and a truly legendary support service.

Qualcomm

Qualcomm is the world’s leading wireless technology innovator and the driving force behind the development, launch, and expansion of 5G. When we connected the phone to the internet, the mobile revolution was born. Today, our foundational technologies enable the mobile ecosystem and are found in every 3G, 4G and 5G smartphone. We bring the benefits of mobile to new industries, including automotive, the internet of things, and computing, and are leading the way to a world where everything and everyone can communicate and interact seamlessly.

WorldViz VR

For 20 years, WorldViz VR has helped over 1500 universities, businesses and government organizations to conduct leading edge research with Virtual Reality.

Over the years, WorldViz VR has developed Vizard, a python-based platform that enables users to rapidly build 3D virtual reality applications that solve real world business and research challenges.

At VSS 2021, WorldViz will present for the first time a fully GUI-based tool that allows users to collect, review, and analyze eye-tracking data, with support for all the major PC-based VR eye-tracking devices, including the new StarVR One, Vive Pro Eye, Pupil Labs, and Tobii VR. It will allow drag-and-drop adding of videos and 3D models, and many of the most-used analytics methods are included in the provided templates.

Build a scene, run your experiment, and review the results in minutes. The tool is fully expandable and modifiable using the GUI configurator or Python code.

The WorldViz components allow integration of highly targeted VR labs, and we are happy to help customers configure their own labs, tailored to their specific needs.

Eyeware

Eyeware is a Swiss computer vision company developing eye-tracking software for consumer-grade, depth-sensing cameras. Our innovative 3D eye tracking enables real-world interactions, capturing user attention, intention, and interest. Eyeware’s technology can be easily adapted and integrated into a large variety of applications, such as academic research, robotics, and human-machine interaction. Academic researchers can collect robust, accurate, and efficient attention data via our Python API or CSV export to understand how participants observe and respond to changes in their environment.

See an overview video of the GazeSense App here, or find information about our SDK. More information can be found on our website.


Bronze Sponsors

Brain Vision LLC

Brain Vision is the leading team for EEG in Vision Science. We offer full integration of EEG with many leading eye-tracking and video systems, and we also provide flexible and robust solutions for both stationary and mobile EEG. All of our systems are available with a variety of electrode types, such as saline-sponge nets, active gel, passive, and dry electrodes, and are easily expandable with bio-sensors like GSR, ECG, respiration, and EMG. Our team specializes in using EEG with other modalities such as fMRI, fNIRS, MEG, TMS, and tDCS/HD-tDCS.

If you want to know how EEG and Vision Science can improve each other, please feel free to contact us:

Let us help you push the edge of what research is possible!

Vision Sciences Society