2022 Computational and Mathematical Models in Vision (MODVIS)

Thursday, May 12, 2022, 9:00 am – 6:30 pm EDT, Horizons
Friday, May 13, 2022, 9:00 – 11:30 am EDT, Horizons

Organizers: Jeff Mulligan, Freelance Vision Scientist; Zygmunt Pizlo, UC Irvine; Anne B. Sereno, Purdue University; Qasim Zaidi, SUNY College of Optometry

Keynote Selection Committee: Yalda Mohsenzadeh, MIT; Michael Rudd, University of Washington

A keynote address will be given by George Sperling, Distinguished Professor, University of California, Irvine.

More information about the workshop, including how to register, can be found at the workshop website https://www.purdue.edu/conferences/events/modvis/.  The registration fees are $140 (regular) and $70 (student), which cover audio-visual expenses, coffee and snacks, and the VSS satellite fee.

The workshop features contributed presentations that are longer than standard VSS talks, with interactive discussion. Contributions are solicited on all aspects of modeling and simulation.

2021 Satellite Events

An introduction to TELLab – The Experiential Learning LABoratory, a web-based platform for educators

An introduction to TELLab 2.0 – A new-and-improved version of The Experiential Learning LABoratory, a web-based platform for educators

Canadian Vision Science Social

Measuring and Maximizing Eye Tracking Data Quality with EyeLink

Mentoring Envisioned

New Tools for Conducting Eye Tracking Research

Performing Eye Tracking Studies in VR

phiVIS: Philosophy of Vision Science Workshop

Reunion: Visual Neuroscience From Spikes to Awareness

Run MATLAB/Psychtoolbox Experiments Online with Pack & Go

Teaching Vision

Virtual VPixx Hardware with the LabMaestro Simulator

Visibility: A Gathering of LGBTQ+ Vision Scientists and Friends

2021 An introduction to TELLab – The Experiential Learning LABoratory, a web-based platform for educators

Saturday, May 22, 2021, 8:00 – 9:00 AM EDT

Organizers: Jeff Mulligan, Independent contractor to UC Berkeley; Jeremy Wilmer, Wellesley College
Speakers: Ken Nakayama, Jeremy Wilmer, Justin Junge, Jeff Mulligan, Sarah Kerns

This satellite event will provide a tutorial overview of The Experiential Learning Lab (TELLab), a web-based system that allows students to create and run their own psychology experiments, either by copying and modifying one of the many existing experiments or by creating a new one entirely from scratch.  The TELLab project was begun a number of years ago by Ken Nakayama and others at Harvard University, and continues today under Ken’s leadership from his new position as adjunct professor at UC Berkeley.  To date, TELLab has been used by around 20 instructors and 5000 students.

After a short introduction, TELLab gurus will demonstrate the process of creating and running an experiment, exporting the data and analyzing the results.  Complete details can be found on TELLab’s satellite information website:  http://vss.tellab.org.  Potential attendees are encouraged to visit the site at http://lab.tellab.org beforehand to create their own account and explore the system on their own.

Hope to see you there.  Happy experimenting!

2021 Performing Eye Tracking Studies in VR

Tuesday, May 25, 2021, 9:15 – 10:15 AM EDT
Tuesday, May 25, 2021, 5:15 – 6:15 PM EDT

Organizers: Belle Lin, WorldViz VR; Matthias Pusch, WorldViz VR
Speakers: Sado Rabaudi, Dan Tinkham, Matthias Pusch, Andrew Beall

WorldViz VR will teach participants how to set up and perform eye tracking studies in VR using Python and a GUI-based configurator. We will explain drag-and-drop methods for adding 360 videos and 3D models, and demonstrate analytics methods with associated templates. At the end of this session, participants will know how to insert their own 3D geometry or 360 video into VR scenes, generate 3D visualizations of the scene and gaze path, extract gaze intersects, view an interactive session replay, save out raw data, and modify the template using their own target objects and parameters.
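As a rough, generic illustration of one of these steps, the sketch below computes a gaze intersect by casting the gaze ray against a flat surface. It uses plain numpy with assumed coordinates rather than WorldViz's Vizard API, which provides its own intersection and replay tools.

import numpy as np

def gaze_plane_intersect(eye_pos, gaze_dir, plane_point, plane_normal):
    # Return the point where the gaze ray hits the plane, or None if it misses.
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(plane_normal, gaze_dir)
    if abs(denom) < 1e-9:          # gaze ray runs parallel to the surface
        return None
    t = np.dot(plane_normal, plane_point - eye_pos) / denom
    if t < 0:                      # surface lies behind the viewer
        return None
    return eye_pos + t * gaze_dir

# Example: eye at 1.6 m height, gaze slightly down and to the right,
# intersected with a wall 2 m in front of the viewer.
hit = gaze_plane_intersect(np.array([0.0, 1.6, 0.0]),
                           np.array([0.1, -0.05, 1.0]),
                           np.array([0.0, 0.0, 2.0]),
                           np.array([0.0, 0.0, -1.0]))
print(hit)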

The presentation and teaching will be provided as a remote meeting with screen sharing. A live camera view will allow participants to observe the eye tracker setup and operation for several leading eye-tracked VR headsets.

2021 Canadian Vision Science Social: Hosted by Vision: Science to Applications (VISTA)

Friday, May 21, 2021, 8:00 – 10:00 PM EDT

Organizers: Caitlin Mullin, VISTA; Doug Crawford, York University
Speakers: Caitlin Mullin, VISTA; Doug Crawford, York University

This social event is open to any VSS member who is, knows, or would like to meet a Canadian Vision Scientist! Join us for casual discussions with students and faculty from several Canadian institutes, or just to satisfy your curiosity as to why we in the North are so polite and good-natured, eh? So grab your toques and your double-double and come connect with your favourite Canucks. This year-long lockdown is sure to make for some great hockey hair!

VISTA is the sponsor of the Undergraduate Just-In-Time Poster sessions.

2021 An introduction to TELLab 2.0 – A new-and-improved version of The Experiential Learning LABoratory, a web-based platform for educators

Monday, May 24, 2021, 8:00 – 9:00 PM EDT
Wednesday, May 26, 2021, 2:30 – 3:30 PM EDT

Organizers: Jeff Mulligan, Independent contractor to UC Berkeley; Jeremy Wilmer, Wellesley College
Speakers: Ken Nakayama, Jeremy Wilmer, Justin Junge, Jeff Mulligan, Sarah Kerns

This satellite event will provide a tutorial overview of the new-and-improved version of The Experiential Learning Lab (TELLab2), a web-based system that allows students to create and run their own psychology experiments, either by copying and modifying one of the existing experiments or by creating a new one entirely from scratch.  The TELLab project was begun a number of years ago by Ken Nakayama and others at Harvard University, and continues today under Ken’s leadership from his new position as adjunct professor at UC Berkeley.  TELLab2 is still in development, but is targeted to be ready for production use in fall classes this year.  This satellite will give a sneak preview of some of the new features not available in the original TELLab, and provide an opportunity for the potential user community to request the additional features that would be most useful in their own teaching.

After a short introduction, TELLab2 gurus will provide a live demonstration of some of the new capabilities.  Complete details can be found on TELLab’s satellite information website:  http://vss.tellab.org.  Potential attendees are welcome to visit the beta version of the site at http://lab2.tellab.org, with the caveat that the site is still in flux and not all of the advertised features are fully-functional as of this writing.

Hope to see you there.  Happy experimenting!

2019 Satellite Events

Wednesday, May 15

Computational and Mathematical Models in Vision (MODVIS)

Wednesday, May 15 – Friday, May 17, Horizons
9:00 am – 6:00 pm, Wednesday
9:00 am – 6:00 pm, Thursday
8:30 – 11:45 am, Friday

Organizers: Jeff Mulligan, NASA Ames Research Center; Zygmunt Pizlo, UC Irvine; Anne B. Sereno, Purdue University; and Qasim Zaidi, SUNY College of Optometry

Keynote Selection Committee: Yalda Mohsenzadeh, MIT; Michael Rudd, University of Washington

The 8th VSS satellite workshop on Computational and Mathematical Models in Vision (MODVIS) will be held at the Tradewinds Island Resorts in St. Pete Beach, FL, May 15 – May 17.

A keynote address will be given by Dr. Yanxi Liu, Penn State University.

The early registration fee is $100 for regular participants, $50 for students. After March 31st, the registration fee will increase to $120 (regular) and $60 (student).

Friday, May 17

Improving the precision of timing-critical research with visual displays

Friday, May 17, 9:00 – 11:00 am, Jasmine/Palm

Organizers: Sophie Kenny, VPixx Technologies; Peter April, VPixx Technologies

VPixx Technologies is a privately held company serving the vision research community by developing innovative hardware and software tools for vision scientists (www.vpixx.com).

Visual display and computer technologies have improved on many fronts over the years; however, the impressive technical specifications of these devices mask the fact that the timing of concurrent events is not typically controlled with a high degree of precision. This is a problem for scientists whose research relies on synchronization of external recording equipment relative to the onset of a visual stimulus. During this workshop, we will demonstrate the use of hardware solutions to address these issues. We will first describe the principle behind these hardware solutions. We will then showcase how experiments can be programmed to control the triggering of external devices, to play audio signals, and to record digital, analog, and audio signals, all synchronized with microsecond accuracy to screen refresh.
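As a small, generic illustration of the software side of the problem (this is plain Python, not VPixx hardware or its API), the sketch below measures how far a nominally 60 Hz presentation loop drifts from its ideal frame schedule when it relies on the operating system's sleep timing alone.

import time

frame_period = 1.0 / 60.0                     # ideal 60 Hz frame interval
deadline = time.perf_counter()
timestamps = []
for _ in range(300):                          # roughly five seconds of "frames"
    deadline += frame_period
    remaining = deadline - time.perf_counter()
    if remaining > 0:
        time.sleep(remaining)                 # OS sleep granularity adds jitter here
    timestamps.append(time.perf_counter())

intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
worst_us = max(abs(iv - frame_period) for iv in intervals) * 1e6
print(f"worst deviation from the ideal frame interval: {worst_us:.0f} microseconds")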

To help us plan this event, please send an email signalling your interest to:

Psychophysics Toolbox Forum

Friday, May 17, 11:00 – 11:45 am, Jasmine/Palm

Organizer: Vijay Iyer, MathWorks

A forum for researchers, vendors, and others who work with the Psychophysics Toolbox (PTB), which is widely used for visual stimulus generation in vision science. MathWorks is pleased to support the PTB’s ongoing development, which is now hosted at the Medical Innovations Incubator (MII) in Tuebingen. An industry-led consortium is emerging to support the PTB project. Join to learn more about the new arrangement and to provide your input on future directions for PTB.

Saturday, May 18

Large-scale datasets in visual neuroscience

Saturday, May 18, 8:30 – 10:30 pm, Jasmine/Palm

Organizers: Elissa Aminoff, Fordham University; John Pyles, Carnegie Mellon University

Speakers: Elissa Aminoff, Fordham University; Kendrick Kay, University of Minnesota; John Pyles, Carnegie Mellon University; Michael Tarr, Carnegie Mellon University

The future of vision science lends itself more and more to using large real-world image datasets (n > 1,000) to study and understand the neural and functional mechanisms underlying vision. As the size of such datasets (and the resulting data) increases, there are commensurate challenges to effectively and successfully collect, distribute, and analyze large-scale data. If you are interested in discussing these challenges, please join us.

The format of this event will be brief presentations by researchers who have recently collected or analyzed large fMRI datasets, followed by an open discussion.

Sunday, May 19

FoVea (Females of Vision et al) Workshop

Sunday, May 19, 7:30 – 9:00 pm, Horizons

Organizers: Diane Beck, University of Illinois, Urbana-Champaign; Mary A. Peterson, University of Arizona; Karen Schloss, University of Wisconsin – Madison; Allison Sekuler, Baycrest Health Sciences

Panel Discussion on Navigating a Life in Science as a Woman
Panel Discussants: Lynne Kiorpes (New York University), Ruth Rosenholtz (MIT), Preeti Verghese (Smith-Kettlewell Eye Research Institute), Emily Ward (University of Wisconsin – Madison)

The panel will begin by addressing issues they consider important/informative and then address questions.

FoVea is a group founded to advance the visibility, impact, and success of women in vision science (www.foveavision.org). We encourage vision scientists of all genders to participate in the workshops.

Please register at: http://www.foveavision.org/vss-workshops 

Monday, May 20

Aesthetics Social

Monday, May 20, 2:00 – 3:30 pm, Sabal/Sawgrass

Organizers: Edward Vessel, Max Planck Institute for Empirical Aesthetics (MPIEA); Karen Schloss, University of Wisconsin – Madison; Aenne Brielmann, New York University; Ilkay Isik, MPIEA; Dominik Welke, MPIEA

Our lives are full of aesthetic experiences. When we look at art, people surrounding us, or views out of the window, we cannot help but assess how much the sight pleases us. This social meeting brings together researchers interested in understanding such aesthetic responses. We will highlight aesthetics research being presented at VSS in a “Data Blitz” session, followed by an open discussion and time to socialize. Light refreshments will be offered.

Data Blitz presentations are open to anyone presenting aesthetics-related work at VSS. Selection for presentation will be made by the organizing committee based on scientific rigor, potential impact and interest, academic position (preference given to students/early stage researchers), and whether your work was selected for a talk or poster at VSS (priority given to posters).

If you are interested in presenting your findings at the Data Blitz session please send an email to  (ATTN: Aesthetics Social Data Blitz) by April 5, 2019 with the following information:

  • Presenter name, affiliation, and academic status (student/postdoc/PI/etc.)
  • Presenter contact information (email, phone)
  • Presentation title and abstract
  • Date/time and type of VSS presentation (poster/talk)

This event is sponsored by the International Association of Empirical Aesthetics (IAEA; https://www.science-of-aesthetics.org) and the Max Planck Institute for Empirical Aesthetics (MPIEA; https://www.aesthetics.mpg.de/en.html).

A hands-on crash course in reproducible mixed-effects modeling

Monday, May 20, 2:00 – 4:00 pm, Glades

Organizer: Dejan Draschkow, Department of Psychology, Goethe University Frankfurt; Department of Psychiatry, University of Oxford

Mixed-effects models are a powerful alternative to traditional F1/F2-mixed model/repeated-measures ANOVAs and multiple regressions. Mixed models allow simultaneous estimation of between-subject and between-stimulus variance, deal well with missing data, and allow for easy inclusion of covariates and modelling of higher-order polynomials. This workshop provides a focused, hands-on, state-of-the-art treatment of applying this analysis technique in an open and reproducible way. We will provide a fully documented R pipeline and solutions for power analysis, and we will discuss common pitfalls and unresolved issues. It is suitable for 1) “concept attendance” – you want to be able to evaluate potential issues when reviewing a paper; 2) “implementation attendance” – strong theoretical background, low practical experience; 3) “switch attendance” – you are coming from another language or software and want to switch to R; 4) “transition attendance” – you are quite experienced in traditional analysis procedures and want to see what this is all about; and 5) “refreshing attendance” – you just want to check whether there are any new developments. It might not be suitable for participants with zero experience in statistics and programming, and may be too boring for participants who already perform simulation-based power analysis for mixed models or use a PCA to diagnose overfitting problems. This event is funded by a WikiMedia Open Science grant dedicated to https://smobsc.readthedocs.io/en/latest/.
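For readers who have not used the technique, here is a minimal sketch of a mixed-effects fit in Python with statsmodels. This is an illustration only, not the workshop's R pipeline; the file name and the rt, condition, and subject columns are placeholders, and the fully crossed subject-by-stimulus version is most naturally written in R/lme4 as rt ~ condition + (1 | subject) + (1 | stimulus).

import pandas as pd
import statsmodels.formula.api as smf

# One row per trial, with columns rt, condition, and subject (hypothetical file).
df = pd.read_csv("trial_level_data.csv")

# Fixed effect of condition on response time, plus a random intercept per subject.
model = smf.mixedlm("rt ~ condition", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())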

No registration required. First come, first served, until full. For questions or more information, please visit my website at https://www.draschkow.com/.

WorldViz VR/AR Workshop: Virtual Reality Displays Break New Ground for Research Purposes

Monday, May 20, 2:00 – 4:00 pm, Jasmine/Palm

Organizers: Matthias Pusch, WorldViz; Lucero Rabaudi, WorldViz

Beyond the wave of consumer virtual reality displays is a new lineup of professional products capable of generating a new class of visual stimulus for scientists. We will show two examples of what we consider most exciting for the VSS community. The first is a multi-resolution HMD that delivers nearly 60 cycles per degree over a large central field of the display, which then feathers to more typical HMD resolution toward the periphery. The second is a low-latency, high-resolution video see-through technology that converts a consumer-class HMD into a sophisticated augmented reality system that can combine real near-field objects (e.g., one’s hands or tools) with computer graphics imagery.

In this Satellite session, we will present these technologies in action, with examples of how researchers can use them in practice. There will be a technical portion of the session detailing the technologies’ benefits and limitations, as well as a hands-on portion for attendees to try the technologies live.

VISxVISION Workshop: Novel Vision Science Research Directions in Visualization

Monday, May 20, 2:00 – 4:00 pm, Royal Tern

Organizers: Cindy Xiong, Northwestern University; Zoya Bylinskii, Adobe Research; Madison Elliott, University of British Columbia; Christie Nothelfer, Nielsen; Danielle Szafir, University of Colorado Boulder

Interdisciplinary work across vision science and data visualization has provided a new lens to advance our understanding of the capabilities and mechanisms of the visual system while simultaneously improving the ways we visualize data. Vision scientists can gain important insights about human perception by studying how people interact with visualized data. Vision science topics, including visual search, ensemble coding, multiple object tracking, color and shape perception, pattern recognition, and saliency, map directly to challenges encountered in visualization research.

VISxVISION (www.visxvision.com) is an initiative to encourage communication and collaboration between researchers from the vision science and data visualization research communities. Building on the growing interest in this topic and the discussions inspired by last year’s symposium, “Vision and Visualization: Inspiring novel research directions in vision science,” this workshop aims to provide a platform that brings together vision science and visualization researchers to share cutting-edge research at this interdisciplinary intersection. We also encourage researchers to share vision science projects that have the potential to be applied to topics in data visualization.

This year’s workshop will consist of a series of lightning talks, followed by a Q&A session with the presenters. Attendees will then learn about conference and publication opportunities in this field: Brian Fisher will review the IEEE VIS conference and the benefits of collaborating within data visualization, and editors of the Journal of Vision’s upcoming special issue on visualization will discuss publishing in this area. The workshop will conclude with a “meet & mingle” session with refreshments, intended to encourage more informal discussion among participants and to inspire interdisciplinary collaboration.

This event is being sponsored by Adobe Inc., the Visual Thinking Lab at Northwestern, and Colorado Boulder’s VisuaLab.

A call for abstracts on https://visxvision.com will solicit recent, relevant research at the intersection of vision science and visualization, or vision science project proposals that have the potential to be applied to topics in data visualization (deadline: April 8).  The top submissions will be selected for presentation as lightning talks at the workshop (notification: April 15). Submit your abstract here: http://bit.ly/2019abstract

Please register for the event at: http://bit.ly/2019visxvision.

Tuesday, May 21

Canadian Vision Social

Tuesday, May 21, 12:30 – 2:30 pm, Jasmine/Palm

Organizer: Doug Crawford, York Centre for Vision Research

This lunch social is open to any VSS member who is, knows, or would like to meet a Canadian Vision Scientist! This event will feature free food and refreshments, with a complimentary beverage for the first 100 attendees. We particularly encourage trainees and scientists who would like to learn about the various opportunities available through York’s Vision: Science to Applications (VISTA) program. This event is sponsored by the York Centre for Vision Research and VISTA, which is funded in part by the Canada First Research Excellence Fund (CFREF).

Visibility: A Gathering of LGBTQ+ Vision Scientists and friends

Tuesday, May 21, 8:30 – 10:00 pm (precedes Club Vision), Jasmine/Palm

Organizers: Alex White, University of Washington; Michael Grubb, Trinity College

LGBTQ students are disproportionately likely to drop out of science early. Potential causes include the lack of visible role models and the absence of a strong community. This social event is one small step towards filling that gap. All are welcome. Snacks, drinks, and camaraderie will be provided. Sponsored by Trinity College.

Wednesday, May 22

MacGyver-ing in vision science: interfacing systems that are not supposed to work together

Wednesday, May 22, 1:00 – 3:00 pm, Chart

Organizer: Zoltan Derzsi, New York University Abu Dhabi

In research, it is sometimes necessary to push equipment beyond its design limits or to use it for something it was not designed to do. Desperation leads to creativity, and temporary workarounds end up being permanent. Usually this is the point when a design bottleneck is introduced into the experiment, which will bite back a couple of months later when nobody anticipates it, effectively ruining all the data collected (my own experience!).

This workshop will show some good practices on how to interface various systems, and how to use ordinary electronics in a vision science experiment.

You will get a free IoT (Internet of Things) kit containing a development board, some sensors, a display and light sources.

Please let me know if you plan to attend, by emailing zd8[at]nyu[dot]edu no later than the 10th of April!

The kit will contain a nodeMCU device; please make sure you pick it up during the first days of the conference. I will not be able to start from scratch on how to program the board or upload firmware to it; this is covered in the documentation, and there is plenty of support online. I’d like to spend the time showing how to turn these bits into the cheapest calibrated D65 light source, how to automate data collection over the local network, how to build your own instruments, and how to control various systems simultaneously while delivering stimuli with microsecond precision.
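As a taste of the "automate data collection over the local network" part, here is a minimal sketch that polls a networked device and logs its readings to a CSV file. The device address and the /sensor endpoint are hypothetical; the actual kit's protocol will be described in the workshop documentation.

import csv
import time
import urllib.request

DEVICE_URL = "http://192.168.1.50/sensor"      # hypothetical nodeMCU endpoint

with open("sensor_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["host_time", "reading"])
    for _ in range(100):                       # take 100 samples, one per second
        reading = urllib.request.urlopen(DEVICE_URL, timeout=2).read().decode().strip()
        writer.writerow([time.time(), reading])
        time.sleep(1.0)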

You will be able to adapt the workshop material for your own environment, and develop it further.

2018 Satellite Events

Wednesday, May 16

Computational and Mathematical Models in Vision (MODVIS)

Wednesday, May 16 – Friday, May 18, Horizons
9:00 am – 6:00 pm, Wednesday
9:00 am – 6:00 pm, Thursday
8:30 – 11:45 am Friday

Organizers: Jeff Mulligan, NASA Ames Research Center; Zygmunt Pizlo, UC Irvine; Anne B. Sereno, Purdue University; and Qasim Zaidi, SUNY College of Optometry

Keynote Selection Committee: Yalda Mohsenzadeh, MIT; Michael Rudd, University of Washington

The 7th VSS satellite workshop on Computational and Mathematical Models in Vision (MODVIS) will be held at the Tradewinds Island Resorts in St. Pete Beach, FL, May 16 – May 18. A keynote address will be given by Eero Simoncelli, New York University.

The early registration fee is $100 for regular participants, $50 for students. More information can be found on the workshop’s website: http://www.conf.purdue.edu/modvis/

Thursday, May 17

Eye Tracking in Virtual Reality

Thursday, May 17, 10:00 am – 3:00 pm, Jasmine/Palm

Organizer: Gabriel Diaz, Rochester Institute of Technology

This will be a hands-on workshop run by Gabriel Diaz, with support from his graduate students Kamran Binaee and Rakshit Kothari.

The ability to incorporate eye tracking into computationally generated contexts presents new opportunities for research into gaze behavior. The aim of this workshop is to provide an understanding of the hardware, the data collection process, and algorithms for data analysis. Example data and code will be provided in both Jupyter notebooks and Matlab (choose your preference). This workshop is sponsored by The Optical Society’s Vision Technical Group and is suitable for both PIs and graduate students.

Friday, May 18

Tutorial on Big Data and Online Crowd-Sourcing for Vision Research

Friday, May 18, 8:30 – 11:45 am, Jasmine/Palm

Organizer: Wilma Bainbridge, National Institutes of Health

Speakers: Wilma Bainbridge, National Institutes of Health; Tim Brady, University of California San Diego; Dwight Kravitz, George Washington University; and Gijsbert Stoet, Leeds Beckett University

Online experiments and Big Data are becoming big topics in the field of vision science, but they can be hard to access for people not familiar with web development and coding. This tutorial will teach attendees the basics of creating online crowd-sourced experiments and how to think about collecting and analyzing Big Data related to vision research. Four experts in the field will discuss how they collect and use Big Data, and will give tutorial attendees hands-on practice. We will discuss Amazon Mechanical Turk, its strengths and weaknesses, and how to leverage it in creative ways to collect powerful, large-scale data. We will then discuss Psytoolkit, an online experimental platform for coding timed behavioral and psychophysical tasks that can integrate with Amazon Mechanical Turk. Next, we will cover how to create Big Datasets by “scraping” large-scale data from the internet. Finally, we will discuss other sources of useful crowd-sourced data, such as performance on mobile games, and methods for scaling down and analyzing these large datasets.
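As one small example of the kind of analysis the tutorial has in mind, the sketch below streams a large crowd-sourced results file through pandas in manageable chunks and aggregates accuracy by condition. The file name and column names are placeholders.

import pandas as pd

totals = {}
for chunk in pd.read_csv("mturk_responses.csv", chunksize=100_000):
    counts = chunk.groupby("condition")["correct"].agg(["sum", "count"])
    for condition, row in counts.iterrows():
        s, c = totals.get(condition, (0, 0))
        totals[condition] = (s + row["sum"], c + row["count"])

for condition, (s, c) in totals.items():
    print(f"{condition}: {s / c:.3f} proportion correct over {c} trials")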

To help us plan for this event, please register here: http://wilmabainbridge.com/research/bigdata/bigdataregistration.html

Sunday, May 20

FoVea (Females of Vision et al) Workshop

Sunday, May 20, 7:30 – 8:30 pm, Horizons

Organizers: Diane Beck, University of Illinois, Urbana-Champaign; Mary A. Peterson, University of Arizona; Karen Schloss, University of Wisconsin – Madison; Allison Sekuler, Baycrest Health Sciences

Speaker: Virginia Valian, Hunter College
Title: Remedying the (Still) Too Slow Advancement of Women

Dr. Valian is a Distinguished Professor of Psychology and Director of The Gender Equity Project.

FoVea is a group founded to advance the visibility, impact, and success of women in vision science (www.foveavision.org). We encourage vision scientists of all genders to participate in the workshops.

Please register at: http://www.foveavision.org/vss-workshops

Monday, May 21

Psychophysics Toolbox Discussion

Monday, May 21, 2:00 – 3:00 pm, Talk Room 1

Organizer: Vijay Iyer, MathWorks

Panelists: Vijay Iyer, David Brainard, and Denis Pelli

A discussion of the current state (technical, funding, and community status) of the Psychophysics Toolbox, which is widely used for visual stimulus generation in vision science experiments.

Social Hour for Faculty at Primarily Undergraduate Institutions (PUIs)

Monday, May 21, 2:00 – 4:00 pm, Royal Tern

Organizer: Katherine Moore, Arcadia University

Do you work at a primarily undergraduate institution (PUI)? Do you juggle your research program, student mentoring, and a heavy teaching load? If so, come along to the PUI social and get to know other faculty at PUIs! It will be a great opportunity to share your ideas and concerns. Feel free to bring your own drinks / snacks. Prospective faculty of PUIs are also welcome to attend and get to know us and our institutions.

Canadian Vision Social

Monday, May 21, 2:00 – 4:00 pm, Jasmine/Palm

Organizer: Doug Crawford, York Centre for Vision Research

This afternoon Social is open to any VSS member who is, knows, or would like to meet a Canadian Vision Scientist! This event will feature free snacks and refreshments, with a complementary beverage for the first 200 attendees. We particularly encourage trainees and scientists who would like to learn about the various research and training funds available through York’s Vision: Science to Applications (VISTA) program. This event is sponsored by the York Centre for Vision Research and VISTA, which is funded in part by the Canada First Research Excellence Fund (CFREF).

Tuesday, May 22

Virtual Reality as a Tool for Vision Scientists

Tuesday, May 22, 1:00 – 2:00 pm, Talk Room 1

Organizer: Matthias Pusch, WorldViz

In a hands-on group session, we will show how virtual reality can be used by vision scientists for remote and on-site collaborative experiments. Full experimental control over stimuli and responses enables a unique setting for measuring performance. We will experience collaboration with off-site participants and show the basics of performance data recording and analysis.
