Wednesday, May 16
Computational and Mathematical Models in Vision (MODVIS)
Wednesday, May 16 – Friday, May 18, Horizons
9:00 am – 6:00 pm, Wednesday
9:00 am – 6:00 pm, Thursday
8:30 – 11:45 am Friday
Organizers: Jeff Mulligan, NASA Ames Research Center; Zygmunt Pizlo, UC Irvine; Anne B. Sereno, Purdue University; and Qasim Zaidi, SUNY College of Optometry
Keynote Selection Committee: Yalda Mohsenzadeh, MIT; Michael Rudd, University of Washington
The 7th VSS satellite workshop on Computational and Mathematical Models in Vision (MODVIS) will be held at the Tradewinds Island Resorts in St. Pete Beach, FL, May 16 – May 18. A keynote address will be given by Eero Simoncelli, New York University.
The early registration fee is $100 for regular participants, $50 for students. More information can be found on the workshop’s website: http://www.conf.purdue.edu/modvis/
Thursday, May 17
Eye Tracking in Virtual Reality
Thursday, May 17, 10:00 am – 3:00 pm, Jasmine/Palm
Organizer: Gabriel Diaz, Rochester Institute of Technology
This will be a hands-on workshop run by Gabriel Diaz, with support from his graduate students Kamran Binaee and Rakshit Kothari.
The ability to incorporate eye tracking into computationally generated contexts presents new opportunities for research into gaze behavior. The aim of this workshop is to provide an understanding of the hardware, the data collection process, and algorithms for data analysis. Example data and code will be provided in both Jupyter notebooks and Matlab (choose your preference). This workshop is sponsored by The Optical Society’s Vision Technical Group and is suitable for both PIs and graduate students.
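One common class of gaze-analysis algorithm the workshop's materials might cover is fixation/saccade classification. As an illustrative sketch only (not the workshop's actual code), the velocity-threshold (I-VT) rule labels a sample as part of a fixation when its angular speed falls below a threshold; the function name and 30 deg/s default below are assumptions for the example:

```python
import numpy as np

def classify_fixations(x, y, t, velocity_threshold=30.0):
    """Label each gaze sample as fixation (True) or saccade (False)
    using a simple velocity-threshold (I-VT) rule.

    x, y : gaze position in degrees of visual angle
    t    : timestamps in seconds
    velocity_threshold : deg/s (30 deg/s is a commonly used default)
    """
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt  # angular speed, deg/s
    fix = speed < velocity_threshold
    # Pad so the label array matches the number of samples.
    return np.concatenate([fix[:1], fix])

# Toy data sampled at 100 Hz: slow drift, one fast jump, then drift again.
t = np.arange(10) * 0.01
x = np.array([0, 0.1, 0.2, 0.3, 0.4, 5.0, 5.1, 5.2, 5.3, 5.4])
y = np.zeros(10)
labels = classify_fixations(x, y, t)  # only the jump at index 5 is a saccade
```

Real pipelines add smoothing, blink removal, and minimum-duration rules on top of this core idea.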
Friday, May 18
Tutorial on Big Data and Online Crowd-Sourcing for Vision Research
Friday, May 18, 8:30 – 11:45 am, Jasmine/Palm
Organizer: Wilma Bainbridge, National Institutes of Health
Speakers: Wilma Bainbridge, National Institutes of Health; Tim Brady, University of California San Diego; Dwight Kravitz, George Washington University; and Gijsbert Stoet, Leeds Beckett University
Online experiments and Big Data are becoming big topics in the field of vision science, but they can be hard to access for people unfamiliar with web development and coding. This tutorial will teach attendees the basics of creating online crowd-sourced experiments and how to think about collecting and analyzing Big Data for vision research. Four experts in the field will discuss how they use and collect Big Data, and will give attendees hands-on practice. We will discuss Amazon Mechanical Turk, its strengths and weaknesses, and how to leverage it in creative ways to collect powerful, large-scale data. We will then cover PsyToolkit, an online experimental platform for coding timed behavioral and psychophysical tasks that can integrate with Amazon Mechanical Turk, and show how to create Big Datasets by “scraping” large-scale data from the internet. Finally, we will discuss other sources of useful crowd-sourced data, such as performance on mobile games, and methods for scaling down and analyzing these large data sets.
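To make the “scraping” idea concrete, here is a minimal, self-contained sketch (not tutorial material) that extracts image URLs from an HTML page using only Python's standard library; the class name and the inline HTML snippet are assumptions for illustration:

```python
from html.parser import HTMLParser

class ImageScraper(HTMLParser):
    """Collect image URLs from an HTML page -- a minimal example of
    scraping stimuli or metadata from the web."""
    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

# In practice the HTML would come from urllib.request.urlopen(url).read();
# here we parse a small inline snippet so the example is self-contained.
html = '<html><body><img src="a.jpg"><p>text</p><img src="b.png"></body></html>'
scraper = ImageScraper()
scraper.feed(html)
print(scraper.image_urls)  # ['a.jpg', 'b.png']
```

At scale, the same pattern is typically paired with rate limiting and attention to each site's terms of use.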
To help us plan for this event, please register here: http://wilmabainbridge.com/research/bigdata/bigdataregistration.html
Sunday, May 20
FoVea (Females of Vision et al) Workshop
Sunday, May 20, 7:30 – 8:30 pm, Horizons
Organizers: Diane Beck, University of Illinois, Urbana-Champaign; Mary A. Peterson, University of Arizona; Karen Schloss, University of Wisconsin – Madison; Allison Sekuler, Baycrest Health Sciences
Speaker: Virginia Valian, Hunter College
Title: Remedying the (Still) Too Slow Advancement of Women
Dr. Valian is a Distinguished Professor of Psychology and Director of The Gender Equity Project.
FoVea is a group founded to advance the visibility, impact, and success of women in vision science (www.foveavision.org). We encourage vision scientists of all genders to participate in the workshops.
Please register at: http://www.foveavision.org/vss-workshops
Monday, May 21
Psychophysics Toolbox Discussion
Monday, May 21, 2:00 – 3:00 pm, Talk Room 1
Organizer: Vijay Iyer, MathWorks
Panelists: Vijay Iyer, David Brainard, and Denis Pelli
Discussion of the current state (technical, funding, and community status) of the Psychophysics Toolbox, which is widely used for visual stimulus generation in vision science experiments.
Social Hour for Faculty at Primarily Undergraduate Institutions (PUIs)
Monday, May 21, 2:00 – 4:00 pm, Royal Tern
Organizer: Katherine Moore, Arcadia University
Do you work at a primarily undergraduate institution (PUI)? Do you juggle your research program, student mentoring, and a heavy teaching load? If so, come along to the PUI social and get to know other faculty at PUIs! It will be a great opportunity to share your ideas and concerns. Feel free to bring your own drinks / snacks. Prospective faculty of PUIs are also welcome to attend and get to know us and our institutions.
Canadian Vision Social
Monday, May 21, 2:00 – 4:00 pm, Jasmine/Palm
Organizer: Doug Crawford, York Centre for Vision Research
This afternoon Social is open to any VSS member who is, knows, or would like to meet a Canadian Vision Scientist! This event will feature free snacks and refreshments, with a complimentary beverage for the first 200 attendees. We particularly encourage trainees and scientists who would like to learn about the various research and training funds available through York’s Vision: Science to Applications (VISTA) program. This event is sponsored by the York Centre for Vision Research and VISTA, which is funded in part by the Canada First Research Excellence Fund (CFREF).
Tuesday, May 22
Virtual Reality as a Tool for Vision Scientists
Tuesday, May 22, 1:00 – 2:00 pm, Talk Room 1
Organizer: Matthias Pusch, WorldViz
In a hands-on group session, we will show how Virtual Reality can be used by vision scientists for remote and on-site collaborative experiments. Full experimental control over stimuli and responses enables a unique setting for measuring performance. We will experience collaboration with off-site participants and show the basics of performance data recording and analysis.