2019 Satellite Events

Wednesday, May 15

Computational and Mathematical Models in Vision (MODVIS)

Wednesday, May 15 – Friday, May 17, Horizons
9:00 am – 6:00 pm, Wednesday
9:00 am – 6:00 pm, Thursday
8:30 – 11:45 am, Friday

Organizers: Jeff Mulligan, NASA Ames Research Center; Zygmunt Pizlo, UC Irvine; Anne B. Sereno, Purdue University; and Qasim Zaidi, SUNY College of Optometry

Keynote Selection Committee: Yalda Mohsenzadeh, MIT; Michael Rudd, University of Washington

The 8th VSS satellite workshop on Computational and Mathematical Models in Vision (MODVIS) will be held at the Tradewinds Island Resorts in St. Pete Beach, FL, May 15 – May 17.

A keynote address will be given by Dr. Yanxi Liu, Penn State University.

The early registration fee is $100 for regular participants, $50 for students. After March 31st, the registration fee will increase to $120 (regular) and $60 (student).

Friday, May 17

Improving the precision of timing-critical research with visual displays

Friday, May 17, 9:00 – 11:00 am, Jasmine/Palm

Organizers: Sophie Kenny, VPixx Technologies; Peter April, VPixx Technologies

VPixx Technologies is a privately held company serving the vision research community by developing innovative hardware and software tools for vision scientists (www.vpixx.com).

Visual display and computer technologies have improved on many fronts over the years; however, the impressive technical specifications of these devices mask the fact that the timing of concurrent events is typically not controlled with a high degree of precision. This is a problem for scientists whose research relies on synchronizing external recording equipment to the onset of a visual stimulus. During this workshop, we will demonstrate hardware solutions that address these issues. We will first describe the principle behind these solutions, and then showcase how experiments can be programmed to trigger external devices, play audio signals, and record digital, analog, and audio signals, all synchronized to the screen refresh with microsecond accuracy.
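
For readers unfamiliar with the problem, the toy Python simulation below illustrates why a trigger scheduled on the video refresh is more tightly locked to stimulus onset than one fired from application code. This is purely illustrative, not VPixx code; the refresh rate and software delay values are made-up assumptions.

    # Toy simulation: stimulus onsets are quantized to display refreshes,
    # so a trigger emitted on the refresh itself (hardware) tracks onset
    # far better than one fired from application code (software).
    import math
    import random

    REFRESH_HZ = 120.0
    FRAME_S = 1.0 / REFRESH_HZ

    def next_refresh(t):
        """Time of the first refresh at or after t (display clock starts at 0)."""
        return math.ceil(t / FRAME_S) * FRAME_S

    random.seed(1)
    software_errors, hardware_errors = [], []
    for trial in range(1000):
        intended_onset = random.uniform(1.0, 2.0)    # seconds
        actual_onset = next_refresh(intended_onset)  # stimulus appears on a refresh
        # (a) software trigger: fired after an assumed OS/driver delay of 0.2-3 ms
        sw_trigger = intended_onset + random.uniform(0.0002, 0.003)
        # (b) hardware trigger: emitted on the same refresh as the stimulus
        hw_trigger = actual_onset
        software_errors.append(abs(sw_trigger - actual_onset))
        hardware_errors.append(abs(hw_trigger - actual_onset))

    print(f"software trigger error: mean {1e3 * sum(software_errors) / len(software_errors):.2f} ms")
    print(f"hardware trigger error: mean {1e3 * sum(hardware_errors) / len(hardware_errors):.2f} ms")
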

To help us plan this event, please send an email signalling your interest to the organizers.

Psychophysics Toolbox Forum

Friday, May 17, 11:00 – 11:45 am, Jasmine/Palm

Organizer: Vijay Iyer, MathWorks

A forum for researchers, vendors, and others who work with the Psychophysics Toolbox (PTB), which is widely used for visual stimulus generation in vision science. MathWorks is pleased to support the PTB’s ongoing development, now hosted at the Medical Innovations Incubator (MII) in Tuebingen. An industry-led consortium is emerging to support the PTB project. Join us to learn more about the new arrangement and to provide your input on future directions for PTB.

Saturday, May 18

Large-scale datasets in visual neuroscience

Saturday, May 18, 8:30 – 10:30 pm, Jasmine/Palm

Organizers: Elissa Aminoff, Fordham University; John Pyles, Carnegie Mellon University

Speakers: Elissa Aminoff, Fordham University; Kendrick Kay, University of Minnesota; John Pyles, Carnegie Mellon University; Michael Tarr, Carnegie Mellon University

Vision science is turning more and more to large real-world image datasets (n > 1,000) to study and understand the neural and functional mechanisms underlying vision. As such datasets (and the data collected with them) grow, there are commensurate challenges in effectively collecting, distributing, and analyzing large-scale data. If you are interested in discussing these challenges, please join us.

The format of this event will be brief presentations by researchers who have recently collected or analyzed large fMRI datasets, followed by an open discussion.

Sunday, May 19

FoVea (Females of Vision et al) Workshop

Sunday, May 19, 7:30 – 9:00 pm, Horizons

Organizers: Diane Beck, University of Illinois, Urbana-Champaign; Mary A. Peterson, University of Arizona; Karen Schloss, University of Wisconsin – Madison; Allison Sekuler, Baycrest Health Sciences

Panel Discussion on Navigating a Life in Science as a Woman
Panel Discussants: Lynne Kiorpes (New York University), Ruth Rosenholtz (MIT), Preeti Verghese (Smith-Kettlewell Eye Research Institute), Emily Ward (University of Wisconsin – Madison)

The panelists will begin by addressing issues they consider important and informative, and will then take questions from the audience.

FoVea is a group founded to advance the visibility, impact, and success of women in vision science (www.foveavision.org). We encourage vision scientists of all genders to participate in the workshops.

Please register at: http://www.foveavision.org/vss-workshops 

Monday, May 20

Aesthetics Social

Monday, May 20, 2:00 – 3:30 pm, Sabal/Sawgrass

Organizers: Edward Vessel, Max Planck Institute for Empirical Aesthetics; Karen Schloss, University of Wisconsin – Madison; Aenne Brielmann, New York University; Ilkay Isik, Max Planck Institute for Empirical Aesthetics; Dominik Welke, Max Planck Institute for Empirical Aesthetics

Our lives are full of aesthetic experiences. When we look at art, people surrounding us, or views out of the window, we cannot help but assess how much the sight pleases us. This social meeting brings together researchers interested in understanding such aesthetic responses. We will highlight aesthetics research being presented at VSS in a “Data Blitz” session, followed by an open discussion and time to socialize. Light refreshments will be offered.

Data Blitz presentations are open to anyone presenting aesthetics-related work at VSS. Selection for presentation will be made by the organizing committee based on scientific rigor, potential impact and interest, academic position (preference given to students/early stage researchers), and whether your work was selected for a talk or poster at VSS (priority given to posters).

If you are interested in presenting your findings at the Data Blitz session, please send an email (ATTN: Aesthetics Social Data Blitz) to the organizers by April 5, 2019 with the following information:

  • Presenter name, affiliation, and academic status (student/postdoc/PI/etc.)
  • Presenter contact information (email, phone)
  • Presentation title and abstract
  • Date/time and type of VSS presentation (poster/talk)

This event is sponsored by the International Association of Empirical Aesthetics (IAEA; https://www.science-of-aesthetics.org) and the Max Planck Institute for Empirical Aesthetics (MPIEA; https://www.aesthetics.mpg.de/en.html).

A hands-on crash course in reproducible mixed-effects modeling

Monday, May 20, 2:00 – 4:00 pm, Glades

Organizer: Dejan Draschkow, Department of Psychology, Goethe University Frankfurt; Department of Psychiatry, University of Oxford

Mixed-effects models are a powerful alternative to traditional F1/F2-mixed model/repeated-measures ANOVAs and multiple regressions. Mixed models allow simultaneous estimation of between-subject and between-stimulus variance, deal well with missing data, and allow easy inclusion of covariates and modelling of higher-order polynomials. This workshop provides a focused, hands-on, state-of-the-art treatment of applying this analysis technique in an open and reproducible way. We will provide a fully documented R pipeline and solutions for power analysis, and will discuss common pitfalls and unresolved issues. The workshop is suitable for:

  • “concept attendance” – you want to be able to evaluate potential issues when reviewing a paper;
  • “implementation attendance” – strong theoretical background, but little practical experience;
  • “switch attendance” – you are coming from another language or software package and want to switch to R;
  • “transition attendance” – you are experienced in traditional analysis procedures and want to see what this is all about; and
  • “refreshing attendance” – you just want to check whether there are any new developments.

It might not be suitable for participants with zero experience in statistics and programming, and it may be too boring for participants who already perform simulation-based power analyses for mixed models or use a PCA to diagnose overfitting problems. This event is funded by a WikiMedia Open Science grant dedicated to https://smobsc.readthedocs.io/en/latest/.
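
The workshop’s pipeline is in R; as a rough, hypothetical illustration of the basic idea, the Python sketch below fits a by-subject random-intercept model with statsmodels on simulated data (the variable names, effect sizes, and noise levels are invented for the example).

    # Minimal sketch: fixed effect of condition, random intercept per subject,
    # on simulated reaction-time data. Roughly analogous to an lme4-style
    # model such as rt ~ condition + (1 | subject).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_subjects, n_trials = 20, 40
    subjects = np.repeat(np.arange(n_subjects), n_trials)
    condition = np.tile(np.repeat([0, 1], n_trials // 2), n_subjects)
    subject_offset = rng.normal(0, 50, n_subjects)[subjects]   # between-subject variance
    rt = 500 + 30 * condition + subject_offset + rng.normal(0, 80, subjects.size)

    df = pd.DataFrame({"rt": rt, "condition": condition, "subject": subjects})

    # Fixed effect of condition, random intercept for each subject.
    model = smf.mixedlm("rt ~ condition", data=df, groups=df["subject"])
    result = model.fit()
    print(result.summary())

Note that fully crossed random effects for subjects and stimuli, one of the main attractions mentioned above, are most naturally expressed in R (e.g., lme4), which is what the workshop’s documented pipeline covers.
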

No registration required. First come, first served, until full. For questions or more information, please visit my website at https://www.draschkow.com/.

WorldViz VR/AR Workshop: Virtual Reality Displays Break New Ground for Research Purposes

Monday, May 20, 2:00 – 4:00 pm, Jasmine/Palm

Organizers: Matthias Pusch, WorldViz; Lucero Rabaudi, WorldViz

Beyond the wave of consumer virtual reality displays is a new lineup of professional products capable of generating a new class of visual stimuli for scientific use. We will show two examples that we consider most exciting for the VSS community. The first is a multi-resolution HMD that achieves nearly 60 cycles per degree over a large central field of the display, which then feathers to more typical HMD resolution toward the periphery. The second is a low-latency, high-resolution video see-through technology that converts a consumer-class HMD into a sophisticated augmented reality system, allowing real near-field objects (e.g., one’s hands or tools) to be combined with computer graphics imagery.

In this Satellite session, we will present these technologies in action, with examples of how researchers can use them in practice. The session will include a technical portion detailing the technologies’ benefits and limitations, as well as a hands-on portion in which attendees can try the technologies live.

VISxVISION Workshop: Novel Vision Science Research Directions in Visualization

Monday, May 20, 2:00 – 4:00 pm, Royal Tern

Organizers: Cindy Xiong, Northwestern University; Zoya Bylinskii, Adobe Research; Madison Elliott, University of British Columbia; Christie Nothelfer, Nielsen; Danielle Szafir, University of Colorado Boulder

Interdisciplinary work across vision science and data visualization has provided a new lens to advance our understanding of the capabilities and mechanisms of the visual system while simultaneously improving the ways we visualize data. Vision scientists can gain important insights about human perception by studying how people interact with visualized data. Vision science topics, including visual search, ensemble coding, multiple object tracking, color and shape perception, pattern recognition, and saliency, map directly to challenges encountered in visualization research.

VISxVISION (www.visxvision.com) is an initiative to encourage communication and collaboration between researchers from the vision science and data visualization communities. Building on the growing interest in this topic and on the discussions inspired by last year’s symposium, “Vision and Visualization: Inspiring novel research directions in vision science,” this workshop aims to provide a platform that brings vision science and visualization researchers together to share cutting-edge research at this interdisciplinary intersection. We also encourage researchers to share vision science projects that have the potential to be applied to topics in data visualization.

This year’s workshop will consist of a series of lightning talks, followed by a Q&A session with the presenters. Attendees will then learn about conference and publication opportunities in this field: Brian Fisher will review the IEEE Vis conference and the benefits of collaborating within data visualization, and editors of the Journal of Vision’s upcoming special issue on visualization will discuss publishing in this area. The workshop will conclude with a “meet & mingle” session with refreshments, intended to encourage more informal discussion among participants and to inspire interdisciplinary collaboration.

This event is being sponsored by Adobe Inc., the Visual Thinking Lab at Northwestern, and Colorado Boulder’s VisuaLab.

A call for abstracts on https://visxvision.com will solicit recent, relevant research at the intersection of vision science and visualization, as well as proposals for vision science projects with potential applications in data visualization (deadline: April 8). The top submissions will be selected for presentation as lightning talks at the workshop (notification: April 15). Submit your abstract here: http://bit.ly/2019abstract

Please register for the event at: http://bit.ly/2019visxvision.

Tuesday, May 21

Canadian Vision Social

Tuesday, May 21, 12:30 – 2:30 pm, Jasmine/Palm

Organizer: Doug Crawford, York Centre for Vision Research

This lunch social is open to any VSS member who is, knows, or would like to meet a Canadian vision scientist! The event will feature free food and refreshments, with a complimentary beverage for the first 100 attendees. We particularly encourage trainees and scientists who would like to learn about the various opportunities available through York’s Vision: Science to Applications (VISTA) program. This event is sponsored by the York Centre for Vision Research and VISTA, which is funded in part by the Canada First Research Excellence Fund (CFREF).

Visibility: A Gathering of LGBTQ+ Vision Scientists and friends

Tuesday, May 21, 8:30 – 10:00 pm (precedes Club Vision), Jasmine/Palm

Organizers: Alex White, University of Washington; Michael Grubb, Trinity College

LGBTQ students are disproportionately likely to drop out of science early. Potential causes include the lack of visible role models and the absence of a strong community. This social event is one small step towards filling that gap. All are welcome. Snacks, drinks, and camaraderie will be provided. Sponsored by Trinity College.

Wednesday, May 22

MacGyver-ing in vision science: interfacing systems that are not supposed to work together

Wednesday, May 22, 1:00 – 3:00 pm, Chart

Organizer: Zoltan Derzsi, New York University Abu Dhabi

In research, it is sometimes necessary to push equipment beyond its design limits or to use it for something it was not designed to do. Desperation leads to creativity, and temporary workarounds end up being permanent. Usually this is the point when a design bottleneck is introduced into the experiment, which will bite back a couple of months later when nobody anticipates it, effectively ruining all the data collected (my own experience!).

This workshop will show some good practices on how to interface various systems, and how to use ordinary electronics in a vision science experiment.

You will get a free IoT (Internet of Things) kit containing a development board, some sensors, a display and light sources.

Please let me know if you plan to attend, by emailing zd8[at]nyu[dot]edu no later than the 10th of April!

The kit will contain a NodeMCU device; please make sure you pick it up during the first days of the conference. I will not be able to start from scratch on how to program the board or how to upload firmware to it; this is covered in the documentation, and there is plenty of support online. I’d like to spend the time showing how to turn these bits into the cheapest calibrated D65 light source, how to automate data collection over the local network, how to build your own instruments, and how to control various systems simultaneously while delivering stimuli with microsecond precision.
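
As a rough illustration of the kind of thing the kit enables (not the workshop’s actual firmware), the MicroPython sketch below drives two LED channels on a NodeMCU (ESP8266) with PWM and exposes a minimal HTTP endpoint so a lab PC can adjust the mixture over the local network. The pin numbers, Wi-Fi credentials, and duty cycles are placeholders, and approximating D65 would require calibrating the mixture against a spectroradiometer or colorimeter.

    # MicroPython sketch for a NodeMCU (ESP8266): two PWM-driven LED channels
    # plus a tiny HTTP server for remote control over the local network.
    import network
    import socket
    from machine import Pin, PWM

    # Two LED channels (e.g., warm and cool white) on GPIO4/GPIO5 (placeholders).
    warm = PWM(Pin(4), freq=1000)
    cool = PWM(Pin(5), freq=1000)

    def set_mixture(warm_duty, cool_duty):
        """Set PWM duty cycles (0-1023 on the ESP8266)."""
        warm.duty(warm_duty)
        cool.duty(cool_duty)

    # Placeholder mixture; replace with values found during calibration.
    set_mixture(300, 700)

    # Join the lab network (placeholder credentials).
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect("LAB_SSID", "LAB_PASSWORD")

    # Minimal HTTP server: GET /set?warm=300&cool=700 adjusts the mixture.
    server = socket.socket()
    server.bind(("0.0.0.0", 80))
    server.listen(1)
    while True:
        client, _ = server.accept()
        request = client.recv(512).decode()
        if "/set?" in request:
            query = request.split("/set?", 1)[1].split(" ", 1)[0]
            params = dict(p.split("=") for p in query.split("&"))
            set_mixture(int(params.get("warm", 0)), int(params.get("cool", 0)))
        client.send(b"HTTP/1.0 200 OK\r\n\r\nok\n")
        client.close()
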

You will be able to adapt the workshop material for your own environment, and develop it further.