Our goal for this workshop is to educate researchers about the technological needs of people with vision impairments while empowering researchers to improve algorithms to meet these needs. The first key component of this event will be to track progress on six dataset challenges, where the tasks are visual question answering, answer grounding, single answer grounding recognition, few-shot video object recognition, few-shot private object localization, and zero-shot image classification. The second key component will be a discussion of current research and application issues, featuring invited speakers from both academia and industry who will share their experiences building today’s state-of-the-art assistive technologies and designing next-generation tools.
- Thursday, January 11: challenges announced
- Friday, January 12 [9:00 AM Central Standard Time]: challenges go live
- Friday, May 3 [9:00 AM Central Standard Time]: challenge submissions due
- Friday, May 10 [9:00 AM Central Standard Time]: extended abstracts due
- Friday, May 17 [5:59 PM Central Standard Time]: notification to authors about decisions for extended abstracts
We invite two types of submissions:
We invite submissions about algorithms for the following six challenge tasks: visual question answering, answer grounding, single answer grounding recognition, few-shot video object recognition, few-shot private object localization, and zero-shot image classification. We accept submissions describing algorithms that are unpublished, currently under review, or already published.
The teams with the top-performing submissions will be invited to give short talks during the workshop.
We invite submissions of extended abstracts on topics related to all challenge tasks as well as assistive technologies for people with visual impairments. Papers must be at most two pages (including references) and follow the CVPR formatting guidelines using the provided author kit. Reviewing will be single-blind, and accepted papers will be presented as posters. We will accept submissions on work that is unpublished, currently under review, or already published. There will be no proceedings. Please send your extended abstracts to email@example.com.
Please note that we will require all camera-ready content to be accessible via a screen reader. Since creating accessible PDFs and presentations may be a new process for some authors, we will host training sessions beforehand to educate and assist all authors in making their content accessible.
- 8:00-8:15 am: Opening remarks
- 8:15-8:30 am: Overview of three challenges related to VQA (VQA, Answer Grounding, Single Answer Grounding Recognition), winner announcements, and talks by challenge winners
- 8:30-9:00 am: Invited talk and Q&A with computer vision researcher (Soravit Beer Changpinyo).
- 9:00-9:30 am: Invited talk and Q&A with Aira representative (Troy Ottilio).
- 9:30-9:45 am: Poster spotlight talks
- 9:45-10:15 am: Poster session and break
- 10:15-10:30 am: Overview of three zero-shot and few-shot learning challenges (few-shot video object recognition, few-shot private object localization, zero-shot classification), winner announcements, and talk by challenge winner
- 10:30-11:00 am: Invited talk and Q&A with blind comedian and writer (Brian Fischler).
- 11:00-11:30 am: Invited talk and Q&A with linguistics expert (Elisa Kreiss).
- 11:30-12:00 pm: Open Q&A panel with four invited speakers
- 12:00-12:05 pm: Closing remarks
Invited Speakers and Panelists:
- That Real Blind Tech Show
- University of Colorado Boulder
- Carnegie Mellon University, Apple
- University of Texas at Austin