A better way to learn skills and concepts

General purpose vision (GPV) systems are models designed to solve a wide array of visual tasks without requiring architectural changes. Today, GPVs primarily learn both skills and concepts from large fully supervised datasets. Scaling GPVs to tens of thousands of concepts by acquiring data for every concept-skill combination quickly becomes prohibitive. This work presents an effective and inexpensive alternative: learn skills from fully supervised datasets, learn concepts from web image search results, and leverage a key characteristic of GPVs -- the ability to transfer visual knowledge across skills. We use a dataset of 1M+ images spanning 10k+ visual concepts to demonstrate webly-supervised concept expansion for two existing GPVs on 3 benchmarks: 5 COCO-based datasets (80 primary concepts), a newly curated series of 5 datasets based on the OpenImages and VisualGenome repositories (~500 concepts), and the Web-derived dataset (10k+ concepts). We also propose a new architecture, GPV-2, that supports a variety of tasks -- from vision tasks like classification and localization, to vision+language tasks like QA and captioning, to more niche ones like human-object interaction recognition. GPV-2 benefits hugely from web data, outperforms GPV-1 and VL-T5 across these benchmarks, and performs well in a 0-shot setting on action and attribute recognition.

GPV-2 Teaser Figure
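
At its core, the recipe is to mix two training streams: supervised task data that teaches skills, and web image search data that teaches concepts. The sketch below illustrates that mixing step; the task names, batch structure, and 1:1 ratio are illustrative assumptions, not the paper's exact training schedule.

```python
import random

# Stand-in batches: supervised data teaches skills (e.g. VQA),
# web image search data teaches concepts via classification.
supervised_batches = [("vqa", f"coco_batch_{i}") for i in range(4)]
web_batches = [("classification", f"web10k_batch_{i}") for i in range(4)]

def training_stream(sup, web, seed=0):
    """Interleave supervised skill batches with webly-supervised
    concept batches into one shuffled epoch."""
    mixed = sup + web
    random.Random(seed).shuffle(mixed)
    return mixed

for task, batch in training_stream(supervised_batches, web_batches):
    # A GPV would run a forward/backward pass here; concept knowledge
    # picked up from web batches transfers to the other skills.
    print(task, batch)
```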

The Web10K Dataset

We present Web10K, a dataset sourced from web image search results that covers over 10K concepts. Web10K also provides dense coverage of feasible adjective-noun and verb-noun combinations, enabling models to learn fine-grained differences in object appearance due to attributes.
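
As a concrete illustration of how such adjective-noun and verb-noun search queries can be composed, here is a toy sketch; the vocabulary and feasibility whitelist below are stand-ins, not the actual Web10K pipeline.

```python
from itertools import product

# Toy vocabulary; the real Web10K covers 10k+ concepts with adjective
# and verb modifiers, filtered so only feasible combinations are queried.
nouns = ["dog", "car"]
adjectives = ["red", "fluffy"]
verbs = ["running", "parked"]

# Most raw pairings are nonsensical ("fluffy car"), so a feasibility
# filter is essential; this whitelist is a stand-in for that step.
feasible = {("red", "car"), ("fluffy", "dog"),
            ("running", "dog"), ("parked", "car")}

queries = list(nouns)
queries += [f"{a} {n}" for a, n in product(adjectives, nouns) if (a, n) in feasible]
queries += [f"{n} {v}" for n, v in product(nouns, verbs) if (v, n) in feasible]
print(queries)  # each string would be issued to a web image search engine
```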

The code to download the Web10K dataset is on GitHub.

Web10K Figure

GPV-2

GPV-2 combines an object detector with the T5 pre-trained language model. It supports additional input and output modalities (and thus tasks) beyond present-day GPVs (GPV-1 and VL-T5). GPV-2 uses the stronger VinVL object detector, a shared language decoder for all tasks (including localization), and a classification re-calibration approach; together, these improve generalization to unseen concepts at test time.

GPV-2 Architecture Figure
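
To give a feel for the re-calibration idea, here is a minimal sketch of one plausible scheme: candidate class names are scored by the language decoder, then adjusted by a prior estimated from training-answer frequencies so that concepts rarely seen in supervised data are not drowned out. All numbers, names, and the specific adjustment below are illustrative assumptions, not the exact method from the paper.

```python
import math

# Made-up per-class log-likelihoods from the language decoder,
# i.e. log P(class-name tokens | image, classification prompt).
decoder_logprobs = {"dog": -1.2, "cat": -1.5, "mongoose": -4.8}

# Made-up counts of each class in the supervised training answers;
# web-only concepts like "mongoose" are rare or absent there.
train_counts = {"dog": 9000, "cat": 7000, "mongoose": 0}
total = sum(train_counts.values())

def recalibrated(cls, alpha=0.5, smoothing=1.0):
    """Subtract a scaled log-prior so classes frequent in supervised
    training do not drown out rare, web-learned concepts."""
    prior = (train_counts[cls] + smoothing) / (total + smoothing * len(train_counts))
    return decoder_logprobs[cls] - alpha * math.log(prior)

scores = {c: round(recalibrated(c), 2) for c in decoder_logprobs}
print(max(scores, key=scores.get), scores)
```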

DCE Benchmark

We introduce the Diverse Concept Evaluation (DCE) benchmark to evaluate GPV models on a large subset of the 600 OpenImages categories across 5 skills: classification (Cls), classification-in-context (CiC), captioning (Cap), localization (Loc), and visual question answering (VQA).
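
Below is a minimal sketch of what a per-skill DCE evaluation loop could look like; the data layout, predict interface, and metric are hypothetical stand-ins, not the released benchmark code.

```python
SKILLS = ["Cls", "CiC", "Cap", "Loc", "VQA"]

def evaluate(predict, examples_by_skill, metrics):
    """Score a model on each DCE skill; `predict` maps
    (skill, image, prompt) -> prediction."""
    scores = {}
    for skill in SKILLS:
        examples = examples_by_skill.get(skill, [])
        preds = [predict(skill, ex["image"], ex["prompt"]) for ex in examples]
        golds = [ex["target"] for ex in examples]
        scores[skill] = metrics[skill](preds, golds)
    return scores

# Toy usage with stand-in components:
data = {"Cls": [{"image": "img_0.jpg", "prompt": "What is this?", "target": "otter"}]}
accuracy = lambda p, g: sum(x == y for x, y in zip(p, g)) / max(len(g), 1)
print(evaluate(lambda s, i, p: "otter", data, {s: accuracy for s in SKILLS}))
```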

Qualitative Results

GPV-2 results on DCE:
GPV-2 Qualitative Results
GPV-2 results on DCE with and without web data:
GPV-2 Qualitative Results with and without Web data

Paper

Webly Supervised Concept Expansion for General Purpose Vision Models

Amita Kamath*, Christopher Clark*, Tanmay Gupta*, Eric Kolve, Derek Hoiem, and Aniruddha Kembhavi
ECCV 2022