About One-Shot Part Labeling

One-shot part labeling Example

Diagrams often depict complex phenomena and serve as a good test bed for visual and textual reasoning. However, understanding diagrams using natural image understanding approaches requires large training datasets of diagrams, which are very hard to obtain. Instead, this can be addressed as a matching problem, either between labeled diagrams, images, or both. This problem is very challenging since the absence of significant color and texture renders local cues ambiguous and requires global reasoning. We consider the problem of one-shot part labeling: labeling multiple parts of an object in a target image given only a single source image of that category. For this set-to-set matching problem, we introduce the Structured Set Matching Network (SSMN), a structured prediction model that incorporates convolutional neural networks. The SSMN is trained using global normalization to maximize local match scores between corresponding elements and a global consistency score among all matched elements, while also enforcing a matching constraint between the two sets. The SSMN significantly outperforms several strong baselines on three label transfer scenarios: diagram-to-diagram, evaluated on a new diagram dataset of over 200 categories; image-to-image, evaluated on a dataset built on top of the Pascal Part Dataset; and image-to-diagram, evaluated on transferring labels across these datasets.

Paper

Structured Set Matching Networks for One-Shot Part Labeling

Jonghyun Choi, Jayant Krishnamurthy, Aniruddha Kembhavi, and Ali Farhadi. CVPR 2018

Model

Structured Set Matching Network (SSMN)

For a given source and target image pair and the corresponding part locations, we first compute part embeddings with CNNs and appearance similarity scores with context using an LSTM. We call this the “Appearance Matching Network”. We then compute embeddings of the part names to form a factor graph together with the appearance similarities. On the nodes of the graph, we enforce set-matching and global-consistency constraints.
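The similarity-scoring step of this pipeline can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: it assumes part embeddings (which the SSMN would produce with CNNs and contextualize with an LSTM) are already given, and only shows how a pairwise appearance-similarity matrix between source and target parts can be computed.

```python
import numpy as np

def appearance_similarity(src_emb, tgt_emb):
    """Pairwise cosine similarity between source and target part embeddings.

    Simplified sketch: the Appearance Matching Network in the paper derives
    these embeddings from CNN features refined with an LSTM over all parts;
    here they are assumed given, and only the scoring step is shown.
    """
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    return src @ tgt.T  # shape (n_src_parts, n_tgt_parts)

# Toy example: 3 parts per image, 4-dimensional embeddings.
rng = np.random.default_rng(0)
sim = appearance_similarity(rng.normal(size=(3, 4)),
                            rng.normal(size=(3, 4)))
```

Each entry of the resulting matrix scores one candidate (source part, target part) correspondence; in the SSMN these local scores become unary factors in the factor graph.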

SSMN Model

Results

Comparative results with baselines and state-of-the-art models.

SSMN Results

In both the diagram and image domains, we evaluate three trivial baselines, two state-of-the-art methods, our extension with the Hungarian algorithm, and the SSMN along with its ablated models. The SSMN improves set-matching accuracy by 16.8% on diagrams and 6.4% on images over the state-of-the-art Matching Network [48].
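The one-to-one set-matching constraint referred to above can be illustrated with the Hungarian algorithm, which turns a matrix of match scores into a hard assignment where each source part is paired with exactly one target part. This is only a sketch of the constraint, using a hypothetical score matrix, not the authors' code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical appearance-similarity scores: rows are source parts,
# columns are target parts (higher = better match).
scores = np.array([
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.3],
    [0.1, 0.4, 0.7],
])

# Hungarian algorithm: find the one-to-one assignment that
# maximizes the total similarity across all matched pairs.
rows, cols = linear_sum_assignment(scores, maximize=True)
matching = dict(zip(rows.tolist(), cols.tolist()))
# → {0: 0, 1: 1, 2: 2}
```

Greedily picking each part's best match independently could assign two source parts to the same target part; the global assignment above rules that out, which is the role the matching constraint plays inside the SSMN's structured prediction.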

Explore Datasets

One-Shot Part Labeling Screenshot

On the dataset explorer page, you can browse the dataset by subject matter or search by keyword.

DiPART Dataset

Images: 4,921
Annotated Parts: 49,210
Total Pairs (Training/Validation/Testing): 71,521 (50,835/10,631/10,055)

DiPART is a dataset of part-annotated diagrams for evaluating one-shot part labeling algorithms. It contains challenging diagram pairs to match.

Pascal Part Matching (PPM) Dataset

Images: 2,326
Annotated Parts: 23,260
Total Pairs (Training/Validation/Testing): 55,450 (37,330/9,060/9,060)

Pascal Part Matching is a dataset of part-annotated natural images for evaluating one-shot part labeling algorithms. It is a subset of the Pascal Part dataset.

Cross-DiPART-PPM Dataset

Images: 1,043
Annotated Parts: 4,172
Total Pairs (Training/Validation/Testing): 27,449 (18,489/4,480/4,480)

Cross-DiPART-PPM is a dataset of part-annotated diagrams and images for evaluating one-shot part labeling algorithms in a cross-domain setting (diagram to image). It is curated from subsets of the DiPART and PPM datasets.

For questions about these datasets, please contact ai2-data@allenai.org.

Download Links