Destruction and Construction Learning for Fine-grained Image Recognition
By Yue Chen, Yalong Bai, Wei Zhang, Tao Mei
Special thanks to Yuanzhi Liang for code refactoring.
UPDATE Jun. 10, 2020
UPDATE Jun. 21, 2019
Our solution for the FGVC Challenge 2019 (The Sixth Workshop on Fine-Grained Visual Categorization at CVPR 2019) has been updated!
With an ensemble of several DCL-based classification models, we won:
Introduction
This project is a PyTorch implementation of DCL (Destruction and Construction Learning for Fine-Grained Image Recognition), CVPR 2019.
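As a rough illustration of the "destruction" step described in the paper (the region confusion mechanism), the sketch below shuffles an image's patch grid under a neighborhood constraint. This is a simplified NumPy rendition for intuition only, not the repository's implementation: it draws one constrained permutation per axis, whereas the paper draws them per row and per column.

```python
import numpy as np

def region_confusion(img, n=7, k=2, rng=None):
    """Shuffle the n x n patch grid of img (H x W x C, with H and W
    divisible by n). Each axis permutation is obtained by perturbing the
    patch indices with uniform noise in [-k, k] and sorting, so every
    patch ends up fewer than 2k positions from where it started.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[0] // n, img.shape[1] // n
    # Constrained permutations: argsort of (index + bounded noise).
    row_perm = np.argsort(np.arange(n) + rng.uniform(-k, k, n))
    col_perm = np.argsort(np.arange(n) + rng.uniform(-k, k, n))
    out = np.empty_like(img)
    for i in range(n):
        for j in range(n):
            si, sj = row_perm[i], col_perm[j]
            out[i*h:(i+1)*h, j*w:(j+1)*w] = img[si*h:(si+1)*h,
                                                sj*w:(sj+1)*w]
    return out
```

The destroyed image keeps all local patches intact while scrambling their global arrangement, which is what forces the network to rely on discriminative local detail.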
Requirements
Python 3.6
PyTorch 0.4.0 or 0.4.1
CUDA 8.0 or higher
For docker environment:
For conda environment:
For more backbone support in DCL, please check the pretrainedmodels package and install it:
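The install commands were stripped from this page, so the following is an assumed setup, not the original README's commands: package names and versions are inferred from the requirements above, and the backbone zoo is presumed to be Cadene's pretrained-models.pytorch package.

```shell
# Assumed conda environment setup; versions follow the requirements above.
conda create -n dcl python=3.6
conda activate dcl
conda install pytorch=0.4.1 -c pytorch
# Extra backbones, presumed to come from pretrained-models.pytorch:
pip install pretrainedmodels
```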
Datasets Prepare
Download the corresponding dataset to the folder 'datasets'
Data organization, e.g. CUB:
All the image data are in './datasets/CUB/data/', e.g. './datasets/CUB/data/*.jpg'
The annotation files are in './datasets/CUB/anno/', e.g. './datasets/CUB/anno/train.txt'
In annotations:
e.g. for CUB in the repository:
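The annotation listing itself was lost when this page was extracted. As a hedged illustration, the snippet below assumes the common one-image-per-line layout "<image_name> <integer_class_label>" (the file names here are invented, not from the repository) and shows how such a file parses:

```python
# Hypothetical annotation file in the assumed "<image_name> <label>" layout.
sample = """Black_Footed_Albatross_0001.jpg 0
Laysan_Albatross_0002.jpg 1
"""

with open("train.txt", "w") as f:
    f.write(sample)

def load_annotations(path):
    """Parse an annotation file into (image_name, label) pairs."""
    samples = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                name, label = line.rsplit(maxsplit=1)
                samples.append((name, int(label)))
    return samples

print(load_annotations("train.txt"))
# -> [('Black_Footed_Albatross_0001.jpg', 0), ('Laysan_Albatross_0002.jpg', 1)]
```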
Some example datasets, such as CUB and Stanford Cars, are already provided in our repository. You can apply DCL to your own dataset by simply converting its annotations to train.txt/val.txt/test.txt and modifying the class number in config.py, as on line 67: numcls=200.

Training
Run train.py to train DCL.
For training CUB / STCAR / AIR from scratch:
For training CUB / STCAR / AIR from a trained checkpoint:
For training FGVC product datasets from scratch:
For training FGVC datasets from a trained checkpoint:
To achieve results similar to those reported in the paper, please use the default parameter settings.
Citation
Please cite our CVPR19 paper if you use this codebase in your work:

@InProceedings{Chen_2019_CVPR,
  author = {Chen, Yue and Bai, Yalong and Zhang, Wei and Mei, Tao},
  title = {Destruction and Construction Learning for Fine-Grained Image Recognition},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2019}
}
See our more recent work:
Look-into-Object: Self-supervised Structure Modeling for Object Recognition. CVPR2020 [pdf, Source Code]