FreeSeg: Unified, Universal and Open-Vocabulary Image Segmentation
This repository contains the PyTorch code and trained models described in the CVPR 2023 paper "FreeSeg: Unified, Universal and Open-Vocabulary Image Segmentation". The algorithm is proposed by the ByteDance Intelligent Creation AutoML team.
Authors: Jie Qin, Jie Wu, Pengxiang Yan, Ming Li, Yuxi Ren, Xuefeng Xiao, Yitong Wang, Rui Wang, Shilei Wen, Xin Pan, Xingang Wang
Overview
Installation
Environment
Other dependencies
The modified clip package
CUDA kernel for MSDeformAttn
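In Mask2Former-derived repositories, the MSDeformAttn CUDA kernel is typically compiled with a small build script before training; the path below follows Mask2Former's layout and is an assumption — adjust it to where the ops live in this repo:

```shell
# Compile the MSDeformAttn CUDA ops (path follows Mask2Former's layout; adjust for this repo).
# Requires a CUDA toolkit matching your installed PyTorch build.
cd mask2former/modeling/pixel_decoder/ops
sh make.sh
```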
Dataset Preparation
We follow Mask2Former to build the datasets used in our experiments. The datasets are assumed to exist in a directory specified by the environment variable DETECTRON2_DATASETS; under this directory, detectron2 looks for datasets in the structure described below. Set the location for the builtin datasets with:
export DETECTRON2_DATASETS=/path/to/datasets
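As a quick sanity check, you can verify that the dataset root contains the COCO subdirectories detectron2 will look for. This is a minimal sketch — the expected subpaths are taken from the COCO layout listed in this README:

```python
import os
from pathlib import Path

# Subpaths detectron2 expects under $DETECTRON2_DATASETS for COCO
# (taken from the layout listed in this README).
EXPECTED = [
    "coco/annotations",
    "coco/train2017",
    "coco/val2017",
    "coco/panoptic_train2017",
    "coco/panoptic_val2017",
    "coco/stuffthingmaps",
]

def missing_coco_dirs(root: str) -> list[str]:
    """Return the expected COCO subdirectories that are missing under `root`."""
    base = Path(root)
    return [p for p in EXPECTED if not (base / p).is_dir()]

if __name__ == "__main__":
    root = os.environ.get("DETECTRON2_DATASETS", "datasets")
    missing = missing_coco_dirs(root)
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("COCO layout looks complete.")
```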
Expected dataset structure for COCO:
coco/
  annotations/
    instances_{train,val}2017.json
    panoptic_{train,val}2017.json
  {train,val}2017/
    # image files that are mentioned in the corresponding json
  panoptic_{train,val}2017/  # png annotations
  stuffthingmaps/
Then transform the data to detectron2 style and split it into the Seen (Base) subset and the Unseen (Novel) subset.
Expected dataset structure for VOC2012:
Then transform the data to detectron2 style and split it into the Seen (Base) subset and the Unseen (Novel) subset.
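The Seen/Unseen split above can be sketched as follows. This is a minimal illustration only — the category names and their base/novel assignment here are hypothetical placeholders, not the split used in the paper; the actual class lists live in the repo's dataset configs:

```python
# Hypothetical sketch of a base/novel (Seen/Unseen) category split for
# open-vocabulary segmentation. The real split is defined by the repo's configs.
BASE_CLASSES = {"person", "car", "dog"}      # seen during training (placeholder)
NOVEL_CLASSES = {"giraffe", "skateboard"}    # held out for evaluation (placeholder)

def split_annotations(annotations):
    """Partition per-segment annotations into seen/unseen lists by category name."""
    seen, unseen = [], []
    for ann in annotations:
        if ann["category"] in BASE_CLASSES:
            seen.append(ann)
        elif ann["category"] in NOVEL_CLASSES:
            unseen.append(ann)
    return seen, unseen
```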
Getting Started
Training
To train a model with "train_net.py", first make sure the dataset preparations above are done. Take training on COCO as an example.
Training prompts
Training model
Evaluation
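With detectron2-style launch scripts, training and evaluation usually follow the pattern below; the config file name, GPU count, and checkpoint path here are placeholders for illustration — substitute the configs shipped in this repo:

```shell
# Train on COCO (config path is a placeholder; use the repo's actual config file)
python train_net.py --config-file configs/coco/freeseg_R50.yaml --num-gpus 8

# Evaluate a trained checkpoint
python train_net.py --config-file configs/coco/freeseg_R50.yaml \
  --eval-only MODEL.WEIGHTS /path/to/checkpoint.pth
```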
Testing for Demo
The model weights for the demo can be downloaded from the model link.
Citation
If you find this work useful, please cite the paper as below:
@inproceedings{qin2023freeseg,
  title={FreeSeg: Unified, Universal and Open-Vocabulary Image Segmentation},
  author={Qin, Jie and Wu, Jie and Yan, Pengxiang and Li, Ming and Ren, Yuxi and Xiao, Xuefeng and Wang, Yitong and Wang, Rui and Wen, Shilei and Pan, Xin and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={19446--19455},
  year={2023}
}