PCT_jittor

A Jittor implementation of Point Cloud Transformer (PCT) for ModelNet40 point-cloud classification.

This repository contains the training code, distributed training scripts, and experiment report for PA3. Dataset files, checkpoints, logs, and generated prediction files are intentionally excluded from Git because they are large or reproducible artifacts.

Project Structure

.
├── pct.py                  # Dataset, augmentation, PCT model, training, prediction
├── pct_prev.py             # Initial reference code used for comparison
├── scripts/
│   ├── train_8gpu.sh       # OpenMPI/Jittor 8-GPU launcher
│   └── mpi_rank.sh         # MPI local-rank to GPU binding
├── REPORT.md               # Experiment report
├── README.md
├── LICENSE
└── .gitignore

Expected local data layout:

data/
├── train_points.npy
├── train_labels.npy
├── test_points.npy
└── categories.txt

The data/ directory is ignored by Git.
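A minimal sanity check for this layout can be sketched as follows. The exact array shapes are assumptions about the ModelNet40 export (N clouds of P xyz points each), not something verified against pct.py:

```python
import numpy as np

def check_layout(root="data"):
    # Load the training arrays from the expected layout (shapes are assumed).
    pts = np.load(f"{root}/train_points.npy")
    lbl = np.load(f"{root}/train_labels.npy")
    # Point clouds are expected as (N, P, 3) xyz coordinates with one label per cloud.
    assert pts.ndim == 3 and pts.shape[-1] == 3, "expected (N, P, 3) point clouds"
    assert lbl.shape[0] == pts.shape[0], "expected one label per cloud"
    return pts.shape, lbl.shape
```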

Environment

The code is tested with:

  • Linux
  • Python 3.8
  • Jittor 1.3.10.0
  • CUDA GPU
  • OpenMPI 4.1.x for distributed training

Install Jittor according to the official documentation:

pip install jittor

For 8-GPU training, install OpenMPI in the same environment:

conda install -c conda-forge openmpi

Single-GPU Training

python pct.py --epochs 200 --n_points 1024 --batch_size 32

Useful options:

python pct.py \
  --epochs 200 \
  --n_points 1024 \
  --batch_size 32 \
  --optimizer sgd \
  --scheduler cosine

8-GPU Training

Jittor uses MPI for multi-GPU training. Run:

bash scripts/train_8gpu.sh

Choose specific GPUs:

GPUS=0,1,2,3,4,5,6,7 bash scripts/train_8gpu.sh

Override common training parameters:

EPOCHS=200 N_POINTS=1024 BATCH_SIZE=64 NUM_WORKERS=4 bash scripts/train_8gpu.sh

In MPI mode, Jittor shards the dataset across ranks and synchronizes gradients automatically. The --batch_size argument is the global batch size across all ranks, not the per-GPU size. Rank 0 saves the model and exports the full prediction file.
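Since --batch_size is global, each rank processes roughly batch_size / world_size samples per step. The split can be sketched as below (the helper name is hypothetical, not part of this repository; how Jittor distributes any remainder internally is not verified here):

```python
def per_rank_batch(global_batch, world_size, rank):
    # Divide a global batch size across ranks; lower ranks absorb the remainder,
    # so the per-rank sizes always sum back to the global batch size.
    base, rem = divmod(global_batch, world_size)
    return base + (1 if rank < rem else 0)
```

For example, the 8-GPU run above with a global batch size of 64 gives each rank 8 samples per step.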

Large generated files are written under /data/qiaojiaxuan by default:

  • Jittor cache: /data/qiaojiaxuan/jittor_home
  • Model: /data/qiaojiaxuan/PA3/outputs/pct_model.pkl
  • Predictions: /data/qiaojiaxuan/PA3/outputs/result.json
  • Log: /data/qiaojiaxuan/PA3/outputs/train_8gpu.log

Result

The final 8-GPU training run used 200 epochs, 1024 input points, global batch size 64, SGD, and cosine annealing. The final training metrics were:

Epoch [200/200]  Loss: 0.2982  Train Acc: 90.27%  LR: 0.000010

The generated result.json contains 2468 test predictions.

Prediction Format

result.json is a JSON object mapping each test sample index (as a string key) to its predicted class id:

{
  "0": 12,
  "1": 3
}
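A short sketch of consuming this file (the function name is hypothetical; JSON keys are strings, so they are converted back to integer indices here):

```python
import json

def load_predictions(path="result.json"):
    # Read the index -> class-id mapping produced by the training script.
    with open(path) as f:
        preds = json.load(f)
    # JSON object keys are strings; convert them to int sample indices.
    return {int(k): v for k, v in preds.items()}
```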

License

This project is released under the MIT License.
