# ModelNet40 PCT Classification with Jittor

This repository contains a Jittor implementation of a PCT (Point Cloud Transformer) baseline for ModelNet40 point-cloud classification. The model takes 2048 sampled 3D points per object and predicts one of 40 shape classes.
## Environment Installation

Tested locally with:

- Windows 11
- Python 3.9
- Jittor 1.3.8.5
- NVIDIA RTX 4070 Laptop GPU, CUDA enabled by Jittor
Install dependencies:

```shell
pip install -r requirements.txt
```

If using conda, the tested interpreter is:

```text
C:\Users\Lenovo\anaconda3\envs\jittor\python.exe
```
## Data Preparation

Put the competition-provided ModelNet40 point-cloud files under `data/`.
The `.npy` files are ignored by git and should not be committed to the code repository.

Check the data files:

```shell
python .\tools\inspect_data.py --data_dir .\data
```
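As a rough sanity check before training, the point arrays can also be inspected directly with NumPy. This is a minimal sketch, not the repository's `inspect_data.py`; the expected shape `(N, 2048, 3)` follows from the 2048-points-per-object description above, and the file name `train_points.npy` is the one mentioned later in this README.

```python
import numpy as np

def check_points(arr: np.ndarray, n_points: int = 2048) -> None:
    """Raise if the array is not a batch of point clouds of shape (N, n_points, 3)."""
    if arr.ndim != 3 or arr.shape[1:] != (n_points, 3):
        raise ValueError(f"expected (N, {n_points}, 3), got {arr.shape}")

# Example with synthetic data standing in for data/train_points.npy:
# points = np.load("data/train_points.npy")
points = np.zeros((4, 2048, 3), dtype=np.float32)
check_points(points)  # passes silently for a well-formed array
```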
## Training

Recommended Windows laptop command:

```shell
python .\pct.py --config .\configs\default.yaml
```

A PowerShell wrapper is also provided:

```shell
.\scripts\train.ps1
```

`configs/default.yaml` is loaded by default and is the recommended place to edit training parameters. Command-line arguments can still override config values when needed.
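The override precedence described above (config file supplies defaults, CLI wins when a flag is actually passed) can be sketched as follows. The function name and the specific keys are illustrative, not the repository's actual implementation:

```python
import argparse

def merge_config(config: dict, cli_args: argparse.Namespace) -> dict:
    """Return config with explicitly passed CLI values taking precedence."""
    merged = dict(config)
    for key, value in vars(cli_args).items():
        if value is not None:  # only override keys the user actually set
            merged[key] = value
    return merged

# Example: YAML defaults vs. a CLI override of batch_size.
defaults = {"n_points": 1024, "batch_size": 8, "epochs": 200}
args = argparse.Namespace(n_points=None, batch_size=16, epochs=None)
print(merge_config(defaults, args))
# {'n_points': 1024, 'batch_size': 16, 'epochs': 200}
```

Defaulting unset CLI flags to `None` is what lets the merge distinguish "user passed nothing" from "user passed the default value".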
## Evaluation / Inference

Generate the submission JSON from a trained checkpoint.

The output format is a JSON dictionary: keys are test sample ids as strings, values are predicted class ids.
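A minimal sketch of writing that dictionary, assuming sample ids are simply the enumeration order of the test set (the actual id scheme is defined by the competition, so verify it before submitting):

```python
import json

def write_submission(pred_ids, path="submission.json"):
    """Write predictions as a {sample_id_str: class_id} JSON dictionary."""
    result = {str(i): int(c) for i, c in enumerate(pred_ids)}
    with open(path, "w") as f:
        json.dump(result, f)
    return result

# Example with three dummy class predictions (class ids 0..39):
write_submission([5, 0, 39])
# writes {"0": 5, "1": 0, "2": 39}
```

Casting to `int` matters when predictions come out of an array library, since NumPy integer types are not JSON-serializable.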
## Result Description

The local validation metric is classification accuracy on a held-out split from `train_points.npy`. The online score may differ because the final submission is evaluated on the hidden labels of `test_points.npy`.

On this laptop, one epoch with `n_points=1024`, `batch_size=8`, and `num_workers=0` takes about 93 seconds after Jittor compilation. Full 200-epoch training is expected to take roughly 5 to 6 hours.
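The 5-to-6-hour estimate follows directly from the per-epoch timing; the arithmetic is:

```python
# 93 s/epoch measured after Jittor compilation, 200 epochs total.
seconds_per_epoch = 93
epochs = 200
total_hours = seconds_per_epoch * epochs / 3600
print(f"{total_hours:.1f} hours")  # 5.2 hours
```

The remainder of the 5-to-6-hour range covers one-time kernel compilation, validation passes, and checkpointing overhead.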
## Reproducibility

- Use `--seed` to set NumPy and Jittor random seeds.
- Keep generated logs, checkpoints, and submission files under `outputs/`.
- `outputs/` is ignored by git to avoid committing training artifacts.
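The seeding behind `--seed` can be sketched as below. The Jittor call `jt.set_global_seed` is an assumption about the Jittor API; check it against the Jittor version you install:

```python
import random
import numpy as np

def set_seed(seed: int) -> None:
    """Seed Python, NumPy, and (if available) Jittor RNGs for reproducibility."""
    random.seed(seed)
    np.random.seed(seed)
    try:
        import jittor as jt
        jt.set_global_seed(seed)  # assumed Jittor API; verify for your version
    except ImportError:
        pass  # allows the sketch to run without Jittor installed

set_seed(42)
a = np.random.rand(3)
set_seed(42)
b = np.random.rand(3)
assert (a == b).all()  # same seed, same draws
```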