Lastly, to reformat the validation set, run the following under the folder “data/tiny-imagenet-200”:
python3 preprocess_tiny_imagenet.py
Running Instructions
Shell scripts to reproduce the experimental results in our paper are under the “run_scripts” folder. Simply change the “ALPHA” variable to run under different degrees of heterogeneity.
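For context, heterogeneity of this kind is commonly simulated by splitting each class's samples across clients with Dirichlet-sampled proportions, where a smaller alpha yields more skewed client label distributions. The helper below is a hypothetical sketch of such a scheme; the function name, arguments, and exact splitting logic are illustrative assumptions, not this repo's code.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Sketch of Dirichlet-based non-IID partitioning (illustrative only).

    Smaller alpha -> more heterogeneous (skewed) client label distributions.
    Returns a list of index lists, one per client.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        # shuffle this class's sample indices
        idx = rng.permutation(np.where(labels == c)[0])
        # draw per-client proportions for this class from Dirichlet(alpha)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        # convert proportions to split points and hand out the chunks
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```

Every sample is assigned to exactly one client, so the union of the returned index lists covers the whole dataset.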
Here are commands that replicate our results:
FedAvg on CIFAR10:
bash run_scripts/cifar10_fedavg.sh
FedAvg + FedDecorr on CIFAR10:
bash run_scripts/cifar10_fedavg_feddecorr.sh
Experiments with other methods (FedAvgM, FedProx, MOON) and other datasets (CIFAR100, TinyImageNet) follow a similar pattern.
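For intuition about what FedDecorr adds on top of these baselines: it regularizes local training by penalizing correlations among feature dimensions, which mitigates dimensional collapse. The function below is a minimal NumPy sketch of such a decorrelation penalty; the function name, normalization details, and scaling are assumptions for illustration, not the repo's exact implementation.

```python
import numpy as np

def feddecorr_loss(z, eps=1e-8):
    """Illustrative sketch of a feature-decorrelation penalty.

    z: (N, d) batch of feature vectors.
    Returns the mean squared entry of the feature correlation matrix,
    which shrinks as the d feature dimensions become decorrelated.
    """
    n, d = z.shape
    z = z - z.mean(axis=0, keepdims=True)          # center each dimension
    z = z / (z.std(axis=0, keepdims=True) + eps)   # scale to unit variance
    corr = (z.T @ z) / n                           # (d, d) correlation matrix
    return (corr ** 2).sum() / (d ** 2)            # squared Frobenius norm / d^2
```

In a training loop, a term like this would be scaled by a coefficient and added to the usual classification loss during each client's local updates.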
Citation
If you find our repo/paper helpful, please consider citing our work :)
@article{shi2022towards,
  title={Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning},
  author={Shi, Yujun and Liang, Jian and Zhang, Wenqing and Tan, Vincent YF and Bai, Song},
  journal={arXiv preprint arXiv:2210.00226},
  year={2022}
}
Introduction
This repo contains the official implementation of our paper “Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning” (arXiv:2210.00226), along with unofficial implementations of several baseline methods (FedAvgM, FedProx, MOON).
Contact
Yujun Shi (shi.yujun@u.nus.edu)
Acknowledgement
Some of our code is borrowed from the following projects: MOON, NIID-Bench, SAM (PyTorch).