Robust Unlearnable Examples: Protecting Data Against Adversarial Learning
This is the official repository for the ICLR 2022 paper “Robust Unlearnable Examples: Protecting Data Against Adversarial Learning” by Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, and Dacheng Tao.
Requirements
Install dependencies using pip
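A minimal sketch of a pip-based setup, assuming the repository ships a requirements.txt at its root (the file name is an assumption; check the repository for the actual dependency list):

```bash
# Assumes a requirements.txt at the repository root (hypothetical file name).
pip install -r requirements.txt
```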
Install dependencies using Anaconda
We recommend creating your experiment environment with Anaconda3.
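A sketch of a conda-based setup; the environment name and Python version below are illustrative choices rather than values prescribed by the repository:

```bash
# Create and activate a fresh environment (name and Python version are illustrative).
conda create -n robust-ulexp python=3.8
conda activate robust-ulexp
# Install the project's dependencies inside the environment (file name is an assumption).
pip install -r requirements.txt
```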
Quick Start
We give an example of creating robust unlearnable examples from the CIFAR-10 dataset. More experiment examples can be found in ./scripts.
Generate robust error-minimizing noise for the CIFAR-10 dataset
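The exact entry point and flags for this step live in ./scripts; the command below is only a hypothetical invocation to show the shape of the step (the script name generate_robust_em.py and every flag are assumptions):

```bash
# Hypothetical invocation; see ./scripts for the actual command and arguments.
python generate_robust_em.py \
    --dataset cifar10 \
    --arch resnet18 \
    --save-dir ./exp/cifar10-rem-noise
```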
Perform adversarial training on robust unlearnable examples
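Again hypothetical: the script name adv_train.py, the noise file path, and the flags below are assumptions, and the real commands are provided in ./scripts:

```bash
# Hypothetical invocation; see ./scripts for the actual command and arguments.
python adv_train.py \
    --dataset cifar10 \
    --arch resnet18 \
    --noise-path ./exp/cifar10-rem-noise/noise.pt \
    --save-dir ./exp/cifar10-rem-advtrain
```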
Citation
@inproceedings{fu2022robust,
    title={Robust Unlearnable Examples: Protecting Data Against Adversarial Learning},
    author={Shaopeng Fu and Fengxiang He and Yang Liu and Li Shen and Dacheng Tao},
    booktitle={International Conference on Learning Representations},
    year={2022}
}
Acknowledgment