Practical Deep Dispersed Watermarking with Synchronization and Fusion
This repository is the official implementation of the ACM MM 2023 paper Practical Deep Dispersed Watermarking with Synchronization and Fusion.
Introduction
This paper focuses on two important and practical aspects that are not well addressed by existing deep-learning-based watermarking works: embedding in images of arbitrary (especially high) resolution, and robustness against complex attacks. To overcome these limitations, we propose a blind watermarking framework, DWSF, built on three novel components: dispersed embedding, watermark synchronization, and message fusion.
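The core idea of dispersed embedding is to avoid processing a high-resolution image as a whole: the cover image is divided into fixed-size blocks, and the watermark is embedded only into a small set of blocks selected pseudo-randomly by a secret key, so the same key reproduces the layout at extraction time. The sketch below illustrates this block-selection step; the block size, block count, and function name are illustrative assumptions, not the repository's exact settings.

```python
import numpy as np

def select_blocks(height, width, block=128, num_blocks=4, seed=2023):
    """Pseudo-randomly pick non-overlapping block positions keyed by a seed.

    The secret seed acts as the embedding key: the same seed yields the
    same dispersed block layout when decoding. (Block size and count are
    illustrative, not the paper's exact configuration.)
    """
    rng = np.random.default_rng(seed)
    rows = height // block   # number of candidate blocks vertically
    cols = width // block    # number of candidate blocks horizontally
    # Draw distinct block indices over the coarse grid, then map each
    # index back to a (top, left) pixel coordinate.
    flat = rng.choice(rows * cols, size=num_blocks, replace=False)
    return [(int(i // cols) * block, int(i % cols) * block) for i in flat]

# A 1080x1920 cover image gives an 8x15 grid of 128x128 candidate blocks;
# only the selected few would be passed through the encoder network.
positions = select_blocks(1080, 1920, block=128, num_blocks=4)
```

Because selection is keyed, the cost of embedding stays constant regardless of image resolution, which is what makes the scheme practical for high-resolution inputs.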
Dependencies
Environment
Dataset
COCO2017
ImageNet
OpenImages
LabelMe
Usage
Training
Train encoder_decoder
Train segmentation model
Evaluating
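Evaluation of blind watermarking is typically reported as bit accuracy: the fraction of message bits recovered after attacks. Since each dispersed block decodes its own copy of the message, the per-block results must also be fused into a single estimate. The sketch below shows bit accuracy plus a bit-wise majority-vote fusion, which is one plausible reading of the paper's message fusion; the repository's actual fusion strategy and evaluation script may differ.

```python
import numpy as np

def bit_accuracy(message, decoded):
    """Fraction of watermark bits recovered correctly."""
    message = np.asarray(message)
    decoded = np.asarray(decoded)
    return float((message == decoded).mean())

def fuse_messages(decoded_blocks):
    """Fuse per-block decodings by bit-wise majority vote.

    Each dispersed block yields its own decoded copy of the message;
    voting across blocks suppresses blocks corrupted by local attacks.
    (Illustrative only; the paper's fusion may weight blocks differently.)
    """
    votes = np.mean(decoded_blocks, axis=0)  # per-bit fraction of 1-votes
    return (votes >= 0.5).astype(int)

# Three blocks each decode a 3-bit message; the vote recovers [1, 0, 0].
fused = fuse_messages([[1, 0, 1], [1, 1, 0], [1, 0, 0]])
```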
Citation
If you find this work useful, please cite our paper:

@inproceedings{guo2023practical,
title={Practical Deep Dispersed Watermarking with Synchronization and Fusion},
author={Guo, Hengchang and Zhang, Qilong and Luo, Junwei and Guo, Feng and Zhang, Wenbin and Su, Xiaodong and Li, Minglei},
booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
pages={7922--7932},
year={2023}
}