Official Implementation of ConfDiff (ICML'24) - Protein Conformation Generation via Force-Guided SE(3) Diffusion Models
This repository is the official implementation of the ICML 2024 paper Protein Conformation Generation via Force-Guided SE(3) Diffusion Models, which introduces ConfDiff, a force-guided SE(3) diffusion model for protein conformation generation. ConfDiff can generate protein conformations with rich diversity while preserving high fidelity. Physics-based energy and force guidance strategies effectively guide the diffusion sampler to generate low-energy conformations that better align with the underlying Boltzmann distribution.
With recent progress in protein conformation prediction, we extend ConfDiff to ConfDiff-FullAtom, a family of diffusion models for full-atom protein conformation prediction. The current models include the following updates:
Integrated a regression module to predict atomic coordinates for side-chain heavy atoms
Provided model options with four folding-model representations (ESMFold or OpenFold, each with a recycling number of 0 or 3)
Used all feature outputs from the folding model (node + edge) for diffusion model training.
Released a version of sequence-conditional models fine-tuned on the ATLAS MD dataset.
Pretrained Representation
We precompute ESMFold and OpenFold representations as inputs to the model. The detailed generation pipeline is described in the README of the pretrained_repr/ folder.
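A rough sketch of such a representation cache, assuming a simple per-protein .npz layout with node ([L, d_node]) and edge ([L, L, d_edge]) arrays; the file layout, array names, and feature dimensions below are illustrative assumptions, not the repository's actual format (see pretrained_repr/ for that):

```python
import numpy as np
from pathlib import Path

def save_repr(cache_dir, name, node, edge):
    """Cache one protein's folding-model outputs as a compressed .npz file."""
    cache_dir = Path(cache_dir)
    cache_dir.mkdir(parents=True, exist_ok=True)
    np.savez_compressed(cache_dir / f"{name}.npz", node=node, edge=edge)

def load_repr(cache_dir, name):
    """Load cached node/edge representations for diffusion-model training."""
    with np.load(Path(cache_dir) / f"{name}.npz") as f:
        return f["node"], f["edge"]

# Example: a 64-residue protein with hypothetical 384-dim node
# and 128-dim edge features
node = np.random.randn(64, 384).astype(np.float32)      # [L, d_node]
edge = np.random.randn(64, 64, 128).astype(np.float32)  # [L, L, d_edge]
save_repr("repr_cache", "1ubq_A", node, edge)
n, e = load_repr("repr_cache", "1ubq_A")
```

Precomputing once and loading from disk avoids re-running the (expensive) folding model at every training step.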
ConfDiff-BASE
ConfDiff-BASE employs a sequence-based conditional score network to guide an unconditional score model using classifier-free guidance, enabling diverse conformation sampling while ensuring structural fidelity to the input sequence.
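The sampling rule behind this can be sketched in a few lines. This is the generic classifier-free guidance combination, with w playing the role of the clsfree_guidance_strength override used in the commands below; the repository's exact parameterization may differ:

```python
import numpy as np

def clsfree_score(score_cond, score_uncond, w):
    """Classifier-free guidance: interpolate/extrapolate from the
    unconditional score toward the sequence-conditional score.
    w = 0 -> purely unconditional; w = 1 -> purely conditional."""
    return score_uncond + w * (score_cond - score_uncond)

# Toy 2-dimensional score estimates
s_cond = np.array([1.0, 2.0])
s_uncond = np.array([0.0, 0.0])
print(clsfree_score(s_cond, s_uncond, 0.8))  # -> [0.8 1.6]
```

Intermediate strengths trade off diversity (unconditional model) against sequence fidelity (conditional model).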
Prepare datasets
We train ConfDiff-BASE using protein structures from the Protein Data Bank, and evaluate on various datasets including fast-folding, BPTI, apo-holo, and ATLAS.
Details on dataset preparation and evaluation can be found in the dataset folder.
The following datasets and pre-computed representations are required to train ConfDiff-BASE:
RCSB PDB dataset: See dataset/rcsb for details. Once prepared, specify the csv_path and pdb_dir in the configuration file configs/paths/default.yaml.
ESMFold or OpenFold representations: See pretrained_repr for details. Once prepared, specify the data_root of esmfold_repr/openfold_repr in the configuration file configs/paths/default.yaml.
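For orientation, the relevant entries in configs/paths/default.yaml might look roughly like the following; the paths and nesting are illustrative, so check the shipped file for the exact keys:

```yaml
# Illustrative only -- verify key names against the shipped configs/paths/default.yaml
csv_path: /data/rcsb/metadata.csv  # RCSB metadata CSV
pdb_dir: /data/rcsb/pdbs           # extracted PDB structures
esmfold_repr:
  data_root: /data/repr/esmfold    # precomputed ESMFold representations
openfold_repr:
  data_root: /data/repr/openfold   # precomputed OpenFold representations
```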
Training
ConfDiff-BASE consists of a sequence-conditional model and an unconditional model. The detailed training configurations can be found in configs/experiment/full_atom.yaml (conditional model) and configs/experiment/uncond.yaml (unconditional model).
Model Checkpoints
Pretrained ConfDiff-BASE models are available with different pretrained representations.
Inference
To sample conformations using the ConfDiff-BASE model:
# Note: the model checkpoint and the representation type must be compatible.
python3 src/eval.py \
task_name=eval_base_bpti \
experiment=clsfree_guide \
data/repr_loader=openfold \
data.repr_loader.num_recycles=3 \
paths.guidance.cond_ckpt=/path/to/your/cond_model \
paths.guidance.uncond_ckpt=/path/to/your/uncond_model \
data.dataset.test_gen_dataset.csv_path=/path/to/your/testset_csv \
data.dataset.test_gen_dataset.num_samples=1000 \
data.gen_batch_size=20 \
model.score_network.cfg.clsfree_guidance_strength=0.8
ConfDiff-FORCE/ENERGY
By utilizing prior information from the MD force field, our model effectively reweights the generated conformations so that they better adhere to the equilibrium distribution.
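The core idea can be sketched as follows: for a Boltzmann distribution p_eq(x) ∝ exp(-E(x)/kBT), the gradient of the log-density is the physical force divided by kBT, so a scaled force term can be added to the diffusion score. This is a minimal illustration of that relation, not the paper's full intermediate-state guidance scheme, and the strength parameter here merely mirrors the force_guidance_strength override used below:

```python
import numpy as np

KBT = 2.494  # kB * T in kJ/mol at ~300 K

def force_guided_score(score, force, strength):
    """Add a Boltzmann-motivated force term to the diffusion score.
    Since grad log p_eq(x) = F(x) / kBT for p_eq ∝ exp(-E/kBT),
    adding the scaled MD force steers samples toward low-energy states."""
    return score + strength * force / KBT

score = np.zeros((5, 3))        # [num_atoms, 3] toy score estimate
force = np.full((5, 3), 2.494)  # toy MD forces in kJ/(mol*nm)
guided = force_guided_score(score, force, strength=1.0)
```

Larger strength values pull samples harder toward low-energy regions, at the cost of diversity.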
Data
Protein conformations with force or energy labels are required to train the corresponding ConfDiff-FORCE or ConfDiff-ENERGY.
We use OpenMM to evaluate the energy and forces of the conformation samples generated by ConfDiff-BASE.
To evaluate force and energy labels using OpenMM and prepare the training data:
The output directory /path/to/your/output_dir contains force annotation files (with the suffix *force.npy), optimized energy PDB files, and train and validation CSV files with energy labels.
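The bookkeeping for this output directory can be sketched as below; the column names, split rule, and file naming are assumptions for illustration, not the repository's exact schema:

```python
import csv
from pathlib import Path

def write_split_csvs(output_dir, records, val_fraction=0.1):
    """Write train.csv / val.csv mapping each sample to its structure file,
    force annotation file, and energy label.
    `records` is a list of dicts with keys: name, pdb_path, force_path, energy.
    Column names here are illustrative, not the repo's exact schema."""
    output_dir = Path(output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)
    n_val = max(1, int(len(records) * val_fraction))
    splits = {"val.csv": records[:n_val], "train.csv": records[n_val:]}
    for fname, rows in splits.items():
        with open(output_dir / fname, "w", newline="") as f:
            writer = csv.DictWriter(
                f, fieldnames=["name", "pdb_path", "force_path", "energy"])
            writer.writeheader()
            writer.writerows(rows)

# Toy records following the *force.npy naming convention from the text
records = [
    {"name": f"sample_{i}", "pdb_path": f"sample_{i}.pdb",
     "force_path": f"sample_{i}_force.npy", "energy": -1200.0 - i}
    for i in range(10)
]
write_split_csvs("openmm_labels", records)
```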
Training
Before training, please ensure that the pretrained representations for the training proteins have been prepared.
To train ConfDiff-FORCE:
# case for training ConfDiff-FORCE
python3 src/train.py \
experiment=force_guide \
data/repr_loader=esmfold \
data.repr_loader.num_recycles=3 \
paths.guidance.cond_ckpt=/path/to/your/cond_model \
paths.guidance.uncond_ckpt=/path/to/your/uncond_model \
paths.guidance.train_csv=/path/to/your/output_dir/train.csv \
paths.guidance.val_csv=/path/to/your/output_dir/val.csv \
paths.guidance.pdb_dir=/path/to/your/output_dir/ \
data.train_batch_size=4
Similarly, the ConfDiff-ENERGY model can be trained by setting experiment=energy_guide.
Detailed training configurations can be found in configs/experiment/force_guide.yaml and configs/experiment/energy_guide.yaml.
Model Checkpoints
Pretrained ConfDiff-FORCE/ENERGY checkpoints with ESMFold representations are available for different datasets.
We found that using only the node representation yields better results on the fast-folding dataset. To train models with only the node representation, set data.repr_loader.edge_size=0.
Inference
To sample conformations using the ConfDiff-FORCE/ENERGY model:
# case for generating samples by ConfDiff-FORCE
python3 src/eval.py \
task_name=eval_force \
experiment=force_guide \
data/repr_loader=esmfold \
data.repr_loader.num_recycles=3 \
ckpt_path=/path/to/your/model/ckpt/ \
data.dataset.test_gen_dataset.csv_path=/path/to/your/testset_csv \
data.dataset.test_gen_dataset.num_samples=1000 \
data.gen_batch_size=20 \
model.score_network.cfg.clsfree_guidance_strength=0.8 \
model.score_network.cfg.force_guidance_strength=1.0
# data.repr_loader.edge_size=0 for pretrained checkpoints on fast-folding
Fine-tuning On ATLAS
See datasets/atlas for ATLAS data preparation and instructions to fine-tune ConfDiff-BASE on ATLAS. Models fine-tuned on the ATLAS MD dataset are also released.
Performance
We benchmark model performance on the following datasets: BPTI, fast-folding, Apo-Holo, and ATLAS. Evaluation details can be found in the datasets folder and the notebook notebooks/analysis.ipynb.
ConfDiff-XXX-ClsFree refers to the ConfDiff-BASE model utilizing classifier-free guidance sampling with the ConfDiff-XXX-COND and ConfDiff-UNCOND models. As described in the paper, all results are based on ensemble sampling with varying levels of classifier-free guidance strength: 0.5 to 1.0 for the fast-folding dataset, and 0.8 to 1.0 for the other datasets. For BPTI and fast-folding, we also provide results from the pretrained FORCE and ENERGY models.
BPTI
| Model | RMSDens | Pairwise RMSD | Best RMSD to Cluster 3 | CA-Break Rate % | PepBond-Break Rate % |
| --- | --- | --- | --- | --- | --- |
| ConfDiff-ESM-r3-ClsFree | 1.39 | 1.80 | 2.32 | 0.5 | 7.5 |
| ConfDiff-ESM-r3-Energy | 1.41 | 1.22 | 2.39 | 0.1 | 7.5 |
| ConfDiff-ESM-r3-Force | 1.34 | 1.76 | 2.18 | 0.1 | 8.9 |
The guidance strength is set to 1.5 for the FORCE model and 1.0 for the ENERGY model.
Fast-Folding
| Model | JS-PwD | JS-Rg | JS-TIC | JS-TIC2D | Val-Clash (CA) |
| --- | --- | --- | --- | --- | --- |
| ConfDiff-ESM-r0-ClsFree | 0.32/0.32 | 0.29/0.30 | 0.37/0.38 | 0.54/0.52 | 0.903/0.935 |
| ConfDiff-ESM-r0-Energy | 0.39/0.40 | 0.37/0.36 | 0.41/0.43 | 0.58/0.58 | 0.991/0.994 |
| ConfDiff-ESM-r0-Force | 0.34/0.33 | 0.31/0.30 | 0.40/0.44 | 0.58/0.60 | 0.975/0.982 |
The models here utilize only the pretrained node representation. The force guidance strength is set to 2.0 for the FORCE model and 1.0 for the ENERGY model.
Citation
@inproceedings{wang2024proteinconfdiff,
  title={Protein Conformation Generation via Force-Guided SE(3) Diffusion Models},
  author={Wang, Yan and Wang, Lihao and Shen, Yuning and Wang, Yiqun and Yuan, Huizhuo and Wu, Yue and Gu, Quanquan},
  booktitle={Forty-first International Conference on Machine Learning},
  year={2024}
}