
[ICLR 2024] Domain-Agnostic Molecular Generation with Chemical Feedback

zjunlp/MolGen


Domain-Agnostic Molecular Generation with Chemical Feedback

📃 Paper • 🤗 Model • 🔬 Space



📕 Requirements

To run the code, you can configure the dependencies by restoring our environment:

conda env create -f environment.yaml

and then:

conda activate my_env

📚 Resource Download

You can download the pre-trained and fine-tuned models via Hugging Face: MolGen-large and MolGen-large-opt.

You can also download the models using the following link: https://drive.google.com/drive/folders/1Eelk_RX1I26qLa9c4SZq6Tv-AAbDXgrW?usp=sharing

Moreover, the dataset used for downstream tasks can be found here.

The expected structure of files is:

moldata
├── checkpoint
│   ├── molgen.pkl              # pre-trained model
│   ├── syn_qed_model.pkl       # fine-tuned model for QED optimization on synthetic data
│   ├── syn_plogp_model.pkl     # fine-tuned model for p-logP optimization on synthetic data
│   ├── np_qed_model.pkl        # fine-tuned model for QED optimization on natural product data
│   └── np_plogp_model.pkl     # fine-tuned model for p-logP optimization on natural product data
├── finetune
│   ├── np_test.csv             # natural product test data
│   ├── np_train.csv            # natural product train data
│   ├── plogp_test.csv          # synthetic test data for p-logP optimization
│   ├── qed_test.csv            # synthetic test data for QED optimization
│   └── zinc250k.csv            # synthetic train data
├── generate                    # generated molecules
├── output                      # molecule candidates
└── vocab_list
    └── zinc.npy                # SELFIES alphabet
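Before running anything, it can help to confirm the layout above is in place. The sketch below is ours, not part of the repo: the helper name `check_moldata` and the exact required-path list are assumptions, and the checkpoint `.pkl` files are left out because they are downloaded or produced later.

```python
# Sketch (not part of the repo): verify the `moldata` layout described above.
from pathlib import Path

REQUIRED = [
    "checkpoint",
    "finetune/np_test.csv",
    "finetune/np_train.csv",
    "finetune/plogp_test.csv",
    "finetune/qed_test.csv",
    "finetune/zinc250k.csv",
    "generate",
    "output",
    "vocab_list/zinc.npy",
]

def check_moldata(root: str) -> list:
    """Return the relative paths missing under `root` (empty list = OK)."""
    base = Path(root)
    return [p for p in REQUIRED if not (base / p).exists()]
```

Calling `check_moldata("moldata")` before preprocessing catches a misplaced dataset early instead of partway through a run.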

🚀 How to run

  • Fine-tune

    • First, preprocess the fine-tuning dataset by generating candidate molecules with our pre-trained model. The preprocessed data will be stored in the folder output.

      cd MolGen
      bash preprocess.sh

    • Then utilize the self-feedback paradigm. The fine-tuned model will be stored in the folder checkpoint.

      bash finetune.sh
  • Generate

    To generate molecules, run this script. Please specify the checkpoint_path to determine whether to use the pre-trained model or the fine-tuned model.

    cd MolGen
    bash generate.sh
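Choosing the checkpoint_path amounts to a lookup over the files in moldata/checkpoint. A minimal sketch of that mapping, assuming the file names from the directory layout above (the helper and the task keys are our own illustration, not part of the repo):

```python
# Hypothetical helper: map a task name to the checkpoint file to pass
# as checkpoint_path. File names follow the `moldata/checkpoint` layout
# shown earlier; "pretrained" selects the base MolGen model.
import os

CHECKPOINTS = {
    "pretrained": "molgen.pkl",
    "syn_qed": "syn_qed_model.pkl",
    "syn_plogp": "syn_plogp_model.pkl",
    "np_qed": "np_qed_model.pkl",
    "np_plogp": "np_plogp_model.pkl",
}

def checkpoint_path(task: str, root: str = "moldata/checkpoint") -> str:
    """Return the checkpoint path for `task`, or raise on an unknown task."""
    try:
        return os.path.join(root, CHECKPOINTS[task])
    except KeyError:
        raise ValueError(
            f"unknown task {task!r}; choose from {sorted(CHECKPOINTS)}"
        )
```

For example, `checkpoint_path("syn_plogp")` points at the model fine-tuned for p-logP optimization on synthetic data.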

🥽 Experiments

We conduct experiments on well-known benchmarks to confirm MolGen's optimization capabilities, encompassing penalized logP, QED, and molecular docking properties. For detailed experimental settings and analysis, please refer to our paper.

  • MolGen captures real-world molecular distributions

  • MolGen mitigates molecular hallucinations

Targeted molecule discovery


Constrained molecular optimization


Citation

If you use or extend our work, please cite the paper as follows:

@inproceedings{fang2023domain,
  author    = {Yin Fang and
               Ningyu Zhang and
               Zhuo Chen and
               Xiaohui Fan and
               Huajun Chen},
  title     = {Domain-Agnostic Molecular Generation with Chemical Feedback},
  booktitle = {{ICLR}},
  publisher = {OpenReview.net},
  year      = {2024},
  url       = {https://openreview.net/pdf?id=9rPyHyjfwP}
}


