[TGRS 2025] Code for "PointSAM: Pointly-Supervised Segment Anything Model for Remote Sensing Images"


Lans1ng/PointSAM



📢 Latest Updates

  • 2 Jan 2025: PointSAM has been accepted by TGRS and is now available here.
  • 8 Dec 2024: The complete code is released.
  • 20 Sep 2024: The arXiv version is released here.

🎨 Overview


🎮 Getting Started

1. Install Environment

To ensure compatibility, the Python version must not exceed 3.10. Follow these steps to set up your environment:

conda create --name pointsam python=3.10
conda activate pointsam
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118
git clone https://github.com/Lans1ng/PointSAM.git
cd PointSAM
pip install -r requirements.txt

Note: The CUDA version in the pip install command is specified as cu118 (CUDA 11.8). If your system uses a different CUDA version (e.g., CUDA 12.1), replace cu118 with the appropriate version tag (e.g., cu121).
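The mapping from a CUDA version to the pip index tag described in the note above is mechanical; a minimal sketch (the helper name cuda_pip_tag is ours, not part of this repo):

```python
def cuda_pip_tag(version: str) -> str:
    """Map a CUDA version string like '11.8' to a PyTorch wheel index tag like 'cu118'.

    The tag is simply 'cu' followed by the major and minor version digits.
    """
    major, minor = version.split(".")[:2]
    return f"cu{major}{minor}"


# e.g. cuda_pip_tag("12.1") -> "cu121", so the index URL becomes
# https://download.pytorch.org/whl/cu121
```

Note that a wheel for the exact tag must actually exist on download.pytorch.org; check the PyTorch installation page for the supported combinations.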

2. Prepare Dataset

WHU Building Dataset

HRSID Dataset

NWPU VHR-10 Dataset

For convenience, the necessary JSON annotations are included in this repo. You only need to download the corresponding images. Organize your dataset as follows:

data
├── WHU
│   ├── annotations
│   │   ├── WHU_building_train.json
│   │   ├── WHU_building_test.json
│   │   └── WHU_building_val.json
│   └── images
│       ├── train
│       │   ├── image
│       │   └── label
│       ├── val
│       │   ├── image
│       │   └── label
│       └── test
│           ├── image
│           └── label
├── HRSID
│   ├── Annotations
│   │   ├── all
│   │   ├── inshore
│   │   │   ├── inshore_test.json
│   │   │   └── inshore_train.json
│   │   └── offshore
│   └── Images
└── NWPU
    ├── Annotations
    │   ├── NWPU_instnaces_train.json
    │   └── NWPU_instnaces_val.json
    └── Images
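A quick way to catch layout mistakes before training is to check the expected paths programmatically. A minimal sketch (missing_paths and the path list are ours, not part of the repo; the list shows the WHU subset and can be extended for HRSID and NWPU):

```python
import os

# Relative paths the documented WHU layout expects under the data root.
EXPECTED_WHU = [
    "WHU/annotations/WHU_building_train.json",
    "WHU/annotations/WHU_building_test.json",
    "WHU/annotations/WHU_building_val.json",
    "WHU/images/train/image",
    "WHU/images/train/label",
    "WHU/images/val/image",
    "WHU/images/val/label",
    "WHU/images/test/image",
    "WHU/images/test/label",
]


def missing_paths(data_root, expected=EXPECTED_WHU):
    """Return the expected relative paths that are absent under data_root."""
    return [p for p in expected
            if not os.path.exists(os.path.join(data_root, *p.split("/")))]
```

Running missing_paths("data") after downloading should return an empty list; any entries it returns point to files or folders still to be placed.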

3. Download Checkpoints

Click the links below to download the checkpoint for the corresponding model type.

After downloading, move the models to the pretrain folder.

Note: In our project, only the vit-b model is used.

4. Training

For convenience, the scripts folder contains instructions for Supervised Training, Self-Training, and PointSAM on the NWPU VHR-10, WHU, and HRSID datasets.

Here’s an example of training PointSAM on the WHU dataset:

bash scripts/train_whu_pointsam.sh

5. Inference

Here’s an example of how to perform inference:

python inference.py --cfg <CONFIG_FILE_PATH> --out_dir <OUTPUT_DIR> --ckpt <CHECKPOINT_PATH>

Please replace <CONFIG_FILE_PATH>, <OUTPUT_DIR>, and <CHECKPOINT_PATH> with the actual paths.

Note: The generated results consist of four images arranged side by side:

  • The first image is the original input image.
  • The second image is the visualization of the GT mask.
  • The third image is the result obtained by direct testing through the original SAM.
  • The fourth image is the result obtained using the provided checkpoint.
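The side-by-side arrangement described above amounts to horizontal concatenation of equally sized panels. A minimal sketch of that composition (compose_panels is a hypothetical helper, not part of the repo's inference code):

```python
import numpy as np


def compose_panels(panels):
    """Concatenate equally sized H x W x 3 images side by side, left to right.

    panels: list of numpy arrays with identical shapes, e.g.
    [input_image, gt_mask_vis, original_sam_result, pointsam_result].
    """
    assert all(p.shape == panels[0].shape for p in panels), "panels must match in size"
    return np.concatenate(panels, axis=1)  # stack along the width axis
```

Four 512x512 panels would thus yield a single 512x2048 result image, matching the layout of the files inference.py writes.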

💡 Acknowledgement

🖊️ Citation

If you find this project useful in your research, please consider starring ⭐ and citing 📚:

@ARTICLE{10839471,
  author={Liu, Nanqing and Xu, Xun and Su, Yongyi and Zhang, Haojie and Li, Heng-Chao},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  title={PointSAM: Pointly-Supervised Segment Anything Model for Remote Sensing Images},
  year={2025},
  volume={63},
  number={},
  pages={1-15},
  doi={10.1109/TGRS.2025.3529031}
}
