Commit 42c1953

Add BaiduDisk links to datasets in README.

1 parent 60932e9 · commit 42c1953

File tree: 1 file changed, +9 −4 lines changed


README.md — Lines changed: 9 additions & 4 deletions
````diff
@@ -70,7 +70,12 @@ We not only show the selected extremely hard samples in the test sets but also s
 
 2. **Datasets preparation**
 
-   Download all the single train/test datasets from my [google-drive](https://drive.google.com/drive/folders/1jRJKv56QSa3gOp4w_64tVmzNMT_te-Kv?usp=sharing), or directly download the `datasets.zip` in the folder for all the data you need as following structure shows (COCO-SEG is too big, so you can download it separately). The file directory structure on my machine is as follows:
+   Download necessary datasets:
+   from my google-drive: [DUTS_class](https://drive.google.com/file/d/1SKaxMtIaLJk2CRdSbf-S0m6vMag1grmd/view?usp=drive_link), [COCO-9k](https://drive.google.com/file/d/1r6tRcSlvH8bXhaZD2VtGmHDxsXFl1v4z/view?usp=drive_link), [COCO-SEG](https://drive.google.com/file/d/1hkn2wP3uArctbst11XP4iKOCTa3tud5-/view?usp=drive_link), and [CoSOD_testsets](https://drive.google.com/file/d/1pTjxK4gu5kfVeR4Fdc1shZgk47FvybCe/view?usp=drive_link), or
+   from my BaiduDisk: [DUTS_class](https://pan.baidu.com/s/1xNUaar-bzS3apJpHQED9dg?pwd=PSWD), [COCO-9k](https://pan.baidu.com/s/1AEH593Sq1XGZHhgoT4fhfg?pwd=PSWD), [COCO-SEG](https://pan.baidu.com/s/1_1VmtOHBxDKp9qq55AARXA?pwd=PSWD), and [CoSOD_testsets](https://pan.baidu.com/s/136TGYw_dh7KtVAHw6Kgknw?pwd=PSWD).
+   The `CoSOD_testsets` contains CoCA, CoSOD3k and CoSal2015.
+
+   The file directory structure on my machine is as follows:
 
 ```
 +-- datasets
@@ -95,19 +100,19 @@ We not only show the selected extremely hard samples in the test sets but also s
 ...
 ```
 
-3. **Update the paths**
+4. **Update the paths**
 
    Replace all `/root/datasets/sod/GCoNet_plus` and `/root/codes/sod/GCoNet_plus` in this project to `/YOUR_PATH/datasets/sod/GCoNet_plus` and `/YOUR_PATH/codes/sod/GCoNet_plus`, respectively.
 
-4. **Training + Test + Evaluate + Select the Best**
+5. **Training + Test + Evaluate + Select the Best**
 
   `./gco.sh`
 
   If you can apply more GPUs on the DGX cluster, you can `./sub_by_id.sh` to submit multiple times for more stable results.
 
  If you have the OOM problem, plz decrease `batch_size` in `config.py`.
 
-5. **Adapt the settings of modules in config.py**
+6. **Adapt the settings of modules in config.py**
 
   You can change the weights of losses, try various *backbones* or use different *data augmentation* strategies. There are also some modules coded but not used in this work, like *adversarial training*, the *refiner* in [BASNet](https://openaccess.thecvf.com/content_CVPR_2019/papers/Qin_BASNet_Boundary-Aware_Salient_Object_Detection_CVPR_2019_paper.pdf), weighted *multiple output and supervision* used in [GCoNet](https://openaccess.thecvf.com/content/CVPR2021/papers/Fan_Group_Collaborative_Learning_for_Co-Salient_Object_Detection_CVPR_2021_paper.pdf) and [GICD](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123570443.pdf), etc.
 
````
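The path replacement in the "Update the paths" step can be scripted instead of done by hand. A minimal sketch: it demonstrates the two substitutions on a scratch `config.py` rather than the real repo (the scratch file's contents and the `NEW_PREFIX` value are placeholders; in the real project you would run the same `sed` over every file containing the old paths):

```shell
#!/bin/sh
# Demo: rewrite the two hard-coded /root prefixes from the README.
# NEW_PREFIX is a stand-in for your actual install prefix (/YOUR_PATH).
NEW_PREFIX="/YOUR_PATH"

workdir=$(mktemp -d)                                    # scratch copy for the demo
printf "data_root = '/root/datasets/sod/GCoNet_plus'\n"  > "$workdir/config.py"
printf "code_root = '/root/codes/sod/GCoNet_plus'\n"    >> "$workdir/config.py"

# Replace both prefixes in place; '|' as the sed delimiter avoids
# escaping every '/' in the paths.
sed -i \
  -e "s|/root/datasets/sod/GCoNet_plus|${NEW_PREFIX}/datasets/sod/GCoNet_plus|g" \
  -e "s|/root/codes/sod/GCoNet_plus|${NEW_PREFIX}/codes/sod/GCoNet_plus|g" \
  "$workdir/config.py"

cat "$workdir/config.py"
```

On the real tree, the same `sed` expressions can be applied via `find . -type f -name '*.py' -exec sed -i … {} +` once you have confirmed which files contain the old prefixes.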
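After downloading the archives from the dataset-preparation step, a quick sanity check can catch a missing or mis-unzipped folder before training. The folder names below come from the commit's download list (with `CoSOD_testsets` expanded into CoCA, CoSOD3k and CoSal2015 as the diff states); treating them as sibling directories under a single data root is an assumption, so adjust to your actual layout:

```shell
#!/bin/sh
# check_datasets DIR: report which expected dataset folders are absent
# under DIR. Prints one "missing: ..." line per absent folder, then a count.
check_datasets() {
  missing=0
  for d in DUTS_class COCO-9k COCO-SEG CoCA CoSOD3k CoSal2015; do
    if [ ! -d "$1/$d" ]; then
      echo "missing: $d"
      missing=$((missing + 1))
    fi
  done
  echo "$missing dataset folder(s) missing"
}

# Demo on a scratch root that contains only two of the six folders.
root=$(mktemp -d)
mkdir -p "$root/DUTS_class" "$root/CoCA"
check_datasets "$root"
```

Point `check_datasets` at your real datasets root (e.g. `/YOUR_PATH/datasets/sod/GCoNet_plus`) once everything is unzipped.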
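For the OOM note ("decrease `batch_size` in `config.py`"), the change can also be scripted. This sketch assumes the setting is a plain `batch_size = N` assignment; the actual format of GCoNet_plus's `config.py` is not shown in the diff, and the sample file and value here are hypothetical:

```shell
#!/bin/sh
# Hypothetical OOM workaround: halve a `batch_size = N` line in config.py.
# The demo operates on a scratch file with an assumed value of 48.
cfg=$(mktemp -d)/config.py
echo "batch_size = 48" > "$cfg"          # stand-in for the real config.py

# Extract the current value, halve it, and write it back.
old=$(sed -n 's/^batch_size *= *\([0-9]*\).*/\1/p' "$cfg")
new=$((old / 2))
sed -i "s/^batch_size *= *${old}/batch_size = ${new}/" "$cfg"

cat "$cfg"
```

Halving is only a starting point; keep reducing until training fits in GPU memory.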

0 commit comments
