Commit 49b062e

CodeCamp#139 [Feature] Support REFUGE dataset. (#2554)

## Motivation

Add REFUGE datasets

Old PR: #2420

---------

Co-authored-by: MengzhangLI <mcmong@pku.edu.cn>

1 parent 7ac0888 · commit 49b062e

File tree: 9 files changed, +391 −54 lines changed

configs/_base_/datasets/refuge.py

Lines changed: 90 additions & 0 deletions (new file)

```python
# dataset settings
dataset_type = 'REFUGEDataset'
data_root = 'data/REFUGE'
train_img_scale = (2056, 2124)
val_img_scale = (1634, 1634)
test_img_scale = (1634, 1634)
crop_size = (512, 512)

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', reduce_zero_label=False),
    dict(
        type='RandomResize',
        scale=train_img_scale,
        ratio_range=(0.5, 2.0),
        keep_ratio=True),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='PackSegInputs')
]
val_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=val_img_scale, keep_ratio=True),
    # add loading annotation after ``Resize`` because ground truth
    # does not need to do resize data transform
    dict(type='LoadAnnotations', reduce_zero_label=False),
    dict(type='PackSegInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=test_img_scale, keep_ratio=True),
    # add loading annotation after ``Resize`` because ground truth
    # does not need to do resize data transform
    dict(type='LoadAnnotations', reduce_zero_label=False),
    dict(type='PackSegInputs')
]
img_ratios = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
tta_pipeline = [
    dict(type='LoadImageFromFile', backend_args=dict(backend='local')),
    dict(
        type='TestTimeAug',
        transforms=[
            [
                dict(type='Resize', scale_factor=r, keep_ratio=True)
                for r in img_ratios
            ],
            [
                dict(type='RandomFlip', prob=0., direction='horizontal'),
                dict(type='RandomFlip', prob=1., direction='horizontal')
            ], [dict(type='LoadAnnotations')], [dict(type='PackSegInputs')]
        ])
]
train_dataloader = dict(
    batch_size=4,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='InfiniteSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(
            img_path='images/training', seg_map_path='annotations/training'),
        pipeline=train_pipeline))
val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(
            img_path='images/validation',
            seg_map_path='annotations/validation'),
        pipeline=val_pipeline))
test_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(
            img_path='images/test', seg_map_path='annotations/test'),
        pipeline=val_pipeline))

val_evaluator = dict(type='IoUMetric', iou_metrics=['mDice'])
test_evaluator = val_evaluator
```
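For context on how this new base config is consumed: a model config would normally pull it in through `_base_` inheritance, the standard mmsegmentation/MMEngine pattern. The sketch below is illustrative only; the file name, the choice of UNet bases, and the 3-class head are assumptions, not files added by this commit.

```python
# Hypothetical model config, e.g. configs/unet/unet-fcn_4xb4-40k_refuge-512x512.py
# (name and model choice are illustrative; only refuge.py is added by this commit)
_base_ = [
    '../_base_/models/fcn_unet_s5-d16.py',  # an existing segmentor base in mmsegmentation
    '../_base_/datasets/refuge.py',         # the dataset settings introduced here
    '../_base_/default_runtime.py',
    '../_base_/schedules/schedule_40k.py'
]
crop_size = (512, 512)  # matches crop_size in refuge.py
data_preprocessor = dict(size=crop_size)
model = dict(
    data_preprocessor=data_preprocessor,
    # assuming REFUGE's background / optic disc / optic cup labelling (3 classes);
    # any auxiliary head in the chosen base would need the same num_classes change
    decode_head=dict(num_classes=3))
```

Training with such a config then goes through the usual `tools/train.py` entry point, as sketched at the end of the dataset-preparation section below.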

docs/en/user_guides/2_dataset_prepare.md

Lines changed: 52 additions & 5 deletions

````diff
@@ -145,6 +145,15 @@ mmsegmentation
 │   │   ├── ann_dir
 │   │   │   ├── train
 │   │   │   ├── val
+│   ├── REFUGE
+│   │   ├── images
+│   │   │   ├── training
+│   │   │   ├── validation
+│   │   │   ├── test
+│   │   ├── annotations
+│   │   │   ├── training
+│   │   │   ├── validation
+│   │   │   ├── test
 ```
 
 ### Cityscapes
@@ -330,7 +339,7 @@ For Potsdam dataset, please run the following command to download and re-organiz
 python tools/dataset_converters/potsdam.py /path/to/potsdam
 ```
 
-In our default setting, it will generate 3,456 images for training and 2,016 images for validation.
+In our default setting, it will generate 3456 images for training and 2016 images for validation.
 
 ### ISPRS Vaihingen
 
@@ -383,7 +392,7 @@ You may need to follow the following structure for dataset preparation after dow
 python tools/dataset_converters/isaid.py /path/to/iSAID
 ```
 
-In our default setting (`patch_width`=896, `patch_height`=896, `overlap_area`=384), it will generate 33,978 images for training and 11,644 images for validation.
+In our default setting (`patch_width`=896, `patch_height`=896, `overlap_area`=384), it will generate 33978 images for training and 11644 images for validation.
 
 ## LIP(Look Into Person) dataset
 
@@ -436,7 +445,7 @@ cd ./RawData/Training
 
 Then create `train.txt` and `val.txt` to split dataset.
 
-According to TransUNet, the following is the data set division.
+According to TransUnet, the following is the data set division.
 
 train.txt
 
@@ -500,7 +509,45 @@ Then, use this command to convert synapse dataset.
 python tools/dataset_converters/synapse.py --dataset-path /path/to/synapse
 ```
 
-In our default setting, it will generate 2,211 2D images for training and 1,568 2D images for validation.
-
 Noted that MMSegmentation default evaluation metric (such as mean dice value) is calculated on 2D slice image,
 which is not comparable to results of 3D scan in some paper such as [TransUNet](https://arxiv.org/abs/2102.04306).
+
+### REFUGE
+
+Register in [REFUGE Challenge](https://refuge.grand-challenge.org) and download [REFUGE dataset](https://refuge.grand-challenge.org/REFUGE2Download).
+
+Then, unzip `REFUGE2.zip` and the contents of original datasets include:
+
+```none
+├── REFUGE2
+│   ├── REFUGE2
+│   │   ├── Annotation-Training400.zip
+│   │   ├── REFUGE-Test400.zip
+│   │   ├── REFUGE-Test-GT.zip
+│   │   ├── REFUGE-Training400.zip
+│   │   ├── REFUGE-Validation400.zip
+│   │   ├── REFUGE-Validation400-GT.zip
+│   ├── __MACOSX
+```
+
+Please run the following command to convert REFUGE dataset:
+
+```shell
+python tools/convert_datasets/refuge.py --raw_data_root=/path/to/refuge/REFUGE2/REFUGE2
+```
+
+The script will make directory structure below:
+
+```none
+│   ├── REFUGE
+│   │   ├── images
+│   │   │   ├── training
+│   │   │   ├── validation
+│   │   │   ├── test
+│   │   ├── annotations
+│   │   │   ├── training
+│   │   │   ├── validation
+│   │   │   ├── test
+```
+
+It includes 400 images for training, 400 images for validation and 400 images for testing which is the same as REFUGE 2018 dataset.
````
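Putting the documented steps together, an end-to-end preparation could look like the sketch below. Only the converter command comes from the docs added in this commit; the unzip location, the `data/REFUGE` destination, and the training config name are assumptions.

```shell
# Sketch only: paths and the final config name are illustrative.
cd mmsegmentation
# 1. Unzip the archive downloaded from the REFUGE2 challenge site.
unzip REFUGE2.zip -d /path/to/refuge
# 2. Convert it into the images/annotations layout expected by REFUGEDataset
#    (command as documented above).
python tools/convert_datasets/refuge.py --raw_data_root=/path/to/refuge/REFUGE2/REFUGE2
# 3. Check that data/REFUGE now holds images/{training,validation,test}
#    and annotations/{training,validation,test}, matching data_root in refuge.py.
ls data/REFUGE/images data/REFUGE/annotations
# 4. Train with a config that inherits configs/_base_/datasets/refuge.py
#    (config name is hypothetical).
python tools/train.py configs/unet/unet-fcn_4xb4-40k_refuge-512x512.py
```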
