Bilateral Weighted Adaptive Local Similarity Measure for Registration in Neurosurgery


Abstract

Image-guided neurosurgery involves the display of MRI-based preoperative plans in an intraoperative reference frame. Interventional MRI (iMRI) can serve as a reference for non-rigid registration based propagation of preoperative MRI. Structural MRI images exhibit spatially varying intensity relationships, which can be captured by a local similarity measure such as the local normalized correlation coefficient (LNCC). However, LNCC weights local neighborhoods using a static spatial kernel and includes voxels from beyond a tissue or resection boundary in a neighborhood centered inside the boundary. We modify LNCC to use locally adaptive weighting inspired by bilateral filtering and evaluate it extensively in a numerical phantom study, a clinical iMRI study and a segmentation propagation study. The modified measure enables increased registration accuracy near tissue and resection boundaries.


1 Introduction

Image-guided neurosurgery involves the display of preoperative anatomy and surgical plans in an intraoperative reference frame to increase the accuracy of pathological tissue resection and to reduce damage to surrounding structures. Preoperative MRI can reveal information such as nerve fiber tracts and brain activation areas. Interventional MRI (iMRI) can image intraoperative deformations due to cerebrospinal fluid (CSF) drainage, gravity and edema (collectively, brain shift) [9]. Non-rigid registration of preoperative MRI to iMRI enables surgical guidance using propagated preoperative plans [2].

Correspondences missing due to resection present a challenge to registration. Daga et al. [2] estimated brain shift intraoperatively by masking out voxels lying outside a brain mask. However, automated brain extraction, such as with FSL-BET [11], can be inaccurate near the resection cavity due to fluid accumulation and surgical gauze in the cavity. Another challenge to registration arises from contrast changes due to CSF drainage, bleeding, edema, the MRI bias field and the low signal-to-noise ratio (SNR) of iMRI.

We consider registration of a T1-weighted (T1w) image pair. The local normalized correlation coefficient (LNCC, [1]) captures a local affine intensity relationship. LNCC involves smoothing based on convolution with a Gaussian kernel, which includes voxels located outside a tissue or resection boundary in the statistics of a local neighborhood centered inside the boundary. This potentially reduces the matching specificity near the resection margin.

The bilateral filter was introduced for edge-preserving image smoothing; it weights the voxels in a local neighborhood based on their spatial distance and intensity difference from the central voxel [13]. Bilateral filtering has been used for locally adaptive patch-based similarity cost evaluation in a stereo reconstruction problem [14]: because pixels on a surface tended to have similar colors, the estimated disparity map became more accurate. In a T1w image pair, voxels from the same tissue tend to have similar intensities, and we suggest that bilateral weighting can lead to more accurate brain shift estimation.

We propose to introduce adaptive bilateral weighting into LNCC calculation as illustrated in Fig. 1. We evaluated the modified measure in registration experiments on three datasets and found an improvement in registration accuracy.

2 Methods

2.1 Bilateral Adaptively Weighted LNCC Similarity

LNCC was first used in the context of image registration in [1]. Let R be the reference image and F the floating image in the same coordinate space; then LNCC for the local neighborhood of a point \({\varvec{v}}\) is defined as

$$\begin{aligned} {\mathrm {LNCC}_{{\varvec{v}}}(R,F)}^2 = \frac{ {\langle {R,F}\rangle _{{\varvec{v}}}}^2 }{ \langle {R,R}\rangle _{{\varvec{v}}} \cdot \langle {F,F}\rangle _{{\varvec{v}}}}, \end{aligned}$$
(1)

where \(\langle {R,R}\rangle _{{\varvec{v}}}\) and \(\langle {F,F}\rangle _{{\varvec{v}}}\) are the local variances and \(\langle {R,F}\rangle _{{\varvec{v}}}\) is the local covariance. The latter is defined as \( \langle {R,F}\rangle _{{\varvec{v}}} = \overline{R \cdot F}_{{\varvec{v}}} - \overline{R}_{{\varvec{v}}} \cdot \overline{F}_{{\varvec{v}}}, \) where \(\overline{R}_{{\varvec{v}}}\) and \(\overline{F}_{{\varvec{v}}}\) are the respective local means. The local variances are defined analogously. The local mean for R is defined as \( \overline{R}_{{\varvec{v}}} = \frac{1}{N} \sum _{{\varvec{x}}} { R({\varvec{v}}-{\varvec{x}}) w_{{\varvec{v}}}({\varvec{x}}) }, \) where N is the number of voxels in the neighborhood of \({\varvec{v}}\), \({\varvec{x}}\) is the offset relative to \({\varvec{v}}\) and \(w_{{\varvec{v}}}({\varvec{x}})\) are the weights, here given by a generic term that depends on \({\varvec{v}}\). The local mean for F is defined analogously.

LNCC uses a Gaussian kernel for the local weights, \( w_{{\varvec{v}}}({\varvec{x}})= G_{\beta }({\varvec{x}})= \frac{1}{\sqrt{2\pi }\beta } \mathrm {exp} \left( {-\frac{{|{\varvec{x}}|}^2}{2\beta ^2}} \right) \!, \) where \(\beta \) controls the neighborhood's size (the kernel is negligible for \(|{\varvec{x}}| > 3\beta \)). Since \(G_{\beta }({\varvec{x}})\) does not depend on \({\varvec{v}}\), the local mean can be implemented as a convolution, \( \overline{I}_{{\varvec{v}}} = \left( G_{\beta } *I \right) ({\varvec{v}}) \).
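For illustration, the following is a minimal Python sketch of this convolution-based LNCC computation using SciPy's Gaussian filter; the function name lncc_map, the use of voxel rather than mm units for \(\beta \), and the eps stabilizer are our own choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lncc_map(R, F, beta_vox=5.0, eps=1e-8):
    """Squared LNCC at every voxel, with Gaussian-weighted local statistics.

    R, F     : reference and floating images as float arrays of the same shape.
    beta_vox : Gaussian kernel width (the paper's beta, here in voxel units).
    """
    smooth = lambda img: gaussian_filter(img, sigma=beta_vox)
    mean_R, mean_F = smooth(R), smooth(F)
    # Local (co)variances: <R,F>_v = mean(R*F) - mean(R)*mean(F), etc.
    cov_RF = smooth(R * F) - mean_R * mean_F
    var_R = smooth(R * R) - mean_R ** 2
    var_F = smooth(F * F) - mean_F ** 2
    return cov_RF ** 2 / (var_R * var_F + eps)
```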

Fig. 1. (a) T1-weighted reference image and (b) intensity-based weights for a point (blue cross). (c) T1-weighted floating image and (d) intensity-based weights for the point. (e) Final weights based on (b), (d) and distance from the point. (Color figure online)

We introduce bilateral adaptive weighting and refer to the modified measure as LNCC-AW. A bilateral-filtered smoothing of an arbitrary image I is

$$\begin{aligned} \overline{I}_{{\varvec{v}}}^{\mathrm {bilat.}} = \frac{1}{N} \sum \limits _{{\varvec{x}}} { I({\varvec{v}}-{\varvec{x}}) \cdot G_\beta ({\varvec{x}}) \cdot G_\alpha \left( I({\varvec{v}}-{\varvec{x}}) - I({\varvec{v}}) \right) }, \end{aligned}$$
(2)

where \(G_{\alpha }(d)\) is a range kernel, i.e. a kernel for the intensity difference \(d=I({\varvec{v}}-{\varvec{x}})-I({\varvec{v}})\). The edge-preserving property arises because voxels beyond an intensity rise or drop are excluded. Given images R and F to register, we guide the adaptive weighting by both images, as in [14], using the composite term

$$\begin{aligned} w_{{\varvec{v}}}({\varvec{x}}) = G_\beta ({\varvec{x}}) \cdot G_\alpha \left( R({\varvec{v}}-{\varvec{x}})-R({\varvec{v}}) \right) \cdot G_\alpha \left( F({\varvec{v}}-{\varvec{x}})-F({\varvec{v}}) \right) \end{aligned}$$
(3)

as illustrated in Fig. 1. Since the weights \(w_{{\varvec{v}}}({\varvec{x}})\) vary spatially, we can no longer implement the local mean as a convolution or take advantage of kernel separability.
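To make the spatially varying computation concrete, the unoptimized Python sketch below evaluates LNCC-AW at a single centre voxel. The function name lncc_aw_at, the use of voxel units for \(\beta \), and the normalization of the weights to sum to one are our own assumptions for illustration; Eqs. 1–3 leave these implementation details open.

```python
import numpy as np

def lncc_aw_at(R, F, v, beta=5.0, alpha=0.10, radius=None, nu=2.0):
    """Adaptively weighted squared LNCC (Eq. 1 with Eq. 3 weights) at centre voxel v.

    v is a tuple of integer voxel indices. The neighbourhood is truncated at
    3*beta voxels, where the Gaussian spatial kernel becomes negligible.
    """
    if radius is None:
        radius = int(np.ceil(3 * beta))
    sl = tuple(slice(max(c - radius, 0), c + radius + 1) for c in v)
    Rn, Fn = R[sl], F[sl]                                  # local neighbourhoods
    # Spatial offsets x relative to v for every voxel in the neighbourhood.
    grids = np.meshgrid(*[np.arange(s.start, s.stop) - c for s, c in zip(sl, v)],
                        indexing="ij")
    dist2 = sum(g.astype(float) ** 2 for g in grids)
    spatial = np.exp(-dist2 / (2 * beta ** 2))             # G_beta(x), constant dropped
    # Student's t range kernel of Eq. 4 (normalization constant cancels here).
    t_kernel = lambda d: (1 + d ** 2 / (nu * alpha ** 2)) ** (-(nu + 1) / 2)
    w = spatial * t_kernel(Rn - R[v]) * t_kernel(Fn - F[v])  # composite weights, Eq. 3
    w = w / w.sum()                                          # weights normalized to sum to one
    mR, mF = (w * Rn).sum(), (w * Fn).sum()
    cov = (w * Rn * Fn).sum() - mR * mF
    varR = (w * Rn * Rn).sum() - mR ** 2
    varF = (w * Fn * Fn).sum() - mF ** 2
    return cov ** 2 / (varR * varF + 1e-8)
```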

The low SNR of iMRI can cause the weights of adjacent neighborhoods in homogeneous areas to vary along with the varying intensity of the central voxels. To reduce spatial inconsistencies in similarity values, we replace the Gaussian range kernel used in [13, 14] with a kernel shaped as a Student's t-distribution, which down-weights rather than suppresses differing intensities:

$$\begin{aligned} G_\alpha (d)=\frac{ \mathrm {\Gamma }(\frac{\nu +1}{2})}{\sqrt{\nu \pi \alpha ^2} \mathrm {\Gamma }(\frac{\nu }{2})} {\left( 1+\frac{d^2}{\nu \alpha ^2} \right) }^{-\frac{\nu +1}{2} }. \end{aligned}$$
(4)

We selected \(\nu =2\) as it has a gradual drop-off and provides a trade-off between boundary preservation and robustness to noise. For \(\alpha =\infty \), the weighting reduces to the locally non-varying (standard LNCC) case.
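The difference between the two range kernels can be seen numerically; the short sketch below compares the relative weight each kernel assigns to a voxel whose intensity differs from the centre by \(3\alpha \) (the function names and the example values are ours).

```python
import numpy as np
from math import gamma, pi, sqrt

def student_t_kernel(d, alpha, nu=2.0):
    """Range kernel of Eq. 4: Student's t density with scale alpha and nu degrees of freedom."""
    norm = gamma((nu + 1) / 2) / (sqrt(nu * pi * alpha ** 2) * gamma(nu / 2))
    return norm * (1 + d ** 2 / (nu * alpha ** 2)) ** (-(nu + 1) / 2)

def gaussian_kernel(d, alpha):
    """Gaussian range kernel used in the classical bilateral filter [13]."""
    return np.exp(-d ** 2 / (2 * alpha ** 2)) / (sqrt(2 * pi) * alpha)

d, a = 0.3, 0.1   # intensity difference of 3*alpha
print(student_t_kernel(d, a) / student_t_kernel(0.0, a))   # ~0.08: gradual down-weighting
print(gaussian_kernel(d, a) / gaussian_kernel(0.0, a))     # ~0.01: near suppression
```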

2.2 Registration Using a Discrete Optimization Framework

The derivation of the analytical gradient of the similarity measure (for instance with respect to a voxel-based deformation field) for use in gradient-based non-rigid registration schemes becomes complicated under adaptive weighting, because the gradient depends on the local weights, which in turn depend on the deformation. However, [4] reformulated non-rigid registration as a discrete Markov Random Field (MRF) optimization problem, for which the similarity-measure gradient is not needed. We employ the proposed measure in the related discrete optimization scheme of [6]. A grid \(\mathcal {P}\) of B-spline transformation control points \(p\in \mathcal {P}\) with positions \({\varvec{c}}_p\) is overlaid onto the reference image. The control point displacements in the floating image are \({\varvec{u}}_p=[u_p,v_p,w_p]\) with discrete-valued components. For efficiency, a minimum spanning tree \(\mathcal {N}\) of the most relevant edges \((p,q)\in \mathcal {N}\) is optimized rather than a full MRF. Displacements are sought that minimize the energy

$$\begin{aligned} \sum \limits _{p \in \mathcal {P}}{\left( 1-\Vert \text {LNCC}_{{\varvec{c}}_{\varvec{p}}}(R(\varvec{\xi }),F(\varvec{\xi }+{\varvec{u}}_{\varvec{p}}))\Vert \right) } +\lambda \sum \limits _{(p,q) \in \mathcal {N}}{\frac{{\Vert {\varvec{u}}_{\varvec{p}}-{\varvec{u}}_{\varvec{q}}\Vert }^2}{\Vert {\varvec{c}}_{\varvec{p}}-{\varvec{c}}_{\varvec{q}}\Vert }.} \end{aligned}$$
(5)
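As an illustration of how a candidate labelling is scored under Eq. 5, the sketch below evaluates the energy for a given assignment of discrete displacements. The data costs are assumed to be precomputed from the images, and the function and variable names (tree_energy, data_cost, lam for the regularisation weight \(\lambda \)) are ours; the dynamic-programming optimization over the tree [6] is not shown.

```python
import numpy as np

def tree_energy(data_cost, edges, labels, disp, ctrl_pos, lam):
    """Energy of Eq. 5 for one discrete labelling of the control-point grid.

    data_cost : (P, L) array, data_cost[p, l] = 1 - |LNCC| at control point p
                for candidate displacement l (precomputed from the images).
    edges     : list of (p, q) index pairs forming the minimum spanning tree.
    labels    : (P,) integer labels, one candidate displacement per control point.
    disp      : (L, 3) array of candidate displacement vectors in mm.
    ctrl_pos  : (P, 3) control-point positions in mm.
    lam       : regularisation weight (coefficient of the second term).
    """
    e = data_cost[np.arange(len(labels)), labels].sum()      # data term
    for p, q in edges:                                       # pairwise regularisation term
        du = disp[labels[p]] - disp[labels[q]]
        e += lam * du.dot(du) / np.linalg.norm(ctrl_pos[p] - ctrl_pos[q])
    return e
```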

3 Experiments

3.1 Patch Matching on 2D Synthetic Phantom

We compare matching accuracy on two 2D synthetic phantoms. We place a fixed patch representing a local neighborhood in the reference image and a moving patch in the floating image, and plot the similarity profiles of LNCC and LNCC-AW as functions of the patch displacement. The first phantom, a contrast-enhanced lesion near a resection, is shown in Fig. 2(a–d). The similarity profile for LNCC has only a mild maximum at the true zero displacement because voxels from the resected area are included in the local statistics. The similarity profile for LNCC-AW has a clear maximum because the voxels in the resected area are down-weighted. The second phantom, the medial longitudinal fissure, is shown in Fig. 2(e–h). The reference and floating image are the same axial slice from the BrainWeb database. The patch is centered next to the medial longitudinal fissure, which contains dark voxels in the CSF and the falx cerebri. The similarity profile for LNCC has a band of false matches due to voxels included from the fissure. The similarity profile for LNCC-AW has a unique maximum at the true zero displacement because these voxels are down-weighted.
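The profile plots can be reproduced with a simple sweep of the moving patch. The sketch below is our own minimal 2D version, in which the hypothetical callback weights_fn returns either a fixed Gaussian patch weighting (LNCC) or the composite weights of Eq. 3 computed from the two patches (LNCC-AW); patches are assumed to stay within the image.

```python
import numpy as np

def weighted_ncc2(Rp, Fp, w):
    """Squared NCC of two equal-size patches under non-negative weights w."""
    w = w / w.sum()
    mR, mF = (w * Rp).sum(), (w * Fp).sum()
    cov = (w * Rp * Fp).sum() - mR * mF
    var = ((w * Rp * Rp).sum() - mR ** 2) * ((w * Fp * Fp).sum() - mF ** 2)
    return cov ** 2 / (var + 1e-8)

def similarity_profile(R, F, centre, half, weights_fn, max_disp=10):
    """2D profile of patch similarity versus integer displacement of the moving patch."""
    cy, cx = centre
    Rp = R[cy - half:cy + half + 1, cx - half:cx + half + 1]   # fixed patch
    prof = np.zeros((2 * max_disp + 1, 2 * max_disp + 1))
    for i, dy in enumerate(range(-max_disp, max_disp + 1)):
        for j, dx in enumerate(range(-max_disp, max_disp + 1)):
            Fp = F[cy + dy - half:cy + dy + half + 1,
                   cx + dx - half:cx + dx + half + 1]          # moving patch
            prof[i, j] = weighted_ncc2(Rp, Fp, weights_fn(Rp, Fp))
    return prof
```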

Fig. 2. 2D numerical phantoms. (a–d) A contrast-enhanced lesion near a resection. (e–h) Medial longitudinal fissure. (a, e) Reference image; the outline shows the fixed patch. (b, f) Floating image; the inner and outer outlines show the moving patch at zero and maximum displacement, respectively. (c, g) Similarity profile of LNCC as a function of displacement. (d, h) The same for LNCC-AW.

3.2 Recovery of a 3D Synthetic Deformation

We perform a registration experiment on a BrainWeb dataset. The reference image is made by inserting a synthetic resection cavity in the right temporal lobe. The floating image is resampled from the BrainWeb image using B-spline interpolation and a synthetic sinusoidal deformation field (period 100 mm in all directions, displacement amplitude 4 mm). The voxel intensities are normalized to the 0–1 range. We use 5 discrete registration grid levels with grid spacings of 7, 6, 5, 4 and 3 voxels, search radii of 6, 5, 4, 3 and 2 control-point grid spacings, and discretization steps of 5, 4, 3, 2 and 1 voxels. The floating image is updated between levels using B-spline interpolation. We run the scheme once for LNCC (\(\beta = 5\,\mathrm {mm}\)) and twice for LNCC-AW (\(\beta =5\,\mathrm {mm}\) with \(\alpha = 0.30\) and \(\alpha = 0.10\)).

We quantify registration accuracy using landmarks found in the reference image using 3D-SIFT [12]. We include 43 landmarks within 2 cm of the resection margin. We propagate the landmarks using the true and recovered deformations. The target registration error (TRE) is shown in Fig. 3(c). TRE for LNCC-AW is significantly lower than for LNCC, for both \(\alpha = 0.30\) and \(\alpha = 0.10\) (paired t-tests, \(p<0.001\)). The log Jacobian determinant maps for the true and recovered deformations are shown in Fig. 3(d–g). The deformations recovered using LNCC-AW follow the true deformation more closely than those recovered using LNCC.
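The landmark-based evaluation amounts to comparing two propagations of the same points. Below is a minimal sketch assuming one plausible per-component form of the sinusoidal field (the paper does not give its exact expression) and hypothetical landmark positions.

```python
import numpy as np

def sinusoidal_displacement(points_mm, period=100.0, amplitude=4.0):
    """One plausible synthetic deformation: per-component sinusoidal displacement in mm."""
    return amplitude * np.sin(2 * np.pi * points_mm / period)

def target_registration_error(landmarks_mm, true_field, recovered_field):
    """TRE per landmark: distance between the true and recovered propagated positions."""
    true_pos = landmarks_mm + true_field(landmarks_mm)
    recov_pos = landmarks_mm + recovered_field(landmarks_mm)
    return np.linalg.norm(true_pos - recov_pos, axis=1)

# Example: TRE if the registration recovered no deformation at all (identity field).
pts = np.random.default_rng(0).uniform(20, 160, size=(43, 3))   # hypothetical landmark positions (mm)
tre = target_registration_error(pts, sinusoidal_displacement, lambda p: np.zeros_like(p))
print(tre.mean())   # on the order of the 4 mm displacement amplitude
```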

Fig. 3. Axial view of the 3D BrainWeb-based phantom. (a) Reference image (inserted resection). (b) Floating image (synthetic deformation). (c) Target registration error. (d) Map of the log Jacobian determinant for the ground-truth deformation (forward field). (e) The same map for fields recovered using LNCC, (f) LNCC-AW with \(\alpha =0.30\) and (g) LNCC-AW with \(\alpha =0.10\).

3.3 Evaluation on an iMRI Surgical Dataset

We validate the measure on 12 cases of anterior temporal lobe resection. The dataset is described in [2]. We skull-strip the pre- and intraoperative images, normalize the 1st–99th intensity percentile range linearly to 0–1, crop the intraoperative image to contain only the brain, resample the intraoperative image to a resolution of \(1.1\times 1.1\times 1.1\) mm and register the preoperative image affinely to the intraoperative reference [8]. We apply the bilateral filter of Eq. 2 to the reference and floating image pair to generate a guidance image pair, using settings (\(\beta = 2.2\) mm, \(\alpha =0.03\)) that we found to produce mild smoothing in homogeneous areas whilst preserving edges. We perform one non-rigid registration for LNCC (\(\beta = 5.5\,\mathrm {mm}\)) and two non-rigid registrations for LNCC-AW (\(\beta = 5.5\,\mathrm {mm}\) with \(\alpha = 0.30\) and \(\alpha = 0.10\)), using the guidance image pair to construct the weights in Eq. 3. The discrete optimization parameters are identical to those in Sect. 3.2 (in voxels). The registration takes approx. 10 h per subject using 4 threads on a computing cluster node.
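A sketch of the intensity normalization and isotropic resampling steps, assuming SciPy's zoom for the resampling and clipping of intensities outside the percentile range (the paper does not state how out-of-range values are handled); function names are ours.

```python
import numpy as np
from scipy.ndimage import zoom

def normalize_percentiles(img, lo=1, hi=99):
    """Linearly map the 1st-99th intensity percentiles to [0, 1], clipping outside values."""
    p_lo, p_hi = np.percentile(img, [lo, hi])
    return np.clip((img - p_lo) / (p_hi - p_lo), 0.0, 1.0)

def resample_isotropic(img, spacing_mm, new_spacing=1.1, order=3):
    """Resample a 3D volume to isotropic voxels using cubic B-spline interpolation."""
    factors = [s / new_spacing for s in spacing_mm]
    return zoom(img, zoom=factors, order=order)
```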

For each case, we annotate 50–60 landmark pairs in the pre- and intraoperative images a few cm from the resection margin. We propagate the landmarks using the recovered deformations. The mean TRE for all cases is shown in Fig. 4(a) and is significantly lower for registrations based on LNCC-AW with \(\alpha = 0.30\) (paired t-test, \(p = 0.0236\)) and LNCC-AW with \(\alpha = 0.10\) (\(p = 0.0054\)) than for registrations based on LNCC. The effect size is below the image resolution, potentially because few reliably identifiable landmark pairs exist near the resection margin. To evaluate the smoothness of the recovered deformations, we assess the absolute log Jacobian determinant maps in a region of interest (ROI) in the brain less than 2 cm from the base of the resection cavity (located in iMRI). The means within the ROI are shown in Fig. 4(b) and are significantly lower for LNCC-AW with \(\alpha = 0.30\) (paired t-test, \(p = 0.0133\)) and for LNCC-AW with \(\alpha = 0.10\) (\(p<0.001\)) than for LNCC.
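The smoothness metric can be computed from a dense displacement field via finite differences; a minimal sketch follows, with the clipping of near-zero or negative determinants added as our own safeguard.

```python
import numpy as np

def abs_log_jacobian(disp, spacing_mm):
    """|log det J| of the transform x -> x + u(x), from a dense displacement field.

    disp       : (X, Y, Z, 3) displacement field in mm.
    spacing_mm : voxel spacing (sx, sy, sz) in mm.
    """
    X, Y, Z, _ = disp.shape
    J = np.zeros((X, Y, Z, 3, 3))
    for comp in range(3):                              # du_comp / d(axis)
        grads = np.gradient(disp[..., comp], *spacing_mm)
        for axis in range(3):
            J[..., comp, axis] = grads[axis]
    J += np.eye(3)                                     # identity plus displacement gradient
    det = np.linalg.det(J)
    return np.abs(np.log(np.clip(det, 1e-6, None)))

# Mean over a resection-margin ROI (boolean mask), as reported in Fig. 4(b):
# roi_mean = abs_log_jacobian(disp, (1.1, 1.1, 1.1))[roi_mask].mean()
```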

Fig. 4. Registration results for 12 iMRI cases. (a) Target registration error. (b) Mean (in the vicinity of the resection) of the absolute log Jacobian determinant map.

3.4 Segmentation Propagation Experiment

We explore how the adaptive weighting affects registration accuracy for brain structures. We use a database of 35 T1w scans with parcellations of 140 key structures provided by Neuromorphometrics for the MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labeling. We normalize the image intensities and use each image in turn as the reference image and the remaining images as floating images. For each of the 1190 image pairs, we perform an affine registration and non-rigid registrations using LNCC (\(\beta =5\,\mathrm {mm}\)) and LNCC-AW (\(\beta =5\,\mathrm {mm}\), \(\alpha =0.10\) only), with the discrete registration parameters as above. We propagate the floating image segmentations using nearest-neighbor interpolation and calculate the Dice score for each label. The average Dice score over the 1190 image pairs is \(0.422\,\pm \,0.00187\) for affine registration, \(0.517\,\pm \,0.0101\) for LNCC-based non-rigid registration and \(0.526 \pm 0.00947\) for LNCC-AW-based registration. The average Dice score is significantly higher when using LNCC-AW than LNCC (\(p<10^{-6}\)).
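The per-label overlap is the standard Dice coefficient; a minimal sketch for completeness (the function name and variable names are ours).

```python
import numpy as np

def dice_per_label(seg_ref, seg_prop, labels):
    """Dice overlap between reference and propagated parcellations for each label."""
    scores = {}
    for lab in labels:
        a, b = (seg_ref == lab), (seg_prop == lab)
        denom = a.sum() + b.sum()
        scores[lab] = 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan
    return scores

# Average Dice over the non-background labels present in the reference parcellation:
# labels = np.unique(seg_ref); labels = labels[labels > 0]
# mean_dice = np.nanmean(list(dice_per_label(seg_ref, seg_prop, labels).values()))
```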

4 Discussion and Conclusion

We introduced bilateral adaptive weighting into a local similarity measure (LNCC). The modification facilitated more accurate landmark localization in several T1w registration experiments. In a study on clinical iMRI data, we recovered a smoother deformation near the resection margin, which is biomechanically more plausible and potentially enables more accurate guidance near the resection margin. The brain shift we assessed arose from CSF leakage and postural drainage, but in principle our approach can improve accuracy near distinct intensity edges at the margins of tumors, collapsed cysts or haematomas from bleeding into the brain, which should be confirmed in a future study.

Unoptimized bilateral weighting introduces a time bottleneck that precludes intraoperative application; we note that the discrete optimization steps themselves collectively take approximately one minute. However, powerful options are available for accelerating the bilateral weighting, such as guided image filtering [5]. The proposed method could be extended to a multi-channel local similarity measure such as LCCA [7]. A related approach is to constrain the deformation field using bilateral filtering [10], and a unified scheme should be investigated. The analytical gradient could potentially be derived using the approach of [3].

References

  1. Cachier, P., Bardinet, E., Dormont, D., Pennec, X., Ayache, N.: Iconic feature based nonrigid registration: the PASHA algorithm. Comput. Vis. Image Underst. 89(2), 272–298 (2003)

  2. Daga, P., Winston, G., Modat, M., White, M., Mancini, L., Cardoso, M.J., Symms, M., Stretton, J., McEvoy, A.W., Thornton, J., Micallef, C., Yousry, T., Hawkes, D.J., Duncan, J.S., Ourselin, S.: Accurate localization of optic radiation during neurosurgery in an interventional MRI suite. IEEE Trans. Med. Imaging 31(4), 882–891 (2012)

  3. Darkner, S., Sporring, J.: Locally orderless registration. IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1437–1450 (2013)

  4. Glocker, B., Komodakis, N., Tziritas, G., Navab, N., Paragios, N.: Dense image registration through MRFs and efficient linear programming. Med. Image Anal. 12(6), 731–741 (2008)

  5. He, K., Sun, J., Tang, X.: Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013)

  6. Heinrich, M.P., Jenkinson, M., Brady, M., Schnabel, J.A.: MRF-based deformable registration and ventilation estimation of lung CT. IEEE Trans. Med. Imaging 32(7), 1239–1248 (2013)

  7. Heinrich, M.P., Papież, B.W., Schnabel, J.A., Handels, H.: Multispectral image registration based on local canonical correlation analysis. In: Golland, P., Hata, N., Barillot, C., Hornegger, J., Howe, R. (eds.) MICCAI 2014. LNCS, vol. 8673, pp. 202–209. Springer, Heidelberg (2014). doi:10.1007/978-3-319-10404-1_26

  8. Modat, M., Cash, D.M., Daga, P., Winston, G.P., Duncan, J.S., Ourselin, S.: Global image registration using a symmetric block-matching approach. J. Med. Imaging 1(2), 024003 (2014)

  9. Nimsky, C., Ganslandt, O., Cerny, S., Hastreiter, P., Greiner, G., Fahlbusch, R.: Quantification of, visualization of, and compensation for brain shift using intraoperative magnetic resonance imaging. Neurosurgery 47(5), 1070–1080 (2000)

  10. Papież, B.W., Heinrich, M.P., Fehrenbach, J., Risser, L., Schnabel, J.A.: An implicit sliding-motion preserving regularisation via bilateral filtering for deformable image registration. Med. Image Anal. 18(8), 1299–1311 (2014)

  11. Smith, S.M.: Fast robust automated brain extraction. Hum. Brain Mapp. 17(3), 143–155 (2002)

  12. Toews, M., Wells, W.M.: Efficient and robust model-to-image alignment using 3D scale-invariant features. Med. Image Anal. 17(3), 271–282 (2013)

  13. Tomasi, C., Manduchi, R.: Bilateral filtering for gray and color images. In: Sixth International Conference on Computer Vision (ICCV 1998), pp. 839–846. IEEE (1998)

  14. Yoon, K.J., Kweon, I.S.: Adaptive support-weight approach for correspondence search. IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 650–656 (2006)


Acknowledgments

This work was part funded by the Wellcome Trust (WT101957, WT106882, 201080/Z/16/Z), the Engineering and Physical Sciences Research Council (EPSRC grants EP/N013220/1, EP/N022750/1, EP/N027078/1, NS/A000027/1) and the National Institute for Health Research University College London Hospitals Biomedical Research Centre (NIHR BRC UCLH/UCL High Impact Initiative). MK is supported by the UCL Doctoral Training Programme in Medical and Biomedical Imaging studentship funded by the EPSRC (EP/K502959/1). MM is supported by the UCL Leonard Wolfson Experimental Neurology Centre (PR/ylr/18575) and received further funding from Alzheimer’s Society (AS-PG-15-025). GPW is supported by MRC Clinician Scientist Fellowship (MR/M00841X/1). DS receives further funding from the EU-Horizon2020 project EndoVESPA (H2020-ICT-2015-688592).

Author information

Authors and Affiliations

  1. Centre for Medical Image Computing, University College London, London, UK

    Martin Kochan, Marc Modat, Tom Vercauteren, Mark White, Sébastien Ourselin & Danail Stoyanov

  2. National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, London, UK

    Mark White, Laura Mancini, Andrew W. McEvoy, John S. Thornton & Tarek Yousry

  3. Dementia Research Centre, Institute of Neurology, University College London, London, UK

    Marc Modat & Sébastien Ourselin

  4. Department of Clinical and Experimental Epilepsy, Institute of Neurology, University College London, London, UK

    Gavin P. Winston & John S. Duncan

  5. Epilepsy Society MRI Unit, Chesham Lane, Chalfont St Peter, UK

    Gavin P. Winston


Corresponding author

Correspondence to Martin Kochan.

Editor information

Editors and Affiliations

  1. University College London, London, UK

    Sebastien Ourselin

  2. The Hebrew University of Jerusalem, Jerusalem, Israel

    Leo Joskowicz

  3. Harvard Medical School, Boston, MA, USA

    Mert R. Sabuncu

  4. Istanbul Technical University, Istanbul, Türkiye

    Gozde Unal

  5. Harvard Medical School, Boston, MA, USA

    William Wells


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Kochan, M. et al. (2016). Bilateral Weighted Adaptive Local Similarity Measure for Registration in Neurosurgery. In: Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W. (eds) Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016. Lecture Notes in Computer Science, vol 9902. Springer, Cham. https://doi.org/10.1007/978-3-319-46726-9_10
