CN104239877A - Image processing method and image acquisition device - Google Patents

Image processing method and image acquisition device

Info

Publication number
CN104239877A
Authority
CN
China
Prior art keywords
real-time image
image capture device
image
collection object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310244619.2A
Other languages
Chinese (zh)
Other versions
CN104239877B (en)
Inventor
侯欣如
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2013-06-19
Publication date: 2014-12-24
Application filed by Lenovo Beijing Ltd
Priority to CN201310244619.2A
Publication of CN104239877A
Application granted
Publication of CN104239877B: 2019-02-05
Legal status: Active

Abstract

Embodiments of the invention provide an image processing method and an image acquisition device. The image processing method is applied to the image acquisition device and comprises the steps of: capturing a first scene and displaying the captured real-time image; determining whether an operating body appears in the real-time image; when it is determined that an operating body appears in the real-time image, taking the acquired object in the real-time image whose position corresponds to the operating body as a target object; determining the target area of the target object in the real-time image; and performing image processing on the target area.

Description

Image processing method and image capture device
Technical field
The present invention relates to an image processing method applied to an image capture device, and to a corresponding image capture device.
Background art
In recent years, electronic devices such as notebook computers, tablet computers, smart phones, cameras and portable media players have come into wide use. These devices generally include an image capture component such as a camera, so that a user can conveniently perform image capture operations such as taking photographs or videos. When capturing images, however, the user often encounters situations in which unwanted subjects appear in the captured picture. For example, at a tourist attraction a user who wishes to take a souvenir photograph of family or friends can rarely keep unknown strangers out of the frame, and an angle chosen to avoid the strangers may no longer capture the whole of the scenic spot the user wants to photograph.
Summary of the invention
An object of the embodiments of the present invention is to provide an image processing method and an image capture device that solve the above problem.
An embodiment of the present invention provides an image processing method applied to an image capture device. The method comprises: capturing a first scene and displaying the captured real-time image; determining whether an operating body appears in the real-time image; when it is determined that an operating body appears in the real-time image, taking the collection object in the real-time image whose position corresponds to the operating body as a target object; determining the target area of the target object in the real-time image; and performing image processing on the target area.
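For concreteness, this flow can be sketched in Python as below. Every function and parameter name is hypothetical; the patent does not prescribe any particular implementation of the individual steps, so this is only an illustrative skeleton.

import numpy as np

def process_frame(frame: np.ndarray,
                  detect_operating_body,   # e.g. a fingertip or stylus detector
                  find_target_object,      # collection object at that position
                  locate_target_area,      # region mask of the target object
                  process_area):           # blur or cover the region
    """Hypothetical per-frame pipeline mirroring the claimed steps."""
    display = frame.copy()                        # the displayed real-time image
    position = detect_operating_body(frame)       # (x, y) or None
    if position is None:
        return display                            # no operating body: show as captured
    target = find_target_object(frame, position)  # object whose position corresponds
    if target is None:
        return display
    mask = locate_target_area(frame, target)      # target area as a binary mask
    return process_area(display, mask)            # image processing on the target area

Possible realizations of the individual callables are sketched, under stated assumptions, at the corresponding steps of the detailed description below.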
Another embodiment of the present invention provides an image capture device comprising: a collecting unit configured to capture a first scene; a display unit configured to display the captured real-time image; an operating body recognition unit configured to determine whether an operating body appears in the real-time image; an object determining unit configured to take, when it is determined that an operating body appears in the real-time image, the collection object in the real-time image whose position corresponds to the operating body as a target object; an area determination unit configured to determine the target area of the target object in the real-time image; and an image processing unit configured to perform image processing on the target area.
In the solutions provided by the above embodiments, image processing is performed, within the real-time image captured by the image capture device, on a target area determined from the position of the operating body. Objects that the user does not wish to attend to can thus be effectively screened out, reducing their interference with the displayed content, and the user does not need to change the shooting angle to avoid them.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below are merely exemplary embodiments of the present invention.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention.
Fig. 2a to Fig. 2c are explanatory diagrams of an illustrative case, according to an example of the present invention, of determining a target object in the real-time image.
Fig. 3a is an explanatory diagram of an illustrative case, according to an example of the present invention, of performing image processing on the target area after the target object has been determined as shown in Fig. 2 and its target area in the real-time image has been determined.
Fig. 3b is an explanatory diagram of another illustrative case, according to another example of the present invention, of performing image processing on the target area after the target object has been determined as shown in Fig. 2 and its target area in the real-time image has been determined.
Fig. 4 is an exemplary structural block diagram of an image capture device according to an embodiment of the present invention.
Fig. 5 is an explanatory diagram of an illustrative case in which the image capture device shown in Fig. 4 is a glasses-type image capture device.
Fig. 6 is a block diagram of the display unit in the image capture device.
Fig. 7 is an explanatory diagram of an illustrative configuration of the display unit shown in Fig. 6.
Detailed description of the embodiments
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. Note that, in the specification and the drawings, substantially identical steps and elements are denoted by the same reference numerals, and repeated explanation of them is omitted.
In the following embodiments of the present invention, the concrete form of the image capture device includes, but is not limited to, a smart mobile phone, a personal computer, a personal digital assistant, a portable computer, a tablet computer, a multimedia player and the like. According to an example of the present invention, the image capture device may be a hand-held electronic device. Preferably, according to another example of the present invention, the image capture device may be a wearable electronic device. In addition, according to another example of the present invention, the image capture device may comprise a lens component together with a collecting unit and a display unit arranged to correspond to the lens component.
Fig. 1 is a flowchart of an image processing method 100 according to an embodiment of the present invention. The image processing method according to the embodiment of the present invention is described below with reference to Fig. 1. The image processing method 100 may be used in the image capture device described above.
As shown in Fig. 1, in step S101 a first scene is captured and the captured real-time image is displayed. As mentioned above, according to an example of the present invention, the image capture device may comprise a lens component together with a collecting unit and a display unit arranged to correspond to the lens component. In this case, in step S101, when the lens component is positioned in the user's viewing area, the collecting unit captures the first scene that the user views through the lens component, obtaining the real-time image. For example, the lens component may be arranged above or below the collecting unit. Thus, when the lens component is positioned in the user's viewing area, the collecting unit can capture images along a direction similar to the user's viewing direction, so that the captured real-time image corresponds to what the user sees through the lens component.
In step S102, it is determined whether an operating body appears in the real-time image. According to an example of the present invention, the operating body may be the user's finger. Alternatively, the operating body may be a preset operating pen or the like.
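The patent does not specify how the operating body is recognized. Purely as an illustration, a fingertip can be located with a simple skin-colour segmentation; the HSV thresholds and area limit below are assumptions, and a practical system might instead use a trained hand or stylus detector.

import cv2
import numpy as np

def detect_operating_body(frame_bgr, min_area=1500):
    """Return an (x, y) fingertip estimate, or None if no hand-like region is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-colour range in HSV; illustrative values, not taken from the patent.
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < min_area:
        return None
    # Take the topmost point of the largest skin region as a crude fingertip estimate.
    x, y = hand[hand[:, :, 1].argmin()][0]
    return int(x), int(y)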
When it is determined that an operating body appears in the real-time image, in step S103 the collection object in the real-time image whose position corresponds to the operating body is taken as the target object. According to an example of the present invention, in step S103 the collection objects in the real-time image may first be identified and the first position of each collection object in the real-time image obtained. Then, among the identified collection objects, the first collection object whose first position corresponds to the position of the operating body is determined as the target object.
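Contour extraction, mentioned in the example of Fig. 2a below, is one way to identify the collection objects and pick out the first collection object under the operating body. A minimal sketch follows, with assumed edge-detection thresholds and minimum object size.

import cv2

def find_target_object(frame_bgr, operating_body_pos, min_area=2000):
    """Return the contour of the collection object whose outline contains the
    operating body's position (an (x, y) tuple), or None if no such object is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, None)          # close small gaps in the outlines
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue                         # ignore tiny fragments
        # The object's first position "corresponds to" the operating body when
        # the operating body lies inside or on its outline.
        if cv2.pointPolygonTest(contour, operating_body_pos, False) >= 0:
            return contour
    return None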
Optionally, after the collection object whose first position corresponds to the position of the operating body has been determined, step S103 may further determine a first distance between the image capture device and the object in the first scene corresponding to the first collection object, and a second distance between the image capture device and each object in the first scene corresponding to a collection object other than the first collection object. Among the objects of the first scene, those whose second distance differs from the first distance by no more than a preset distance difference are then determined, and finally the collection objects corresponding to those objects are also taken as target objects. In examples of the present invention, the objects in the first scene may be inanimate objects such as buildings or trees, or living objects such as people or animals.
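The distance-based grouping itself is straightforward; how the distances are measured (a depth sensor, stereo disparity, and so on) is not fixed by the patent, so the sketch below simply assumes a per-object distance is available, and the one-metre threshold is an arbitrary illustrative value.

def group_targets_by_distance(first_object, other_objects, distances,
                              max_distance_difference=1.0):
    """Return the first collection object plus every other object whose distance to
    the image capture device differs from the first object's distance by at most
    the preset distance difference. `first_object` and `other_objects` are object
    identifiers (e.g. indices), and `distances` maps each identifier to its
    measured distance in metres (assumed to come from a depth sensor or similar)."""
    first_distance = distances[first_object]
    targets = [first_object]
    for obj in other_objects:
        if abs(distances[obj] - first_distance) <= max_distance_difference:
            targets.append(obj)
    return targets

Called with the collection object under the operating body as first_object, this reproduces the behaviour illustrated in Fig. 2c below, where objects at a similar distance are added as further target objects in one step.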
Fig. 2a to Fig. 2c are explanatory diagrams of an illustrative case, according to an example of the present invention, of determining a target object in the real-time image. In the example shown in Fig. 2a to Fig. 2c, a real-time image 200 of the captured first scene is displayed. As shown in Fig. 2a, when it is determined that an operating body 210 appears in the real-time image 200, the collection objects 221, 222, 223 and 224 in the real-time image are first identified and their first positions in the real-time image obtained. For example, contour extraction may be performed on the real-time image 200 to obtain the outline of each collection object and thereby determine each collection object and its position.
Then, as indicated by the dashed box in Fig. 2b, the collection object 222 whose first position corresponds to the position of the operating body 210 is determined as the target object in the real-time image 200. In addition, as indicated by the dashed boxes in Fig. 2c, a first distance is determined between the image capture device and the object in the first scene corresponding to the first collection object 222, and second distances are determined between the image capture device and the objects in the first scene corresponding to the collection objects 221, 223 and 224 other than the first collection object 222. Among the objects of the first scene (that is, the objects corresponding to collection objects 221, 222, 223 and 224), those whose second distance differs from the first distance by no more than the preset distance difference are determined, and the collection objects 223 and 224 corresponding to them are also taken as target objects. The user can thus designate multiple target objects in the real-time image at once instead of specifying them one by one, which makes operation more convenient.
Returning to Fig. 1, in step S104 the target area of the target object in the real-time image is determined. Then, in step S105, image processing is performed on the target area. Fig. 3a is an explanatory diagram of an illustrative case, according to an example of the present invention, of performing image processing on the target area after the target object has been determined as shown in Fig. 2 and its target area in the real-time image 200 has been determined. As shown in Fig. 3a, according to an example of the present invention, step S105 adjusts the transparency of the determined target areas 311, 312 and 313, so that blurring can be applied to the target areas 311, 312 and 313 and the target objects 222, 223 and 224 are thereby blurred.
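The transparency adjustment of Fig. 3a can be realized by blending each target area with a blurred copy of the frame; the kernel size and blend weight below are assumed values chosen only for illustration.

import cv2
import numpy as np

def blur_target_areas(frame_bgr, masks, kernel_size=31, opacity=0.85):
    """Blur and partially fade the pixels inside each target-area mask.
    `masks` is a list of uint8 masks that are 255 inside a target area and 0 elsewhere."""
    blurred = cv2.GaussianBlur(frame_bgr, (kernel_size, kernel_size), 0)
    out = frame_bgr.astype(np.float32)
    for mask in masks:
        region = mask.astype(bool)
        # Blending towards the blurred copy both softens the target object and
        # reduces its visual weight, i.e. an adjustable "transparency".
        out[region] = (1 - opacity) * out[region] + opacity * blurred[region].astype(np.float32)
    return out.astype(np.uint8)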
Fig. 3b is an explanatory diagram of another illustrative case, according to another example of the present invention, of performing image processing on the target area after the target object has been determined as shown in Fig. 2 and its target area in the real-time image 200 has been determined. As shown in Fig. 3b, according to an example of the present invention, step S105 may obtain second images 321, 322 and 323 related to the real-time image 200 and fill the second images 321, 322 and 323 into the target areas 311, 312 and 313 to cover the target objects.
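One plausible way to obtain a "second image related to the real-time image" and fill it into the target areas, as in Fig. 3b, is to synthesize the covered pixels from the surrounding scene, for example with OpenCV's inpainting. This is an assumed realization for illustration, not the method prescribed by the patent.

import cv2
import numpy as np

def cover_target_areas(frame_bgr, masks, inpaint_radius=5):
    """Cover every target area with content synthesized from its surroundings.
    `masks` is a list of uint8 masks that are 255 inside a target area and 0 elsewhere."""
    combined = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    for mask in masks:
        combined = cv2.bitwise_or(combined, mask)
    # Telea inpainting fills the masked region from its neighbourhood, which yields
    # a second image consistent with the rest of the real-time image.
    return cv2.inpaint(frame_bgr, combined, inpaint_radius, cv2.INPAINT_TELEA)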
In the image processing method provided by the present embodiment, image processing is performed, within the real-time image captured by the image capture device, on a target area determined from the position of the operating body. Objects that the user does not wish to attend to can thus be effectively screened out, reducing their interference with the displayed content, and the user does not need to change the shooting angle to avoid them.
An image capture device according to an embodiment of the present invention is described below with reference to Fig. 4. Fig. 4 is an exemplary structural block diagram of the image capture device 400 according to the embodiment of the present invention. As shown in Fig. 4, the image capture device 400 of the present embodiment comprises a collecting unit 410, a display unit 420, an operating body recognition unit 430, an object determining unit 440, an area determination unit 450 and an image processing unit 460. The modules of the image capture device 400 perform the corresponding steps/functions of the method 100 of Fig. 1 described above, and for brevity of description they are not described again in detail.
For example, the collecting unit 410 may capture the first scene, and the display unit 420 may display the captured real-time image. According to an example of the present invention, the image capture device 400 may further comprise a lens component, with the collecting unit 410 and the display unit 420 arranged to correspond to the lens component. In this case, when the lens component is positioned in the user's viewing area, the collecting unit 410 captures the first scene that the user views through the lens component, obtaining the real-time image. For example, the lens component may be arranged above or below the collecting unit. Thus, when the lens component is positioned in the user's viewing area, the collecting unit 410 can capture images along a direction similar to the user's viewing direction, so that the captured real-time image corresponds to what the user sees through the lens component.
The operating body recognition unit 430 may determine whether an operating body appears in the real-time image. According to an example of the present invention, the operating body may be the user's finger. Alternatively, the operating body may be a preset operating pen or the like.
When it is determined that an operating body appears in the real-time image, the object determining unit 440 may take the collection object in the real-time image whose position corresponds to the operating body as the target object. According to an example of the present invention, the object determining unit 440 may comprise an object identification module and an object determination module. The object identification module can identify the collection objects in the real-time image and obtain the first position of each collection object in the real-time image. The object determination module then determines, among the identified collection objects, the first collection object whose first position corresponds to the position of the operating body as the target object.
Optionally, the object determining unit 440 may further comprise a distance determination module and an object determination module. After the collection object whose first position corresponds to the position of the operating body has been determined, the distance determination module can determine a first distance between the image capture device and the object in the first scene corresponding to the first collection object, and a second distance between the image capture device and each object in the first scene corresponding to a collection object other than the first collection object. The object determination module then determines, among the objects of the first scene, those whose second distance differs from the first distance by no more than a preset distance difference. Finally, the object determination module may also take the collection objects corresponding to those objects as target objects. In examples of the present invention, the objects in the first scene may be inanimate objects such as buildings or trees, or living objects such as people or animals.
The area determination unit 450 can determine the target area of the target object in the real-time image. The image processing unit 460 can then perform image processing on the target area. According to an example of the present invention, the image processing unit 460 may blur the target area, thereby blurring the target object. Understandably, according to another example of the present invention, the image processing unit may obtain a second image related to the real-time image and fill the second image into the target area to cover the target object. The display unit can then display the image processed by the image processing unit 460.
In the image capture device provided by the above embodiment, image processing is performed, within the real-time image captured by the image capture device, on a target area determined from the position of the operating body. Objects that the user does not wish to attend to can thus be effectively screened out, reducing their interference with the displayed content, and the user does not need to change the shooting angle to avoid them.
In addition, as mentioned above, according to a preferred example of the present invention, the image capture device may be a wearable electronic device, for example a glasses-type image capture device. Fig. 5 is an explanatory diagram of an illustrative case in which the image capture device 400 shown in Fig. 4 is a glasses-type image capture device. For simplicity, the parts of the glasses-type image capture device 500 that are similar to the image capture device 400 are not described again in connection with Fig. 5.
As shown in Fig. 5, the image capture device 500 may further comprise a frame module 510, a lens component 520 and a fixing unit. The lens component 520 is arranged in the frame module 510. The fixing unit of the image capture device 500 comprises a first support arm and a second support arm. As shown in Fig. 5, the first support arm comprises a first connecting portion 531 (shown shaded in Fig. 5) and a first holding portion 532, and the first connecting portion 531 connects the frame module 510 and the first holding portion 532. The second support arm comprises a second connecting portion 541 (shown shaded in Fig. 5) and a second holding portion 542, and the second connecting portion 541 connects the frame module 510 and the second holding portion 542. In addition, a third holding portion (not shown) may be provided on the frame module 510; in particular, the third holding portion may be arranged at the position of the frame module 510 between the two lenses. The head-mounted electronic device is held on the user's head by the first, second and third holding portions. Specifically, the first and second holding portions can support the first and second support arms on the user's ears, and the third holding portion can support the frame module 510 on the bridge of the user's nose.
In the present embodiment, the collecting unit (not shown) of the image capture device 500 may be arranged to correspond to the lens component 520, so that the image captured by the collecting unit is substantially consistent with the scene the user sees. For example, the collecting unit may be arranged in the part of the frame module 510 between the two lens components. Alternatively, the collecting unit of the image capture device 500 may be arranged in the frame module 510 to correspond to one of the lens components. In addition, the collecting unit of the image capture device 500 may also comprise two acquisition modules arranged in the frame module 510 to correspond to the two lenses respectively; the collecting unit can process and combine the images captured by the two acquisition modules so that the processed image is closer to the scene the user actually sees.
Fig. 6 is a block diagram of the display unit 600 in the image capture device 500. As shown in Fig. 6, the display unit 600 may comprise a first display module 610, a first optical system 620, a first light guide member 630 and a second light guide member 640. Fig. 7 is an explanatory diagram of an illustrative configuration of the display unit 600 shown in Fig. 6.
The first display module 610 may be arranged in the frame module 510 and connected to a first data transmission line. The first display module 610 can display a first image according to a first video signal transmitted over the first data transmission line of the image capture device 500. The first data transmission line may be arranged in the fixing unit and the frame module. Display content can be sent to the display unit through the first data transmission line, and the display unit displays that content to the user. Although a data line is taken as an example in the present embodiment, the present invention is not limited thereto; for example, according to another example of the present invention, the display content may also be sent to the display unit wirelessly. In addition, according to an example of the present invention, the first display module 610 may be a display module with a small micro display screen.
The first optical system 620 may also be arranged in the frame module 510. The first optical system 620 can receive the light emitted from the first display module and perform optical path conversion on that light to form a first magnified virtual image. In other words, the first optical system 620 has positive refractive power. The user can thus view the first image clearly, and the size of the image viewed by the user is not limited by the size of the display unit.
For example, the optical system may comprise a convex lens. Alternatively, to reduce aberrations, avoid imaging interference caused by dispersion and the like, and give the user a better visual experience, the optical system may also be a lens assembly formed of multiple lenses including convex and concave lenses. In addition, according to an example of the present invention, the first display module 610 and the first optical system 620 may be arranged correspondingly along the optical axis of the first optical system. Alternatively, according to another example of the present invention, the display unit may further comprise a fifth light guide member to transmit the light emitted from the first display module 610 to the first optical system 620.
As shown in Fig. 7, after the first optical system 620 receives the light emitted from the first display module 610 and performs optical path conversion on it, the first light guide member 630 can transmit the light passing through the first optical system to the second light guide member 640. The second light guide member 640 may be arranged in the lens component 520; it can receive the light transmitted by the first light guide member 630 and reflect it to the eyes of the user wearing the head-mounted electronic device.
Returning to Fig. 5, the lens component 520 satisfies a first predetermined transmittance in the direction from the inside to the outside, so that the user can watch the surrounding environment while viewing the first magnified virtual image. For example, when an image generation unit generates a first image about the target object according to the captured image, the display unit displays the generated first image, so that while seeing the target object of the first scene through the lens, the user also sees the first image displayed by the display unit superimposed on the target object.
Those of ordinary skill in the art will recognize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two, and that a software module may be placed in any form of computer storage medium. To illustrate the interchangeability of hardware and software clearly, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
It should be understood by those skilled in the art that various modifications, combinations, partial combinations and replacements may be made to the present invention depending on design requirements and other factors, as long as they fall within the scope of the appended claims and their equivalents.

Claims (12)




Legal Events

Publication (C06, PB01)
Entry into force of request for substantive examination (C10, SE01)
Patent grant (GR01)
