CN107491459A - The search method and device of three-dimensional image - Google Patents

The search method and device of three-dimensional image

Info

Publication number
CN107491459A
Authority
CN
China
Prior art keywords
dimensional image
image
information
convolutional neural
colouring information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610414781.8A
Other languages
Chinese (zh)
Inventor
孙修宇
李昊
华先胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201610414781.8A
Publication of CN107491459A
Legal status: Pending (current)

Abstract

This application discloses a retrieval method and device for three-dimensional images. The retrieval method includes: determining the color information and depth information of a three-dimensional image to be retrieved; inputting the color information and depth information of the three-dimensional image into a pre-trained convolutional neural network model, where the convolutional neural network model is built from the color information and depth information of three-dimensional image samples; outputting the image features of the three-dimensional image through the convolutional neural network model; and obtaining a retrieval result based on the image features. The retrieval method and device of the embodiments of this application can effectively improve the accuracy of the retrieval results obtained for three-dimensional images.

Description

The search method and device of three-dimensional image
Technical field
This application relates to the field of computer technology, and in particular to a retrieval method and device for three-dimensional images.
Background technology
With the rapid development of the Internet, more and more users search by image to obtain the information they need. At present, image retrieval systems mainly extract descriptive features from two-dimensional images (such as CNN features, SIFT features, color histogram features, and other two-dimensional image descriptors) and match those features to find images with high similarity. For the retrieval of three-dimensional images, however, the descriptive features of the same object differ greatly between viewing angles, so continuing to use the traditional method may lead to retrieval results that are not accurate enough.
Content of the application
This application aims to solve, at least to some extent, one of the technical problems in the related art. Therefore, one purpose of this application is to propose a retrieval method for three-dimensional images that can effectively improve the accuracy of the retrieval results obtained for a three-dimensional image, thereby improving the user experience.
A second purpose of this application is to propose a retrieval device for three-dimensional images.
To achieve these goals, an embodiment of the first aspect of this application proposes a retrieval method for three-dimensional images, including: determining the color information and depth information of a three-dimensional image to be retrieved; inputting the color information and depth information of the three-dimensional image into a pre-trained convolutional neural network model, where the convolutional neural network model is built from the color information and depth information of three-dimensional image samples; outputting the image features of the three-dimensional image through the convolutional neural network model; and obtaining a retrieval result based on the image features.
In the retrieval method of the embodiment of this application, the color information and depth information of the three-dimensional image to be retrieved are determined, the color information and depth information of the three-dimensional image are input into a pre-trained convolutional neural network model, the image features of the three-dimensional image are output by the convolutional neural network model, and the retrieval result is finally obtained based on the image features. This can effectively improve the accuracy of the retrieval results obtained for a three-dimensional image, thereby improving the user experience.
An embodiment of the second aspect of this application proposes a retrieval device for three-dimensional images, including: a determining module, configured to determine the color information and depth information of a three-dimensional image to be retrieved; an input module, configured to input the color information and depth information of the three-dimensional image into a pre-trained convolutional neural network model, where the convolutional neural network model is built from the color information and depth information of three-dimensional image samples; an output module, configured to output the image features of the three-dimensional image through the convolutional neural network model; and an acquisition module, configured to obtain a retrieval result based on the image features.
In the retrieval device of the embodiment of this application, the color information and depth information of the three-dimensional image to be retrieved are determined, the color information and depth information of the three-dimensional image are input into a pre-trained convolutional neural network model, the image features of the three-dimensional image are output by the convolutional neural network model, and the retrieval result is finally obtained based on the image features. This can effectively improve the accuracy of the retrieval results obtained for a three-dimensional image, thereby improving the user experience.
Brief description of the drawings
Fig. 1 is a first flowchart of a retrieval method for three-dimensional images according to an embodiment of this application;
Fig. 2 is a second flowchart of a retrieval method for three-dimensional images according to an embodiment of this application;
Fig. 3 is a flowchart of establishing a convolutional neural network model according to an embodiment of this application;
Fig. 4 is a first schematic structural diagram of a retrieval device for three-dimensional images according to an embodiment of this application;
Fig. 5 is a second schematic structural diagram of a retrieval device for three-dimensional images according to an embodiment of this application;
Fig. 6 is a third schematic structural diagram of a retrieval device for three-dimensional images according to an embodiment of this application.
Detailed description of the embodiments
The embodiments of this application are described in detail below, and examples of these embodiments are shown in the drawings, where the same or similar reference numbers throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain this application and are not to be construed as limiting it.
The retrieval method and device for three-dimensional images of the embodiments of this application are described below with reference to the drawings.
Fig. 1 is a first flowchart of a retrieval method for three-dimensional images according to an embodiment of this application.
As shown in Fig. 1, the retrieval method for three-dimensional images may include the following steps.
S1: determining the color information and depth information of the three-dimensional image to be retrieved.
Specifically, the three-dimensional image input by the user can first be received. The three-dimensional image can be captured by a 3D camera such as a Kinect. The color information and depth information of the three-dimensional image can then be obtained.
A three-dimensional image is described by its color information and depth information. The color information may use the RGB color mode or the YUV color mode; in this embodiment, the RGB color mode is used for illustration. The RGB color mode includes an R channel describing red, a G channel describing green, and a B channel describing blue. The value range of each channel is 0 to 255, so the 256 levels of each RGB channel can be combined into about 16.78 million colors in total, i.e., 256 × 256 × 256 = 16,777,216. The color of any point in the image can therefore be described by the values of these three channels.
The depth information describes the distance between each point in the three-dimensional image and the lens plane.
S2: inputting the color information and depth information of the three-dimensional image into a pre-trained convolutional neural network model.
The convolutional neural network model is built from the color information and depth information of three-dimensional image samples.
S3: outputting the image features of the three-dimensional image through the convolutional neural network model.
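As an illustration of steps S2 and S3 (not taken from the patent, which does not specify a framework or network architecture), the following minimal sketch stacks the three color channels and the depth map into a four-channel input and passes it through a small convolutional network to obtain a feature vector; PyTorch, the layer sizes, and the 128-dimensional feature are all assumptions.

```python
# Minimal sketch; PyTorch and the network layout are assumptions, not the patent's specification.
import torch
import torch.nn as nn

class RGBDFeatureNet(nn.Module):
    """Toy convolutional network mapping a 4-channel RGB-D image to a retrieval feature vector."""
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1),   # R, G, B and depth channels
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                                # global average pooling
        )
        self.head = nn.Linear(64, feature_dim)                      # image feature used for retrieval

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x).flatten(1))

# Usage: color and depth are assumed to be normalized to [0, 1] (see step S5 below).
rgb = torch.rand(3, 256, 256)
depth = torch.rand(1, 256, 256)
rgbd = torch.cat([rgb, depth], dim=0).unsqueeze(0)   # shape (1, 4, 256, 256), step S2 input
features = RGBDFeatureNet()(rgbd)                    # shape (1, 128), step S3 output
```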
S4: obtaining a retrieval result based on the image features.
Specifically, the distances between the image features and the data features of the candidate images in a database can be calculated. The candidate images can then be sorted by distance in ascending order, and the top N candidate images in the sorted list can be taken as the retrieval result. The database is a pre-established database for storing three-dimensional images. The distance can be the Euclidean distance or the cosine distance.
It should be understood that a smaller distance indicates a higher similarity between images; sorting the candidate images therefore yields a more accurate retrieval result.
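A minimal NumPy sketch of the matching in step S4, for illustration only (the function name and the choice of NumPy are assumptions): distances between the query feature and the stored candidate features are computed, the candidates are sorted in ascending order of distance, and the indices of the top N are returned as the retrieval result.

```python
import numpy as np

def retrieve_top_n(query_feat: np.ndarray, db_feats: np.ndarray, n: int = 10,
                   metric: str = "euclidean") -> np.ndarray:
    """Return the indices of the N database images closest to the query feature."""
    if metric == "euclidean":
        dists = np.linalg.norm(db_feats - query_feat, axis=1)
    elif metric == "cosine":
        # Cosine distance = 1 - cosine similarity.
        q = query_feat / np.linalg.norm(query_feat)
        d = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
        dists = 1.0 - d @ q
    else:
        raise ValueError("unknown metric")
    order = np.argsort(dists)          # smaller distance = higher similarity
    return order[:n]                   # indices of the top-N candidates (the retrieval result)

# Usage with random placeholder features:
db = np.random.rand(1000, 128)         # data features of 1000 candidate images in the database
query = np.random.rand(128)            # image feature of the three-dimensional image to be retrieved
top10 = retrieve_top_n(query, db, n=10)
```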
In addition, as shown in Fig. 2, the embodiment of this application may also include step S5.
S5: before the color information and depth information are input into the pre-trained convolutional neural network model, normalizing the color information and depth information of the three-dimensional image.
First, the color information of the three-dimensional image can be normalized.
Specifically, the R channel value j, G channel value k, and B channel value l of each point in the three-dimensional image can be obtained, and each of them is then divided by 255 to obtain the normalized R channel value j', G channel value k', and B channel value l'. Since j, k, and l range from 0 to 255, the corresponding j', k', and l' range from 0 to 1.
Then, the depth information of the three-dimensional image is normalized.
Specifically, the depth value h of each point in the three-dimensional image, the minimum depth value a (the value closest to the lens plane), and the maximum depth value b (the value farthest from the lens plane) can be obtained, where a ≤ h ≤ b. The first difference, between the depth value h and the minimum depth value a, is then obtained, followed by the second difference, between the maximum depth value b and the depth value h; the first difference is finally divided by the second difference to obtain the normalized depth value. The normalized depth value ranges from 0 to 1.
After normalization, the color information and depth information are typically represented as a two-dimensional image of a fixed size, for example 256 × 256 pixels.
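The sketch below illustrates this normalization step with NumPy (an assumption; the patent does not name a library). The color channels are divided by 255; for the depth map, min-max scaling (h - a) / (b - a) is assumed here because it produces the stated 0-to-1 range, whereas a literal reading of the translated wording would divide by (b - h) instead.

```python
import numpy as np

def normalize_rgbd(rgb: np.ndarray, depth: np.ndarray):
    """Normalize an RGB image (uint8, H x W x 3) and a depth map (H x W) to the range [0, 1]."""
    rgb_norm = rgb.astype(np.float32) / 255.0                # j' = j / 255, k' = k / 255, l' = l / 255
    a, b = float(depth.min()), float(depth.max())            # minimum depth a and maximum depth b, a <= h <= b
    depth_norm = (depth.astype(np.float32) - a) / max(b - a, 1e-8)   # assumed min-max scaling onto [0, 1]
    return rgb_norm, depth_norm

# Usage with a synthetic 256 x 256 RGB-D image:
rgb = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
depth = np.random.uniform(0.5, 4.0, size=(256, 256)).astype(np.float32)   # distances from the lens plane
rgb_norm, depth_norm = normalize_rgbd(rgb, depth)
```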
The process of establishing the convolutional neural network model is described in detail below.
Specifically, as shown in Fig. 3, it may include the following steps.
S31: extracting the color information and depth information of the three-dimensional image samples.
S32: normalizing the color information and depth information of the three-dimensional image samples to generate the corresponding normalized image samples.
First, the color information of the three-dimensional image samples can be normalized.
Specifically, the R channel value, G channel value, and B channel value of each point in a three-dimensional image sample can be obtained, and each of them is then divided by 255 to obtain the normalized R channel value, G channel value, and B channel value. Each normalized channel value ranges from 0 to 1.
Then, the depth information of the three-dimensional image samples can be normalized.
Specifically, the depth value of each point in a three-dimensional image sample, the minimum depth value (the value closest to the lens plane), and the maximum depth value (the value farthest from the lens plane) can be obtained. The first difference, between the depth value and the minimum depth value, is then obtained, followed by the second difference, between the maximum depth value and the depth value; the first difference is finally divided by the second difference to obtain the normalized depth value. The normalized depth value ranges from 0 to 1.
After this, the normalized image samples can be generated from the normalized color information and depth information. For convenience of calculation, the normalized image samples are typically scaled to a fixed size, for example 256 × 256 pixels.
S33: training on the normalized image samples to establish the convolutional neural network model.
Specifically, the parameters of the convolutional neural network model can be trained with a multi-task learning method to improve the recognition accuracy of the model. A task can be a classification task on the image samples, a ranking task on the image samples, and so on.
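As a hedged sketch of step S33 (the patent names multi-task learning but not specific losses or an architecture), the example below trains the RGBDFeatureNet sketch shown earlier with two tasks at once: a classification head optimized with cross-entropy loss, and a ranking objective implemented as a triplet margin loss on the retrieval features. The number of classes, the losses, and the optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Reuses the RGBDFeatureNet sketch defined above; both task heads share its features.
model = RGBDFeatureNet(feature_dim=128)
classifier = nn.Linear(128, 40)                    # assumed number of sample classes
ce_loss = nn.CrossEntropyLoss()                    # classification task on the image samples
rank_loss = nn.TripletMarginLoss(margin=0.2)       # ranking task: anchor closer to positive than to negative
optimizer = torch.optim.Adam(list(model.parameters()) + list(classifier.parameters()), lr=1e-3)

def training_step(anchor, positive, negative, labels):
    """One multi-task update on batches of normalized RGB-D samples, each of shape (B, 4, 256, 256)."""
    f_a, f_p, f_n = model(anchor), model(positive), model(negative)
    loss = ce_loss(classifier(f_a), labels) + rank_loss(f_a, f_p, f_n)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random placeholder batches:
B = 8
make_batch = lambda: torch.rand(B, 4, 256, 256)
loss_value = training_step(make_batch(), make_batch(), make_batch(), torch.randint(0, 40, (B,)))
```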
In the retrieval method of the embodiment of this application, the color information and depth information of the three-dimensional image to be retrieved are determined, the color information and depth information of the three-dimensional image are input into a pre-trained convolutional neural network model, the image features of the three-dimensional image are output by the convolutional neural network model, and the retrieval result is finally obtained based on the image features. This can effectively improve the accuracy of the retrieval results obtained for a three-dimensional image, thereby improving the user experience.
To achieve the above purpose, this application also proposes a retrieval device for three-dimensional images.
Fig. 4 is a first schematic structural diagram of a retrieval device for three-dimensional images according to an embodiment of this application.
As shown in Fig. 4, the retrieval device for three-dimensional images may include a determining module 110, an input module 120, an output module 130, and an acquisition module 140.
The determining module 110 is configured to determine the color information and depth information of the three-dimensional image to be retrieved. Specifically, the three-dimensional image input by the user can first be received. The three-dimensional image can be captured by a 3D camera such as a Kinect. The color information and depth information of the three-dimensional image can then be obtained.
A three-dimensional image is described by its color information and depth information. The color information may use the RGB color mode or the YUV color mode; in this embodiment, the RGB color mode is used for illustration. The RGB color mode includes an R channel describing red, a G channel describing green, and a B channel describing blue. The value range of each channel is 0 to 255, so the 256 levels of each RGB channel can be combined into about 16.78 million colors in total, i.e., 256 × 256 × 256 = 16,777,216. The color of any point in the image can therefore be described by the values of these three channels.
The depth information describes the distance between each point in the three-dimensional image and the lens plane.
The input module 120 is configured to input the color information and depth information of the three-dimensional image into the pre-trained convolutional neural network model. The convolutional neural network model is built from the color information and depth information of three-dimensional image samples.
The output module 130 is configured to output the image features of the three-dimensional image through the convolutional neural network model.
The acquisition module 140 is configured to obtain the retrieval result based on the image features. Specifically, the distances between the image features and the data features of the candidate images in the database can be calculated. The candidate images can then be sorted by distance in ascending order, and the top N candidate images in the sorted list can be taken as the retrieval result. The database is a pre-established database for storing three-dimensional images. The distance can be the Euclidean distance or the cosine distance.
It should be understood that a smaller distance indicates a higher similarity between images; sorting the candidate images therefore yields a more accurate retrieval result.
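Purely as an illustration of how the modules 110 to 140 could be composed (the patent describes them functionally and prescribes no implementation), the hypothetical sketch below wires the earlier normalize_rgbd, RGBDFeatureNet, and retrieve_top_n sketches into a single device class; all names and signatures are assumptions.

```python
import numpy as np
import torch

class StereoImageRetrievalDevice:
    """Hypothetical composition of the determining, input, output, and acquisition modules (110-140)."""
    def __init__(self, model, db_features: np.ndarray, top_n: int = 10):
        self.model = model                 # pre-trained convolutional neural network model
        self.db_features = db_features     # data features of the candidate images in the database
        self.top_n = top_n

    def determine(self, rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
        # Determining module 110: obtain and normalize the color and depth information.
        rgb_n, depth_n = normalize_rgbd(rgb, depth)
        return np.concatenate([rgb_n.transpose(2, 0, 1), depth_n[None]], axis=0)  # (4, H, W)

    def retrieve(self, rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
        rgbd = torch.from_numpy(self.determine(rgb, depth)).unsqueeze(0)    # input module 120
        feature = self.model(rgbd).detach().numpy()[0]                      # output module 130
        return retrieve_top_n(feature, self.db_features, n=self.top_n)      # acquisition module 140
```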
In addition, as shown in Fig. 5, the retrieval device for three-dimensional images may also include a normalization module 150.
The normalization module 150 is configured to normalize the color information and depth information of the three-dimensional image before the color information and depth information are input into the pre-trained convolutional neural network model.
First, the color information of the three-dimensional image can be normalized.
Specifically, the R channel value j, G channel value k, and B channel value l of each point in the three-dimensional image can be obtained, and each of them is then divided by 255 to obtain the normalized R channel value j', G channel value k', and B channel value l'. Since j, k, and l range from 0 to 255, the corresponding j', k', and l' range from 0 to 1.
Then, the depth information of the three-dimensional image is normalized.
Specifically, the depth value h of each point in the three-dimensional image, the minimum depth value a (the value closest to the lens plane), and the maximum depth value b (the value farthest from the lens plane) can be obtained, where a ≤ h ≤ b. The first difference, between the depth value h and the minimum depth value a, is then obtained, followed by the second difference, between the maximum depth value b and the depth value h; the first difference is finally divided by the second difference to obtain the normalized depth value. The normalized depth value ranges from 0 to 1.
After normalization, the color information and depth information are typically represented as a two-dimensional image of a fixed size, for example 256 × 256 pixels.
In addition, as shown in Fig. 6, the retrieval device for three-dimensional images may also include an extraction module 160, a generation module 170, and an establishing module 180.
The extraction module 160 is configured to extract the color information and depth information of the three-dimensional image samples.
The generation module 170 is configured to normalize the color information and depth information of the three-dimensional image samples to generate the corresponding normalized image samples.
First, the color information of the three-dimensional image samples can be normalized.
Specifically, the R channel value, G channel value, and B channel value of each point in a three-dimensional image sample can be obtained, and each of them is then divided by 255 to obtain the normalized R channel value, G channel value, and B channel value. Each normalized channel value ranges from 0 to 1.
Then, the depth information of the three-dimensional image samples can be normalized.
Specifically, the depth value of each point in a three-dimensional image sample, the minimum depth value (the value closest to the lens plane), and the maximum depth value (the value farthest from the lens plane) can be obtained. The first difference, between the depth value and the minimum depth value, is then obtained, followed by the second difference, between the maximum depth value and the depth value; the first difference is finally divided by the second difference to obtain the normalized depth value. The normalized depth value ranges from 0 to 1.
After this, the normalized image samples can be generated from the normalized color information and depth information. For convenience of calculation, the normalized image samples are typically scaled to a fixed size, for example 256 × 256 pixels.
The establishing module 180 is configured to train on the normalized image samples to establish the convolutional neural network model. Specifically, the parameters of the convolutional neural network model can be trained with a multi-task learning method to improve the recognition accuracy of the model. A task can be a classification task on the image samples, a ranking task on the image samples, and so on.
In the retrieval device of the embodiment of this application, the color information and depth information of the three-dimensional image to be retrieved are determined, the color information and depth information of the three-dimensional image are input into a pre-trained convolutional neural network model, the image features of the three-dimensional image are output by the convolutional neural network model, and the retrieval result is finally obtained based on the image features. This can effectively improve the accuracy of the retrieval results obtained for a three-dimensional image, thereby improving the user experience.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that the specific features, structures, materials, or characteristics described in connection with that embodiment or example are included in at least one embodiment or example of this application. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where there is no contradiction, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those embodiments or examples.
Although the embodiments of this application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting this application; those of ordinary skill in the art can change, modify, replace, and vary the above embodiments within the scope of this application.

Claims (10)

Application CN201610414781.8A, filed 2016-06-13 (priority date 2016-06-13): The search method and device of three-dimensional image. Status: Pending; published as CN107491459A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201610414781.8A (published as CN107491459A (en)) | 2016-06-13 | 2016-06-13 | The search method and device of three-dimensional image

Publications (1)

Publication Number | Publication Date
CN107491459A (en) | 2017-12-19

Family

ID=60642288

Family Applications (1)

Application Number | Priority Date | Filing Date | Title | Status
CN201610414781.8A | 2016-06-13 | 2016-06-13 | The search method and device of three-dimensional image | Pending (published as CN107491459A (en))

Country Status (1)

Country | Link
CN (1) | CN107491459A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109410318A (en)* | 2018-09-30 | 2019-03-01 | 先临三维科技股份有限公司 | Threedimensional model generation method, device, equipment and storage medium
CN109857895A (en)* | 2019-01-25 | 2019-06-07 | 清华大学 | Stereoscopic vision search method and system based on polycyclic road view convolutional neural networks
CN111105343A (en)* | 2018-10-26 | 2020-05-05 | Oppo广东移动通信有限公司 | Method and device for generating three-dimensional model of object
CN114641795A (en)* | 2019-12-24 | 2022-06-17 | 株式会社日立制作所 | Object search device and object search method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20050240885A1 (en)* | 2004-04-21 | 2005-10-27 | Nec Laboratories America, Inc. | Efficient SAT-based unbounded symbolic model checking
CN104572965A (en)* | 2014-12-31 | 2015-04-29 | 南京理工大学 | Search-by-image system based on convolutional neural network
CN104778441A (en)* | 2015-01-07 | 2015-07-15 | 深圳市唯特视科技有限公司 | Multi-mode face identification device and method fusing grey information and depth information
CN105224942A (en)* | 2015-07-09 | 2016-01-06 | 华南农业大学 | A kind of RGB-D image classification method and system
CN105354228A (en)* | 2015-09-30 | 2016-02-24 | 小米科技有限责任公司 | Similar image searching method and apparatus
CN105512674A (en)* | 2015-11-25 | 2016-04-20 | 中国科学院自动化研究所 | RGB-D object identification method and apparatus based on dense matching sub adaptive similarity measure
CN105654103A (en)* | 2014-11-12 | 2016-06-08 | 联想(北京)有限公司 | Image identification method and electronic equipment

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109410318A (en)* | 2018-09-30 | 2019-03-01 | 先临三维科技股份有限公司 | Threedimensional model generation method, device, equipment and storage medium
US11978157B2 (en) | 2018-09-30 | 2024-05-07 | Shining 3D Tech Co., Ltd. | Method and apparatus for generating three-dimensional model, device, and storage medium
CN111105343A (en)* | 2018-10-26 | 2020-05-05 | Oppo广东移动通信有限公司 | Method and device for generating three-dimensional model of object
CN111105343B (en)* | 2018-10-26 | 2023-06-09 | Oppo广东移动通信有限公司 | Method and device for generating three-dimensional model of object
CN109857895A (en)* | 2019-01-25 | 2019-06-07 | 清华大学 | Stereoscopic vision search method and system based on polycyclic road view convolutional neural networks
CN109857895B (en)* | 2019-01-25 | 2020-10-13 | 清华大学 | Stereo vision retrieval method and system based on multi-loop view convolutional neural network
CN114641795A (en)* | 2019-12-24 | 2022-06-17 | 株式会社日立制作所 | Object search device and object search method

Similar Documents

Publication | Title
CN107451607B (en) | A kind of personal identification method of the typical character based on deep learning
CN108629319B (en) | Image detection method and system
CN100578508C (en) | Interactive image search system and method
CN109447169A (en) | The training method of image processing method and its model, device and electronic system
CN104134071B (en) | A kind of deformable part model object detecting method based on color description
CN102542275B (en) | Automatic identification method for identification photo background and system thereof
Waldamichael et al. | Coffee disease detection using a robust HSV color‐based segmentation and transfer learning for use on smartphones
CN110490238A (en) | A kind of image processing method, device and storage medium
CN109190643A (en) | Based on the recognition methods of convolutional neural networks Chinese medicine and electronic equipment
CN106897681A (en) | A kind of remote sensing images comparative analysis method and system
CN107491459A (en) | The search method and device of three-dimensional image
Xu et al. | Recognition of weeds in wheat fields based on the fusion of RGB images and depth images
CN107330360A (en) | A kind of pedestrian's clothing colour recognition, pedestrian retrieval method and device
CN110827312A (en) | Learning method based on cooperative visual attention neural network
CN107220664A (en) | A kind of oil bottle vanning counting method based on structuring random forest
Wong et al. | Computer vision algorithm development for classification of palm fruit ripeness
CN113837174A (en) | Target object recognition method, device and computer equipment
CN110032654A (en) | A kind of supermarket's commodity input method and system based on artificial intelligence
CN107992783A (en) | Face image processing process and device
CN111080748B (en) | Automatic picture synthesizing system based on Internet
CN110083724A (en) | A kind of method for retrieving similar images, apparatus and system
CN108634241A (en) | Select halogen method and its brine-adding system
Sun et al. | An infrared-optical image registration method for industrial blower monitoring based on contour-shape descriptors
CN105404682B (en) | A kind of book retrieval method based on digital image content
CN113269195A (en) | Reading table image character recognition method and device and readable storage medium

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 1248335; Country of ref document: HK
RJ01 | Rejection of invention patent application after publication | Application publication date: 2017-12-19
