CN112529770A - Image processing method, image processing device, electronic equipment and readable storage medium - Google Patents

Image processing method, image processing device, electronic equipment and readable storage medium

Info

Publication number
CN112529770A
Authority
CN
China
Prior art keywords: input, dimensional, image, dimensional model, image processing
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011414651.7A
Other languages
Chinese (zh)
Other versions
CN112529770B (en)
Inventor
秦美洋
朱丽君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011414651.7A
Publication of CN112529770A
Application granted
Publication of CN112529770B
Status: Active
Anticipated expiration

Abstract


Figure 202011414651

Embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a readable storage medium, belonging to the technical field of image processing. The image processing method includes: acquiring first depth-of-field information of a target image; projecting a three-dimensional model of the target image in the space where the electronic device is located according to the first depth-of-field information; receiving a first input at a target position of the three-dimensional model; and, in response to the first input, adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input. The size of an object in the image can thus be accurately identified from the image's depth-of-field information, and a stereoscopic three-dimensional model projected in space. By operating on the target position of the three-dimensional model, the user can modify the value of its three-dimensional size, achieving three-dimensional editing that makes the people or objects in the image more vivid and stereoscopic, yields a refined image, and effectively improves the user's satisfaction with the processed image.


Description

Image processing method, image processing device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a readable storage medium.
Background
In the related art, methods for modifying a depth image can operate only in a plane; the modification cannot account for the distance of an object or the size of a person, so the modified depth image suffers from inconsistent proportions or insufficient refinement. Moreover, editing a depth image in a plane cannot realize stereoscopic image editing, which is detrimental to the overall three-dimensional impression and refinement of the picture.
Disclosure of Invention
The embodiments of the application provide an image processing method, an image processing apparatus, an electronic device, and a readable storage medium, which can accurately identify the depth-of-field information in an image, project a three-dimensional model in space, and, through the three-dimensional model, turn two-dimensional editing into three-dimensional editing, so that people or static objects in the image are rendered with more three-dimensional refinement.
In order to solve the above problems, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring first depth-of-field information of a target image;
projecting a three-dimensional model of a target image in a space where the electronic equipment is located according to the first depth-of-field information;
receiving a first input of a three-dimensional model target position;
and responding to the first input, and adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring first depth-of-field information of the target image;
the projection module is used for projecting a three-dimensional model of the target image in the space where the electronic equipment is located according to the first depth of field information;
the receiving module is used for receiving a first input of a target position of the three-dimensional model;
and the processing module is used for responding to the first input and adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image processing method provided in the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the image processing method as provided in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the image processing method as provided in the first aspect.
In the embodiment of the application, first depth-of-field information of a target image is acquired; a three-dimensional model of the target image is projected in the space where the electronic device is located according to the first depth-of-field information; a first input at a target position of the three-dimensional model is received; and, in response to the first input, the three-dimensional size of the target position of the three-dimensional model is adjusted according to the input parameters of the first input. The size of an object in the image can thus be accurately identified through the depth-of-field information of the image, and the three-dimensional model projected in space. By operating on the target position of the three-dimensional model, the user can modify its three-dimensional size, realizing three-dimensional editing that makes people or objects in the image more vivid and stereoscopic, yields a refined image, and effectively improves the user's satisfaction with the processed image.
Drawings
FIG. 1 shows one of the flow diagrams of an image processing method according to one embodiment of the present application;
FIG. 2 shows a second flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 shows a third flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 shows a fourth flowchart of an image processing method according to an embodiment of the present application;
FIG. 5 shows a fifth flowchart of an image processing method according to an embodiment of the present application;
FIG. 6 shows a sixth flowchart of an image processing method according to an embodiment of the present application;
FIG. 7 shows a seventh flowchart of an image processing method according to an embodiment of the present application;
FIG. 8 shows an eighth flowchart of an image processing method according to an embodiment of the present application;
FIG. 9 shows a Gaussian curve diagram of a depth image in accordance with an embodiment of the present application;
FIG. 10 shows one of the configuration block diagrams of an image processing apparatus according to an embodiment of the present application;
FIG. 11 shows a second configuration block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 12 shows a third configuration block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 13 shows a block diagram of an electronic device according to an embodiment of the present application;
FIG. 14 shows a hardware configuration block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application can be more clearly understood, the present application will be described in further detail with reference to the accompanying drawings and detailed description. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced in other ways than those described herein, and therefore the scope of the present application is not limited by the specific embodiments disclosed below.
An image processing method, an image processing apparatus, an electronic device, and a readable storage medium according to some embodiments of the present application are described below with reference to fig. 1 to 14.
In an embodiment of the present application, fig. 1 shows one of the flowcharts of an image processing method of the embodiment of the present application, including:
Step 102, acquiring first depth-of-field information of a target image;
For example, the mobile phone enters an album editing interface; the user taps an image in the album to select the target image, and the first depth-of-field information recorded when the image was shot is retrieved.
Step 104, projecting a three-dimensional model of the target image in the space where the electronic device is located according to the first depth-of-field information;
In this embodiment, the first depth-of-field information of each pixel of the target image is read, and the three-dimensional size of an object or person in the target image, that is, the coordinates of the pixels (their X, Y and Z axis values), is obtained from it. According to these coordinates, a three-dimensional model corresponding to the target image is projected into the space where the electronic device is located by multiple projection devices in different directions. The user can thus view the three-dimensional form of the object or person in the target image through the model and conveniently select the target position to be edited.
It is understood that after the three-dimensional model is projected in the space, it can be synchronously displayed on the screen of the electronic device, as shown in fig. 9, where the depth-of-field information of the target image is reflected by a Gaussian curve.
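The mapping from per-pixel depth values to (X, Y, Z) coordinates described above can be sketched with a pinhole camera model. The intrinsic parameters (`fx`, `fy`, `cx`, `cy`) and the NumPy formulation below are assumptions for illustration, not details given in the patent:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into (X, Y, Z) coordinates
    using a pinhole camera model (hypothetical intrinsics)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx  # horizontal offset scaled by depth
    y = (v - cy) * z / fy  # vertical offset scaled by depth
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# A flat 2x2 depth map at distance 1.0; the pixel at the principal
# point (cx, cy) maps to X = Y = 0.
points = depth_to_points(np.ones((2, 2)), fx=500.0, fy=500.0, cx=0.0, cy=0.0)
```

Each projection device would then render its view of this point grid from its own direction.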
Step 106, receiving a first input of a target position of the three-dimensional model;
The image processing method is suitable for an electronic device, including but not limited to a mobile terminal, a tablet computer, a notebook computer, a wearable device, and a vehicle-mounted terminal. The first input may be an operation on the electronic device by the user, or an operation on the three-dimensional model by the user as recognized by the electronic device. The first input includes, but is not limited to, a click input, a key input, a fingerprint input, a swipe input, and a press input; a key input includes, but is not limited to, a single-click, double-click, long-press, or combination input on a power key, volume key, or main menu key of the electronic device. The operation mode in the embodiments of the present application is not particularly limited and may be any realizable mode.
It should be noted that an array composed of a plurality of photosensitive elements is arranged in the space where the electronic device is located, and brightness data at different positions of the three-dimensional model can be collected through this array. When a user operation blocks the projection beams of the three-dimensional model, the brightness data collected by the photosensitive elements determine the position of the operation, and the target position on the three-dimensional model is thereby identified.
For example, the user places a finger at the position of the three-dimensional model to be edited, and the photosensitive elements sense the projection position of the finger, namely a subset of pixels in the three-dimensional model. A certain region of the image can thus be locally modified, realizing three-dimensional stereoscopic editing: the user can edit any position in the image, which satisfies local retouching needs and greatly improves retouching accuracy.
Step 108, in response to the first input, adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input.
In this embodiment, the three-dimensional size of the target position of the three-dimensional model is replaced or modified according to the input parameters of the first input at that position, namely the user's correction value for the image, and the modified image is stored. Changing two-dimensional image editing into three-dimensional editing in this way lets the electronic device perform stereoscopic editing operations, makes people or objects in the image more vivid and three-dimensional, produces a more refined image, and effectively improves the user's satisfaction with the processed image.
It is worth mentioning that after the three-dimensional size of the target position of the three-dimensional model is adjusted according to the input parameters, the projected three-dimensional model is changed accordingly to obtain the modified three-dimensional model, so that the user can check the modification effect of the target image in time.
In an embodiment of the present application, fig. 2 shows a second flowchart of an image processing method according to an embodiment of the present application. Step 108, adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input, includes:
step 202, identifying a motion starting point and a motion ending point of a first input;
step 204, determining the displacement between the motion starting point and the motion end point;
in this embodiment, the first input may be a slide input to the three-dimensional model, a motion start point and a motion end point of the slide input are identified, and a displacement between the motion start point and the motion end point is calculated. Wherein the displacement comprises a direction and a distance.
Step 206, determining the size variation corresponding to the displacement according to the corresponding relation between the preset displacement interval and the size variation under the condition that the displacement belongs to the preset displacement interval;
and step 208, adjusting the three-dimensional size according to the size change amount.
In this embodiment, the correspondence between preset displacement intervals and size variations is set in advance, that is, different displacement intervals correspond to different size variations. The displacement between the motion start point and the motion end point is compared with the preset displacement intervals; when the displacement falls within a preset interval, the size variation corresponding to that interval is taken as the correction value for the target image specified by the first input. The three-dimensional size can therefore be modified in real time according to the size variation: the user dynamically adjusts it by sliding on the three-dimensional model, scaling the image in three dimensions and perceiving the model change while editing. This prevents over-modification from spoiling the appearance of the image, reduces the difficulty of retouching, and effectively improves the overall or local stereoscopic and refined feel of the image.
For example, with a portrait image in which the user wants to adjust the nose or flattened hair, the user places a finger on the nose or hair of the three-dimensional model to determine the target position. A stretching operation can heighten the bridge of the nose or lift hair that has collapsed at the crown, while a shortening operation can narrow the wings of the nose, and so on. During the stretching or shortening operation, the target position on the image changes along with the slide input.
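The preset-displacement-interval lookup of steps 202 to 208 might look like the following sketch. The interval boundaries, the delta values, and the sign convention (stretch versus shorten decided by the Z component) are illustrative assumptions, not values from the patent:

```python
import math

# Hypothetical correspondence between slide displacement and size variation.
DISPLACEMENT_TO_DELTA = [
    ((0.0, 5.0), 1.0),    # short slide  -> small size change
    ((5.0, 20.0), 3.0),   # medium slide -> medium size change
    ((20.0, 50.0), 8.0),  # long slide   -> large size change
]

def size_delta(start, end):
    """Map the displacement between the motion start and end points to a
    signed size variation: stretching (end farther out along Z) grows the
    model, shortening shrinks it."""
    dist = math.dist(start, end)
    for (lo, hi), delta in DISPLACEMENT_TO_DELTA:
        if lo <= dist < hi:
            return delta if end[2] >= start[2] else -delta
    return 0.0  # displacement outside every preset interval: ignore

stretch = size_delta((0, 0, 0), (3, 0, 4))  # distance 5 -> +3.0
shrink = size_delta((0, 0, 5), (0, 0, 1))   # distance 4 -> -1.0
```

Returning 0.0 for out-of-range displacements matches the text's condition that the variation applies only "in the case that the displacement belongs to the preset displacement interval".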
In an embodiment of the present application, fig. 3 shows a third flowchart of an image processing method according to an embodiment of the present application, and step 202, identifying a motion start point and a motion end point of a first input includes:
step 302, capturing a projection of a first input on a three-dimensional model;
step 304, generating a motion track of a first input according to the projection;
and step 306, determining a motion starting point and a motion ending point according to the motion track.
In this embodiment, the photosensitive elements capture the positions of the projected pixels of the user's first input at the target position on the three-dimensional model, and these positions are connected to generate the motion trajectory of the first input. The motion start point and motion end point of the first input can then be identified from the trajectory to determine the displacement between the two points. This accurately identifies the size variation the user requires, so the three-dimensional size of the model can be modified accordingly, realizing three-dimensional editing and helping improve the overall or local stereoscopic and refined feel of the image.
In one embodiment of the present application, fig. 4 shows a fourth flowchart of an image processing method of an embodiment of the present application. Step 302, capturing a projection of the first input on the three-dimensional model, includes:
step 402, collecting brightness data of the three-dimensional model;
and step 404, determining projection according to the position corresponding to the brightness data smaller than or equal to the preset threshold value.
In this embodiment, the photosensitive element array acquires brightness data at different positions of the three-dimensional model. When the brightness data is less than or equal to a preset threshold, the position is blocked and may be where the user's slide operation occurs, so every position whose brightness is at or below the threshold is recorded as a projection position. The motion trajectory of the user's input can then be determined from the set of projections, the required size variation accurately identified, and the three-dimensional size of the model modified accordingly, realizing three-dimensional editing.
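The thresholding of steps 402 and 404 can be sketched as follows; the sensor-reading dictionary and the threshold value are illustrative assumptions:

```python
def occluded_positions(brightness, threshold):
    """Return the sensor positions whose collected brightness is at or
    below the preset threshold, i.e. the positions blocked by the user's
    operation (the projection of the first input)."""
    return [pos for pos, value in brightness.items() if value <= threshold]

# Brightness readings from four photosensitive elements: two are dimmed
# because a finger blocks the projection beam there.
readings = {(0, 0): 240, (0, 1): 35, (1, 0): 30, (1, 1): 250}
touched = occluded_positions(readings, threshold=50)
```

The set of positions returned per frame is what gets connected into the motion trajectory of step 304.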
In an embodiment of the present application, fig. 5 shows a fifth flowchart of an image processing method according to an embodiment of the present application. Step 108, adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input, includes:
step 502, displaying the numerical value of the three-dimensional size and the size threshold;
In this embodiment, after the three-dimensional size of the three-dimensional model is identified, its numerical value and the corresponding size threshold are displayed on the electronic device so that the user knows the current size parameters and the modifiable size range of the object or person in the target image. The user can therefore retouch the image reasonably according to the value of the three-dimensional size and the size threshold, which improves retouching quality and reduces retouching difficulty.
It should be noted that the size threshold may be the maximum and minimum values of the three-dimensional size of the three-dimensional model, or an equal-proportion adjustable range in the image set reasonably according to requirements. Taking portrait retouching as an example, for a face-slimming requirement the size threshold is the three-dimensional size of the pixel plus or minus a preset value. Displaying the size threshold therefore prompts the user with a reasonable retouching range, avoids excessive modification that would spoil the appearance of the picture, and helps reduce retouching difficulty.
And step 504, adjusting the three-dimensional size according to the target three-dimensional size value corresponding to the first input.
Wherein the first input is for inputting a target three-dimensional dimension value.
In this embodiment, the first input may be a key input on the electronic device, through which the specific target three-dimensional size value entered by the user, that is, distance information along the X, Y and Z axes of the three-dimensional model's coordinate system, is obtained. The value of the three-dimensional size at the target position of the three-dimensional model can then be replaced with the target value, realizing the electronic device's three-dimensional editing function for the target image and helping improve its overall or local stereoscopic effect and refinement.
In an embodiment of the present application, fig. 6 shows a sixth flowchart of an image processing method according to an embodiment of the present application, including:
step 602, receiving a second input to the three-dimensional model;
In this embodiment, the second input may be an operation on the electronic device by the user, or an operation on the projected three-dimensional model by the user as recognized by the electronic device. The second input includes, but is not limited to, a click input, a key input, a fingerprint input, a swipe input, and a press input; a key input includes, but is not limited to, a single-click, double-click, long-press, or combination input on a power key, volume key, or main menu key of the electronic device. The operation mode in the embodiments of the present application is not particularly limited and may be any realizable mode.
And step 604, responding to the second input, and projecting the three-dimensional model according to the rotation angle corresponding to the second input.
In this embodiment, after projecting the three-dimensional model of the target image in the space where the electronic device is located according to the first depth information, the user may control the three-dimensional model to rotate through a second input to the three-dimensional model. Therefore, a user can check the three-dimensional model in an all-around manner, the target position needing to be edited can be selected, the three-dimensional editing processing is further realized, people or objects in the image are more vivid and stereoscopic, a delicate and perfect image is obtained, and the satisfaction degree of the user on the processed image is effectively improved.
For example, the second input may be a key input on the electronic device indicating a specific value of the rotation angle, and the three-dimensional model is projected at that rotation angle to realize its rotation. Alternatively, a rotation-angle control may be displayed on the screen of the electronic device, and the user adjusts the projection angle of the three-dimensional model by tapping the control. The second input can also be a slide input on the three-dimensional model: the photosensitive element array identifies the motion trajectory of the second input, the trajectory is matched to a corresponding rotation angle, and the three-dimensional model is projected at that angle.
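Re-projecting the model at the rotation angle carried by the second input amounts to rotating every projected point. A sketch for a single point about the vertical axis (the axis choice and the tuple representation are assumptions):

```python
import math

def rotate_y(point, angle_deg):
    """Rotate one model point about the Y (vertical) axis by the angle
    carried by the second input; applying this to all points re-projects
    the three-dimensional model at the new orientation."""
    x, y, z = point
    a = math.radians(angle_deg)
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

# Rotating (1, 0, 0) by 90 degrees about Y moves it onto the -Z axis.
p = rotate_y((1.0, 0.0, 0.0), 90.0)
```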
In an embodiment of the present application, fig. 7 shows a seventh flowchart of an image processing method according to an embodiment of the present application, where, before step 102 of acquiring the first depth-of-field information of the target image, the method further includes:
step 702, displaying at least one depth image;
step 704, receiving a third input for at least one depth image;
in this embodiment, the third input of the user to the at least one depth image may be an input of a finger of the user on the depth image, an input of a touch device such as a stylus on the depth image, or the like.
Step 706, in response to a third input, determining a target image from the at least one depth image.
In this embodiment, at least one depth image is displayed on the screen of the electronic device, and the user may select a target image to be modified through a third input to the at least one depth image.
It is understood that a response function for triggering selection by the third input is defined for the electronic device in advance, indicating that there is at least one rule triggering selection of the target image. When a third input to the electronic device is received, it is matched against the rules for selecting the target image, and when the third input satisfies a rule, the operation of determining the target image from the at least one depth image is triggered in response. For example, if the rule is defined as double-clicking the depth image, the depth image is taken as the target image when the user double-clicks it. Of course, the rule may also be clicking the depth image and a confirm control, pressing the depth image for a specified time, and so on; the embodiment of the present application is not particularly limited.
Specifically, taking selecting pictures from an album as an example, an album interface, that is, a thumbnail display interface of at least one depth picture, is displayed on the desktop of the electronic device. The user clicks a thumbnail to select the depth picture, after which a selected identifier (a check mark) is displayed on the picture's thumbnail. In addition, the user can tap a thumbnail to enter a large-picture browsing mode so that the picture can be viewed clearly.
In an embodiment of the present application, fig. 8 shows an eighth flowchart of an image processing method according to an embodiment of the present application, where, before step 702 of displaying at least one depth image, the method further includes:
step 802, receiving a fourth input to the electronic device;
step 804, responding to the fourth input, and starting a depth camera of the electronic equipment;
The depth camera includes a structured light camera and a general camera.
Step 806, collecting structured light coding information of a depth camera of the electronic device;
in this embodiment, when the electronic device receives the fourth input, the depth camera is turned on to perform shooting by the depth camera. Wherein, the degree of depth camera includes structure light camera and general camera. The structured light camera may include a structured light projector and a structured light sensor. The structured light camera can project light spots, light slits, gratings, grids or stripes to an object to be detected by using a structured light projector, that is, the structured light can also be generated by using coherent light, grating light, diffraction light and the like. And then, the structured light sensor is adopted to acquire the structured light coding information of the measured object, for example, the coded pattern is modulated by the surface of the measured object.
Specifically, the structured light may be Infrared (IR) light.
By way of specific example, the projector includes a flash lamp or a continuous light source.
Step 808, determining second depth-of-field information according to the structured light coding information;
In this embodiment, because light with a given structure falls on regions of the object at different depths, the structure in the acquired image deviates from the original light structure; this structural change is converted into the second depth-of-field information by performing a ranging operation on the structured light coding information.
For example, the structured light may be encoded spatially, such as with De Bruijn sequence coding, or temporally, such as with binary coding or Gray coding. The spatial coding scheme may project only a single piece of preset structured light coding information, e.g. a single frame of a structured light coded pattern, while the temporal coding scheme may project several different pieces of preset structured light coding information, e.g. multiple frames of different structured light coded patterns.
Specifically, for the spatial coding mode, the collected structured light coding information is decoded and compared with the preset structured light coding information to obtain the matching relationship between the two, and the second depth-of-field information is calculated with the triangulation ranging principle. For the temporal coding mode, the structured light sensor collects multiple pieces of structured light coding information modulated by the surface of the measured object, the pieces are decoded, and the second depth-of-field information is likewise calculated with the triangulation ranging principle.
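Both coding modes end in the same triangulation step. A sketch of the classic relation depth = baseline x focal length / disparity; the baseline, focal length, and disparity figures below are illustrative, not values from the patent:

```python
def triangulate_depth(baseline_mm, focal_px, disparity_px):
    """Triangulation ranging: the shift (disparity) of a structured-light
    code element between its projected and observed positions yields the
    depth of the surface that modulated it."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_mm * focal_px / disparity_px

# 50 mm projector-sensor baseline, 600 px focal length, 12 px code shift:
depth_mm = triangulate_depth(50.0, 600.0, 12.0)  # 2500.0 mm, i.e. 2.5 m
```

Larger disparities correspond to nearer surfaces, which is why the decoded matching relationship between projected and observed patterns is computed first.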
Step 810, performing three-dimensional reconstruction using the second depth-of-field information to obtain at least one depth image.
In this embodiment, after the second depth-of-field information is obtained, the three-dimensional coordinates (X, Y, and Z) of each pixel are generated from that pixel's second depth-of-field information, and three-dimensional reconstruction is then performed on these coordinates to obtain a depth image.
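Back-projecting the per-pixel depth to three-dimensional coordinates can be sketched as below; the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) and the depth-map layout are assumed parameters of this example, not values from the disclosure:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project each pixel (u, v) carrying depth Z to camera coordinates:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.  Returns a list of (X, Y, Z)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # 0 marks pixels with no depth reading
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

The resulting point set is what the later steps treat as the three-dimensional model to be projected and edited.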
In one embodiment of the present application, as shown in fig. 10, an image processing apparatus 900 includes: an acquiring module 902, configured to acquire first depth-of-field information of a target image; a projection module 904, configured to project a three-dimensional model of the target image in the space where the electronic device is located according to the first depth-of-field information; a receiving module 906, configured to receive a first input on a target position of the three-dimensional model; and a processing module 908, configured to, in response to the first input, adjust a three-dimensional size of the target position of the three-dimensional model according to input parameters of the first input.
In this embodiment, the size of an object in the image is accurately identified from the depth-of-field information of the image, and a three-dimensional model is projected in space. By operating on the target position of the three-dimensional model, the user can modify the value of the model's three-dimensional size, realizing three-dimensional editing. People or objects in the image therefore appear more vivid and stereoscopic, a refined image is obtained, and the user's satisfaction with the processed image is effectively improved.
Optionally, as shown in fig. 11, the image processing apparatus 900 further includes: an identification module 910, configured to identify a motion start point and a motion end point of the first input; and a determining module 912, configured to determine a displacement between the motion start point and the motion end point and, in the case that the displacement belongs to a preset displacement interval, determine the size variation corresponding to the displacement according to the correspondence between preset displacement intervals and size variations. The processing module 908 is further configured to adjust the three-dimensional size according to the size variation.
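The interval-to-variation lookup described above can be sketched as follows; the concrete intervals and size deltas are invented for illustration and are not values from the patent:

```python
# Hypothetical correspondence between preset displacement intervals (pixels)
# and size variations (millimetres) -- illustrative numbers only.
INTERVALS = [((0, 10), 1.0), ((10, 50), 5.0), ((50, 200), 20.0)]

def size_delta(start, end):
    """Displacement between the motion start and end points, mapped to a size change.
    Returns None when the displacement falls outside every preset interval."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    displacement = (dx * dx + dy * dy) ** 0.5
    for (lo, hi), delta in INTERVALS:
        if lo <= displacement < hi:
            return delta
    return None
```

A short drag (displacement 5 px) would map to the smallest size change, while a long drag (displacement 50 px or more) maps to the largest.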
Optionally, the identification module 910 is specifically configured to: capture a projection of the first input on the three-dimensional model; generate a motion track of the first input according to the projection; and determine the motion start point and the motion end point according to the motion track.
Optionally, the identification module 910 is specifically configured to: acquire brightness data of the three-dimensional model; and determine the projection according to the positions whose brightness data is less than or equal to a preset threshold.
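A minimal sketch of the brightness-threshold projection capture follows; the grid representation of the brightness data and the use of first and last track samples as endpoints are assumptions of this example, not the patent's implementation:

```python
def projection_pixels(brightness, threshold):
    """Pixels whose brightness drops to or below the threshold are treated as the
    shadow the user's hand casts on the projected model, i.e. the input's projection."""
    return [(v, u) for v, row in enumerate(brightness)
            for u, b in enumerate(row) if b <= threshold]

def motion_endpoints(track):
    """Given per-frame projection positions as a motion track, take the first and
    last samples as the motion start point and motion end point."""
    return track[0], track[-1]
```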
Optionally, as shown in fig. 12, the image processing apparatus 900 further includes a display module 916, configured to display the numerical value of the three-dimensional size and a size threshold. The processing module 908 is further configured to adjust the three-dimensional size according to a target three-dimensional size value corresponding to the first input, the first input being used to input the target three-dimensional size value.
Optionally, the receiving module 906 is further configured to receive a second input on the three-dimensional model, and the projection module 904 is further configured to, in response to the second input, project the three-dimensional model at the rotation angle corresponding to the second input.
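Re-projecting the model at the rotation angle corresponding to the second input can be sketched as a rotation of the model's point set; rotating about the vertical axis is an assumption of this example, not a constraint stated in the patent:

```python
import math

def rotate_y(points, angle_deg):
    """Rotate a point cloud about the vertical (Y) axis by the angle derived from
    the second input, so the model can be re-projected at the new orientation."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]
```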
Optionally, the display module 916 is further configured to display at least one depth image; the receiving module 906 is further configured to receive a third input for the at least one depth image; and the acquiring module 902 is further configured to determine a target image from the at least one depth image in response to the third input.
Optionally, the receiving module 906 is further configured to receive a fourth input on the electronic device. The image processing apparatus 900 further includes: a starting module (not shown in the figure), configured to turn on a depth camera of the electronic device in response to the fourth input; and an acquisition module (not shown in the figure), configured to collect structured light coding information from the depth camera. The acquiring module 902 is further configured to determine second depth-of-field information according to the structured light coding information, and to perform three-dimensional reconstruction using the second depth-of-field information to obtain at least one depth image. The depth camera includes a structured light camera and a general camera.
In this embodiment, when each module of the image processing apparatus 900 performs its function, the steps of the image processing method in any of the above embodiments are implemented, so the image processing apparatus also has all the beneficial effects of the image processing method in any of the above embodiments, which are not described herein again.
The image processing apparatus in the embodiment of the present application may be a standalone apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA); the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine. The embodiments of the present application are not specifically limited in this respect.
The management device of the application program in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
In one embodiment of the present application, as shown in fig. 13, an electronic device 1000 is provided, including: a processor 1004, a memory 1002, and a program or instructions stored in the memory 1002 and executable on the processor 1004. When executed by the processor 1004, the program or instructions implement the steps of the image processing method provided in any of the above embodiments, so the electronic device 1000 includes all the advantages of the image processing method provided in any of the above embodiments, which are not described herein again.
Fig. 14 is a block diagram of the hardware structure of an electronic device 1200 implementing an embodiment of the present application. The electronic device 1200 includes, but is not limited to: a radio frequency unit 1202, a network module 1204, an audio output unit 1206, an input unit 1208, a sensor 1210, a display unit 1212, a user input unit 1214, an interface unit 1216, a memory 1218, a processor 1220, and the like.
Those skilled in the art will appreciate that the electronic device 1200 may further comprise a power supply (e.g., a battery) for supplying power to the various components; the power supply may be logically connected to the processor 1220 via a power management system, which manages charging, discharging, and power consumption. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine some components, or arrange components differently. In the embodiment of the present application, the electronic device includes, but is not limited to, a mobile terminal, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
The processor 1220 is configured to obtain first depth-of-field information of the target image; the display unit 1212 is configured to project a three-dimensional model of the target image in the space where the electronic device is located according to the first depth-of-field information; the user input unit 1214 is configured to receive a first input on a target position of the three-dimensional model; and the processor 1220 is configured to, in response to the first input, adjust a three-dimensional size of the target position of the three-dimensional model according to input parameters of the first input.
The processor 1220 is further configured to identify a motion start point and a motion end point of the first input; determine a displacement between the motion start point and the motion end point; in the case that the displacement belongs to a preset displacement interval, determine the size variation corresponding to the displacement according to the correspondence between preset displacement intervals and size variations; and adjust the three-dimensional size according to the size variation.
Further, the processor 1220 is also configured to capture a projection of the first input on the three-dimensional model; generate a motion track of the first input according to the projection; and determine the motion start point and the motion end point according to the motion track.
Further, the processor 1220 is also configured to acquire brightness data of the three-dimensional model and determine the projection according to the positions whose brightness data is less than or equal to a preset threshold.
Further, the display unit 1212 is also configured to display the numerical value of the three-dimensional size and a size threshold; the processor 1220 is further configured to adjust the three-dimensional size according to a target three-dimensional size value corresponding to the first input, the first input being used to input the target three-dimensional size value.
Further, the user input unit 1214 is also configured to receive a second input on the three-dimensional model, and the display unit 1212 is further configured to, in response to the second input, project the three-dimensional model at the rotation angle corresponding to the second input.
Further, the display unit 1212 is also configured to display at least one depth image; the user input unit 1214 is further configured to receive a third input for the at least one depth image; and the processor 1220 is further configured to determine a target image from the at least one depth image in response to the third input.
Further, the user input unit 1214 is also configured to receive a fourth input on the electronic device; the processor 1220 is further configured to turn on a depth camera of the electronic device in response to the fourth input; collect structured light coding information from the depth camera; determine second depth-of-field information according to the structured light coding information; and perform three-dimensional reconstruction using the second depth-of-field information to obtain at least one depth image.
It should be understood that, in the embodiment of the present application, the radio frequency unit 1202 may be used to transmit and receive information, or to transmit and receive signals during a call; in particular, it receives downlink data from a base station or sends uplink data to a base station. The radio frequency unit 1202 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The network module 1204 provides wireless broadband internet access to the user, for example helping the user send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 1206 may convert audio data received by the radio frequency unit 1202 or the network module 1204, or stored in the memory 1218, into an audio signal and output it as sound. Moreover, the audio output unit 1206 may provide audio output related to a specific function performed by the electronic device 1200 (e.g., a call signal reception sound or a message reception sound). The audio output unit 1206 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1208 is used to receive audio or video signals. The input unit 1208 may include a graphics processing unit (GPU) 5082 and a microphone 5084. The graphics processor 5082 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 1212, stored in the memory 1218 (or another storage medium), or transmitted via the radio frequency unit 1202 or the network module 1204. The microphone 5084 may receive sound and process it into audio data; in the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 1202 and output.
The electronic device 1200 also includes at least one sensor 1210, such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, a light sensor, or a motion sensor.
The display unit 1212 is used to display information input by the user or information provided to the user. The display unit 1212 may include a display panel 5122, which may be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like.
The user input unit 1214 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 1214 includes a touch panel 5142 and other input devices 5144. The touch panel 5142, also referred to as a touch screen, collects the user's touch operations on or near it. The touch panel 5142 may include a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 1220, and it also receives and executes commands sent by the processor 1220. The other input devices 5144 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5142 can be overlaid on the display panel 5122. When the touch panel 5142 detects a touch operation on or near it, it transmits the operation to the processor 1220 to determine the type of the touch event, and the processor 1220 then provides a corresponding visual output on the display panel 5122 according to the type of the touch event. The touch panel 5142 and the display panel 5122 can be provided as two separate components or integrated into one component.
The interface unit 1216 is an interface for connecting an external device to the electronic device 1200. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1216 may be used to receive input (e.g., data information or power) from an external device and transmit the received input to one or more elements within the electronic device 1200, or to transmit data between the electronic device 1200 and the external device.
The memory 1218 may be used to store application programs as well as various data. The memory 1218 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as a sound playing function and an image playing function), while the data storage area may store data created according to the use of the mobile terminal (such as audio data and a phonebook). In addition, the memory 1218 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1220 performs the various functions of the electronic device 1200 and processes data by running or executing the applications and/or modules stored in the memory 1218 and by invoking the data stored therein, thereby monitoring the electronic device 1200 as a whole. The processor 1220 may include one or more processing units; the processor 1220 may integrate an application processor, which mainly handles the operating system, the user interface, and application programs, and a modem processor, which mainly handles wireless communication.
In an embodiment of the present application, a readable storage medium is provided, on which a program or instructions are stored, which when executed by a processor implement the steps of the image processing method as provided in any of the above embodiments.
In this embodiment, the readable storage medium can implement each process of the image processing method provided in the embodiment of the present application, and can achieve the same technical effect, and is not described herein again to avoid repetition.
The processor is the processor in the communication device of the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and the same technical effect can be achieved.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. An image processing method, comprising:
acquiring first depth-of-field information of a target image;
projecting a three-dimensional model of the target image in the space where the electronic device is located according to the first depth-of-field information;
receiving a first input on a target position of the three-dimensional model;
in response to the first input, adjusting a three-dimensional size of the target position of the three-dimensional model according to input parameters of the first input.
2. The image processing method according to claim 1, wherein the adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input comprises:
identifying a motion start point and a motion end point of the first input;
determining a displacement between the motion start point and the motion end point;
in the case that the displacement belongs to a preset displacement interval, determining the size variation corresponding to the displacement according to the correspondence between preset displacement intervals and size variations;
adjusting the three-dimensional size according to the size variation.
3. The image processing method according to claim 2, wherein the identifying the sliding start point and the sliding end point of the first input comprises:
capturing a projection of the first input on the three-dimensional model;
generating a motion track of the first input according to the projection;
determining the motion start point and the motion end point according to the motion track.
4. The image processing method according to claim 3, wherein the capturing the projection of the first input on the three-dimensional model comprises:
acquiring brightness data of the three-dimensional model;
determining the projection according to the position corresponding to the brightness data that is less than or equal to a preset threshold.
5. The image processing method according to claim 1, wherein the adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input comprises:
displaying a numerical value of the three-dimensional size and a size threshold;
adjusting the three-dimensional size according to a target three-dimensional size value corresponding to the first input;
wherein the first input is used to input the target three-dimensional size value.
6. An image processing apparatus, comprising:
an acquiring module, configured to acquire first depth-of-field information of a target image;
a projection module, configured to project a three-dimensional model of the target image in the space where the electronic device is located according to the first depth-of-field information;
a receiving module, configured to receive a first input on a target position of the three-dimensional model;
a processing module, configured to, in response to the first input, adjust a three-dimensional size of the target position of the three-dimensional model according to input parameters of the first input.
7. The image processing apparatus according to claim 6, further comprising:
an identification module, configured to identify a motion start point and a motion end point of the first input;
a determining module, configured to determine a displacement between the motion start point and the motion end point and, in the case that the displacement belongs to a preset displacement interval, determine the size variation corresponding to the displacement according to the correspondence between preset displacement intervals and size variations;
wherein the processing module is further configured to adjust the three-dimensional size according to the size variation.
8. The image processing apparatus according to claim 7, wherein the identification module is specifically configured to:
capture a projection of the first input on the three-dimensional model;
generate a motion track of the first input according to the projection;
determine the motion start point and the motion end point according to the motion track.
9. The image processing apparatus according to claim 8, wherein the identification module is specifically configured to:
acquire brightness data of the three-dimensional model;
determine the projection according to the position corresponding to the brightness data that is less than or equal to a preset threshold.
10. The image processing apparatus according to claim 6, further comprising:
a display module, configured to display a numerical value of the three-dimensional size and a size threshold;
wherein the processing module is further configured to adjust the three-dimensional size according to a target three-dimensional size value corresponding to the first input, the first input being used to input the target three-dimensional size value.
11. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
12. A readable storage medium, storing a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 5.
CN202011414651.7A · Priority date: 2020-12-07 · Filing date: 2020-12-07 · Title: Image processing methods, devices, electronic equipment and readable storage media · Status: Active · Granted publication: CN112529770B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011414651.7A (CN112529770B) | 2020-12-07 | 2020-12-07 | Image processing methods, devices, electronic equipment and readable storage media

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011414651.7A (CN112529770B) | 2020-12-07 | 2020-12-07 | Image processing methods, devices, electronic equipment and readable storage media

Publications (2)

Publication Number | Publication Date
CN112529770A | 2021-03-19
CN112529770B (en) | 2024-01-26

Family

ID=74997819

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011414651.7A | Active | CN112529770B (en)

Country Status (1)

Country | Link
CN | CN112529770B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113487727A (en)* | 2021-07-14 | 2021-10-08 | 广西民族大学 | Three-dimensional modeling system, device and method
CN113869215A (en)* | 2021-09-28 | 2021-12-31 | 重庆中科云从科技有限公司 | Method, system, equipment and medium for marking key points of vehicle

Citations (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102055991A (en)* | 2009-10-27 | 2011-05-11 | 深圳Tcl新技术有限公司 | Conversion method and conversion device for converting two-dimensional image into three-dimensional image
EP2347714A1 (en)* | 2010-01-26 | 2011-07-27 | Medison Co., Ltd. | Performing image process and size measurement upon a three-dimensional ultrasound image in an ultrasound system
US20120313946A1 (en)* | 2011-06-13 | 2012-12-13 | Nobuyuki Nakamura | Display switching apparatus, display switching method, and display switching program
CN107393017A (en)* | 2017-08-11 | 2017-11-24 | 北京铂石空间科技有限公司 | Image processing method, device, electronic equipment and storage medium
CN108241434A (en)* | 2018-01-03 | 2018-07-03 | 广东欧珀移动通信有限公司 | Human-computer interaction method, device, medium and mobile terminal based on depth of field information
CN108550182A (en)* | 2018-03-15 | 2018-09-18 | 维沃移动通信有限公司 | A 3D modeling method and terminal
CN109727191A (en)* | 2018-12-26 | 2019-05-07 | 维沃移动通信有限公司 | An image processing method and mobile terminal
CN110908517A (en)* | 2019-11-29 | 2020-03-24 | 维沃移动通信有限公司 | Image editing method, apparatus, electronic device and medium
CN111369681A (en)* | 2020-03-02 | 2020-07-03 | 腾讯科技(深圳)有限公司 | Three-dimensional model reconstruction method, device, equipment and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102055991A (en)* | 2009-10-27 | 2011-05-11 | 深圳Tcl新技术有限公司 | Conversion method and conversion device for converting two-dimensional image into three-dimensional image
EP2347714A1 (en)* | 2010-01-26 | 2011-07-27 | Medison Co., Ltd. | Performing image process and size measurement upon a three-dimensional ultrasound image in an ultrasound system
US20110184290A1 (en)* | 2010-01-26 | 2011-07-28 | Medison Co., Ltd. | Performing image process and size measurement upon a three-dimensional ultrasound image in an ultrasound system
US20120313946A1 (en)* | 2011-06-13 | 2012-12-13 | Nobuyuki Nakamura | Display switching apparatus, display switching method, and display switching program
CN107393017A (en)* | 2017-08-11 | 2017-11-24 | 北京铂石空间科技有限公司 | Image processing method, device, electronic equipment and storage medium
CN108241434A (en)* | 2018-01-03 | 2018-07-03 | 广东欧珀移动通信有限公司 | Human-computer interaction method, device, medium and mobile terminal based on depth of field information
CN108550182A (en)* | 2018-03-15 | 2018-09-18 | 维沃移动通信有限公司 | A 3D modeling method and terminal
CN109727191A (en)* | 2018-12-26 | 2019-05-07 | 维沃移动通信有限公司 | An image processing method and mobile terminal
CN110908517A (en)* | 2019-11-29 | 2020-03-24 | 维沃移动通信有限公司 | Image editing method, apparatus, electronic device and medium
CN111369681A (en)* | 2020-03-02 | 2020-07-03 | 腾讯科技(深圳)有限公司 | Three-dimensional model reconstruction method, device, equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113487727A (en) * | 2021-07-14 | 2021-10-08 | Guangxi Minzu University | Three-dimensional modeling system, device and method
CN113487727B (en) * | 2021-07-14 | 2022-09-02 | Guangxi Minzu University | Three-dimensional modeling system, device and method
CN113869215A (en) * | 2021-09-28 | 2021-12-31 | Chongqing Zhongke Yuncong Technology Co., Ltd. | Method, system, equipment and medium for marking key points of vehicle

Also Published As

Publication number | Publication date
CN112529770B (en) | 2024-01-26

Similar Documents

Publication number | Title
CN111417028B (en) | Information processing method, information processing device, storage medium and electronic equipment
CN110456907A (en) | Virtual screen control method, device, terminal equipment and storage medium
CN108108114B (en) | A kind of thumbnail display control method and mobile terminal
KR102114377B1 (en) | Method for previewing images captured by electronic device and the electronic device therefor
CN107592466B (en) | Photographing method and mobile terminal
CN109348135A (en) | Photographing method and device, storage medium and terminal equipment
CN109510940B (en) | Image display method and terminal equipment
CN108712603B (en) | An image processing method and mobile terminal
KR20130088104A (en) | Mobile apparatus and method for providing touch-free interface
CN107835367A (en) | A kind of image processing method, device and mobile terminal
JP2015526927A (en) | Context-driven adjustment of camera parameters
WO2019174628A1 (en) | Photographing method and mobile terminal
CN112669381B (en) | Pose determination method and device, electronic equipment and storage medium
CN107277481A (en) | A kind of image processing method and mobile terminal
CN111045511A (en) | Gesture-based control method and terminal equipment
CN112581571B (en) | Control method and device for virtual image model, electronic equipment and storage medium
CN108347558A (en) | A kind of method, apparatus and mobile terminal of image optimization
CN111432123A (en) | Image processing method and device
CN108683850A (en) | A shooting prompt method and mobile terminal
CN112529770A (en) | Image processing method, image processing device, electronic equipment and readable storage medium
CN110908517B (en) | Image editing method, device, electronic device and medium
CN111083374B (en) | Filter adding method and electronic equipment
CN108989666A (en) | Image pickup method, device, mobile terminal and computer-readable storage medium
CN108322639A (en) | A kind of method, apparatus and mobile terminal of image processing
CN109639981B (en) | Image shooting method and mobile terminal

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
