Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Reference herein to "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
For convenience of understanding, terms referred to in the embodiments of the present application are explained below.
Depth of field: the range of distances in front of and behind the subject, measured from the front edge of the camera lens or other imaging device, within which a sharp image can be obtained. In other words, after focusing is completed, a sharp image appears within a certain range in front of and behind the focal point, and this range of distances is called the depth of field.
Focusing: focusing, also called focus adjustment, is the process by which the focusing mechanism of the camera changes the object distance and the image distance so that the photographed object is imaged sharply. Generally, a digital camera has a plurality of focusing modes, such as an auto-focusing mode, a manual focusing mode and a multi-focusing mode; the focusing mode related to the embodiments of the present application is the auto-focusing mode.
Blurring (a digital photography technique): in general, blurring refers to blurring the background, that is, making the depth of field shallow so that focus is concentrated on the subject while objects other than the subject are blurred, where the subject may be the object selected by the shooting user in the finder frame. Background blurring is directly related to the depth of field: when the subject is clear and the background is fuzzy, the depth of field is said to be shallow; when background blurring is not obvious, i.e. the background is as clear as the subject, the depth of field is said to be deep.
The image processing method provided by the embodiments of the present application is used in a terminal provided with a single camera. The terminal can be an electronic device such as a mobile phone, a tablet computer, a laptop computer or a desktop computer, and the camera can be a front camera or a rear camera of the terminal.
Referring to fig. 1, a flowchart of an image processing method for a terminal provided with a single camera according to an exemplary embodiment of the present application is shown, where the method includes:
step 101, in the process of framing, object distance information corresponding to each scene in a framing picture is obtained.
The view-finding picture is the picture within the shooting range of the single camera of the terminal, and it changes as the angle of the terminal changes within that shooting range. For the terminal, each view-finding picture is composed of pixel points.
In a possible implementation manner, when the terminal stays on a certain view-finding picture, the terminal focuses on each pixel point in the view-finding picture. When a pixel point is successfully focused, the terminal determines the object distance information at the moment of successful focusing. The object distance information refers to the distance from the scene corresponding to each pixel point to the optical center of the lens assembly when that pixel point is successfully focused.
Optionally, a scene is composed of pixel points having the same object distance information, or of pixel points whose object distance differences do not exceed an object distance difference threshold. When a scene is composed of pixel points whose differences do not exceed the object distance difference threshold, the object distance information of each scene acquired by the terminal is the average object distance information of the pixel points in that scene.
In an illustrative example, the object distance information is accurate to the centimeter and the view-finding picture consists of 100 pixel points. The object distance information of pixel points 1 to 10 is 1.00 m, that of pixel points 11 to 50 is 1.50 m, that of pixel points 51 to 70 is 1.55 m, and that of pixel points 71 to 100 is 2.00 m. With the object distance difference threshold set to 10 cm, the view-finding picture is composed of 3 scenes: scene 1 consists of pixel points 1 to 10, and the object distance information corresponding to scene 1 is 1.00 m; scene 2 consists of pixel points 11 to 70, and the object distance information corresponding to scene 2 is 1.52 m (i.e. the average object distance information of pixel points 11 to 70); scene 3 consists of pixel points 71 to 100, and the object distance information corresponding to scene 3 is 2.00 m.
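The grouping in this example can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration only (function and variable names are not part of the embodiments): consecutive pixel points are merged into one scene while their object distances stay within the threshold, and each scene is assigned the average object distance of its pixel points.

```python
def group_pixels_into_scenes(object_distances, threshold_m=0.10):
    """Group per-pixel object distances into scenes.

    object_distances: list of object distances in meters, ordered by
    pixel index (pixel point 1 first). Consecutive pixel points are
    merged into the current scene while their distance stays within
    threshold_m of the scene's first pixel point; each scene's object
    distance information is the average over its pixel points.
    Returns a list of ((first_pixel, last_pixel), average_distance).
    """
    scenes = []
    start = 0
    for i in range(1, len(object_distances) + 1):
        end_of_scene = (i == len(object_distances) or
                        abs(object_distances[i] - object_distances[start]) > threshold_m)
        if end_of_scene:
            members = object_distances[start:i]
            scenes.append(((start + 1, i), sum(members) / len(members)))
            start = i
    return scenes


# Distances from the illustrative example above.
distances = [1.00] * 10 + [1.50] * 40 + [1.55] * 20 + [2.00] * 30
for (first, last), avg in group_pixels_into_scenes(distances):
    print(f"pixel points {first}-{last}: average object distance {avg:.2f} m")
```

Running this sketch reproduces the three scenes of the example: pixel points 1 to 10 at 1.00 m, 11 to 70 at about 1.52 m, and 71 to 100 at 2.00 m.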
Optionally, in the framing process, the terminal acquires object distance information corresponding to each scene in the framing picture, and stores the object distance information corresponding to each scene in the terminal.
Step 102, when a shooting instruction is received, storing the object distance information in association with the shot image.
When the terminal receives the shooting instruction, the single camera provided in the terminal acquires each pixel point in the view-finding picture, and the image formed by these pixel points is stored as the shot image.
In a possible embodiment, the pixel points in the shot image that have the same object distance information form a scene, and the shot image is composed of a plurality of scenes. When storing the shot image, the terminal acquires and stores the object distance information corresponding to each scene in the shot image; when storing the object distance information, the terminal associates each piece of object distance information one by one with the scene it corresponds to in the shot image, so that the object distance information corresponding to each scene is stored in association with the shot image. In an illustrative example, as shown in Table 1, the object distance information is accurate to the centimeter, and after receiving the shooting instruction the terminal obtains a shot image composed of 100 pixel points. When storing the shot image, the terminal stores each piece of object distance information and the scene corresponding to it in association in the form of a list; for example, the pixel points at an object distance of 1.00 m form scene 1, and correspondingly, the object distance information 1.00 m and scene 1 are stored in the list in association.
Table 1
| Object distance information | Scene |
| --- | --- |
| 1.00 m | Scene 1 (pixel points 1 to 10) |
| 1.50 m | Scene 2 (pixel points 11 to 50) |
| 1.55 m | Scene 3 (pixel points 51 to 70) |
| 2.00 m | Scene 4 (pixel points 71 to 100) |
Therefore, when post-processing of the shot image is required, this associated storage enables the shot image to be post-processed quickly.
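Purely as an illustration (the embodiments do not specify a storage format), the association in Table 1 could be kept as a small record saved together with the shot image; the file names and field names below are hypothetical.

```python
import json

def save_object_distance_association(image_path, scene_table, sidecar_path):
    """Store the scene / object-distance association alongside the shot image.

    scene_table: entries mirroring Table 1, each giving a scene number,
    its object distance in meters and the pixel-point range it covers.
    Here the association is written to a JSON sidecar file; a real
    terminal might instead embed it in image metadata or a database.
    """
    with open(sidecar_path, "w") as f:
        json.dump({"image": image_path, "scenes": scene_table}, f, indent=2)


scene_table = [
    {"scene": 1, "object_distance_m": 1.00, "pixel_points": [1, 10]},
    {"scene": 2, "object_distance_m": 1.50, "pixel_points": [11, 50]},
    {"scene": 3, "object_distance_m": 1.55, "pixel_points": [51, 70]},
    {"scene": 4, "object_distance_m": 2.00, "pixel_points": [71, 100]},
]
save_object_distance_association("captured.jpg", scene_table, "captured.json")
```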
Step 103, receiving the area selection operation of the shot image.
The region selection operation is used to determine a focus region in the captured image. Optionally, the area selection operation may be a touch operation on any image area in the captured image, where the touch operation may be a single-click operation, a double-click operation, a press operation, a long-press operation, or the like; the specific form of the area selection operation is not limited in the embodiments of the present application.
In one possible implementation, the terminal determines a focus area in the captured image according to the received area selection operation.
In a possible implementation mode, the user clicks the shot image with a finger, where the contact area between the finger and the shot image is the click area. The terminal acquires the object distance information corresponding to each pixel point in the click area, and determines, according to the object distance information, the scene containing the click area, where the scene is composed of pixel points.
In an illustrative example, the captured image includes three persons (person 1, person 2 and person 3), where person 1 and person 2 stand side by side in the captured image (i.e. the object distance information of person 1 and person 2 is the same), and person 3 stands two meters behind persons 1 and 2 (i.e. the object distance information of person 3 differs from that of persons 1 and 2). When the user clicks on a local area of person 1, the terminal obtains a focus area including person 1 and person 2 according to the user's click area. The specific process of obtaining the focus area is as follows: the terminal obtains the target object distance information of the user's click area, obtains, from all the pre-stored object distance information of the shot image, the pixel points whose object distance information is consistent with the target object distance information, and finally determines all such pixel points as the focus area.
Optionally, when only one scene containing the click area is determined, that scene is determined as the focus area of the shot image; when at least two scenes containing the click area are determined, the terminal compares the number of click-area pixel points contained in each scene and determines the scene containing the largest number of click-area pixel points as the focus area.
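As a minimal sketch of this selection logic (the data layout is hypothetical, and the per-pixel object distances are assumed to have been stored as described in step 102): the object distance that covers the most pixel points inside the click area is taken as the target, and every pixel point with that object distance forms the focus area.

```python
from collections import Counter

def determine_focus_area(click_pixels, pixel_object_distance):
    """Determine the focus area from a click area.

    click_pixels: iterable of pixel indices covered by the touch.
    pixel_object_distance: dict mapping pixel index -> stored object
    distance (meters) for the whole shot image.
    The object distance that occurs most often inside the click area is
    taken as the target object distance (which also resolves a click that
    straddles two scenes), and all pixel points of the image whose object
    distance equals the target are returned as the focus area.
    """
    counts = Counter(pixel_object_distance[p] for p in click_pixels)
    target_distance, _ = counts.most_common(1)[0]
    return {p for p, d in pixel_object_distance.items() if d == target_distance}


# Hypothetical usage: 100 pixel points with stored distances, click on pixels 12-14.
distances = {p: (1.0 if p <= 10 else 1.5 if p <= 70 else 2.0) for p in range(1, 101)}
print(sorted(determine_focus_area({12, 13, 14}, distances)))  # pixel points 11..70
```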
In one possible implementation mode, when the terminal receives multiple area selection operations on the shot image, multiple focus areas of the shot image are determined, and the subsequent steps are performed for each focus area in parallel.
Step 104, blurring the shot image according to the focus area and the object distance information.
In a possible implementation mode, the shot image is composed of a plurality of scenes. In order to keep the focus area of the shot image sharp during processing, the terminal obtains, according to the object distance information of the focus area, the scenes other than the focus area in the shot image, and performs blurring processing on those scenes.
Here, sharpness is related to the distance from each pixel point to the optical center of the lens assembly. Since that distance differs between pixel points, their sharpness also differs; the scenes corresponding to the pixel points of the focus area are at the same distance from the optical center of the lens assembly under the same object distance information, so the focus area has the highest sharpness. In other words, the terminal can preserve the sharpness of the focus area in the shot image by means of the object distance information of the focus area.
In a possible implementation manner, the sharpness of the focus area may be maintained by determining whether the sharpness of each pixel point in the focus area is greater than a preset threshold. Optionally, as in the related art, the gray values of the pixel points in the image may be substituted into the Brenner gradient function, and when the function value is greater than the preset threshold, the image is considered sharp. The embodiments of the present application do not limit the algorithm used to determine image sharpness.
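For reference, one common form of the Brenner gradient is sketched below, computed on gray values; the threshold is an illustrative placeholder that would be tuned per device, and the embodiments do not prescribe this particular function.

```python
def brenner_sharpness(gray_rows, threshold=1000.0):
    """Brenner gradient sharpness measure over a grayscale region.

    gray_rows: 2-D list of gray values (0-255), e.g. the pixel points of
    the focus area arranged row by row.
    Sums the squared difference between each pixel and the pixel two
    positions to its right; larger values indicate a sharper region.
    Returns (score, is_sharp) where is_sharp compares the score against
    the illustrative preset threshold.
    """
    score = 0.0
    for row in gray_rows:
        for x in range(len(row) - 2):
            diff = row[x + 2] - row[x]
            score += diff * diff
    return score, score > threshold
```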
To sum up, in the embodiments of the present application, the terminal obtains the object distance information corresponding to each scene in the view-finding picture during the view-finding process, and stores the object distance information in association with the shot image when a shooting instruction is received, so that the shot image can be rapidly post-processed according to its object distance information.
Referring to fig. 2, a flowchart of an image processing method for a terminal provided with a single camera according to another exemplary embodiment of the present application is shown, where the method includes:
step 201, in the framing process, driving a lens assembly of the camera to move through a driving assembly, and focusing each scene in the view-finding picture during the movement.
In one possible embodiment, the drive assembly includes a voice coil motor, which is a device that converts electrical energy into mechanical energy. In the embodiments of the present application, the voice coil motor drives the lens assembly to move so as to complete focusing. Its main principle is that, in a permanent magnetic field, the extension position of a spring piece is controlled by changing the direct current through the coil of the voice coil motor, thereby driving the lens assembly to move to realize the auto-focusing function, so that the scenes in the view-finding picture gradually become sharp.
Step 202, when the focusing of the target scenery in the framing picture is successful, determining image distance information according to the lens position of the lens assembly.
In the process of focusing each scene in the view-finding picture, as the driving assembly drives the lens assembly to different positions, scenes at different object distances are successfully focused in turn.
In a possible implementation manner, the terminal records, through the image processing chip, the scene that is successfully focused and the corresponding lens position of the lens assembly, and determines the image distance information of that scene according to the lens position, where the image distance information of a scene is the distance from the lens assembly to the image sensor.
The image sensor uses the photoelectric conversion function of an optoelectronic device to convert the light image on its light-sensing surface into an electrical signal proportional to that light image; specifically, it divides the light image on the light-receiving surface into many small units and converts each unit into a usable electrical signal.
In a possible implementation manner, the terminal stores the scenes (corresponding pixel points) and the image distance information of the scenes in a one-to-one correspondence manner through the image processing chip.
Step 203, determining the object distance information corresponding to the target scene according to the focal length of the lens assembly and the image distance information.
In a possible implementation mode, when each scene is successfully focused, the terminal can obtain the one-to-one correspondence between the object distance information and the image distance information of that scene according to the principle of optical imaging. The optical imaging principle corresponds to the following formula:
1/f = 1/v + 1/u        Formula (2-1)
In formula (2-1), f is the focal length information, v is the image distance information, and u is the object distance information, where f is an inherent parameter of the camera assembly of the terminal, i.e. f is known and is the same for each scene.
Since v is obtained in step 202 when each scene is successfully focused and f is known, u of the target scene can be derived according to formula (2-1), and finally the object distance information of all the scenes in the view-finding picture is obtained.
In an illustrative example, as shown in fig. 3, the view-finding picture contains 3 scenes (S1, S2 and S3). During the movement of the lens assembly 302 for focusing, the image distance information (v1, v2 and v3) at which each scene comes into focus is determined, and the terminal determines the object distance information (u1, u2 and u3) of all the scenes in the view-finding picture according to the known focal length information f and the image distance information of each scene. Here S1', S2' and S3' are the images of S1, S2 and S3 on the image sensor, respectively, and the shorter the object distance of a subject, the smaller its imaging scale, where the imaging scale refers to the ratio of the size of the subject to the size of its image.
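A short numeric sketch of formula (2-1): with the fixed focal length f and the image distance v recorded when a scene comes into focus, the object distance u follows directly. The focal length and image distances below are illustrative values only, not taken from any actual lens assembly.

```python
def object_distance_mm(focal_length_mm, image_distance_mm):
    """Solve formula (2-1), 1/f = 1/v + 1/u, for the object distance u.

    focal_length_mm: fixed focal length f of the lens assembly.
    image_distance_mm: image distance v recorded when focusing succeeds;
    v must be greater than f for a real image to form.
    """
    f, v = focal_length_mm, image_distance_mm
    return (f * v) / (v - f)


# Illustrative values: f = 4.0 mm and three recorded image distances.
f = 4.0
for scene, v in (("S1", 4.016), ("S2", 4.011), ("S3", 4.008)):
    u_m = object_distance_mm(f, v) / 1000.0  # convert mm to meters
    print(f"{scene}: v = {v} mm -> u = {u_m:.2f} m")
```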
In one possible implementation, the terminal stores object distance information of all scenes in the viewfinder picture through the image processing chip.
Step 204, when a shooting instruction is received, storing the object distance information in association with the shot image.
For the implementation manner of this step, reference may be made to step 102, and details are not described herein again.
Step 205, receiving a region selection operation for the captured image.
For the implementation manner of this step, reference may be made to step 103, and details are not described herein again.
Step 206, acquiring the focus object distance information corresponding to the focus area and the non-focus object distance information corresponding to the non-focus area.
In order to ensure that the image of any designated focus area is sharp when the shot image is post-processed, in a possible implementation the shot image is a hyperfocal distance image, which is the shot image corresponding to the minimum image distance information. As can be seen from formula (2-1) and fig. 3, the image distance information of a scene is inversely related to its object distance information. The terminal obtains, according to the precision information of the lens assembly, the minimum image distance to which the lens assembly can be adjusted, i.e. the lens assembly is moved to the position closest to the image sensor, so that the focusable range of the lens assembly covers the scene at the maximum object distance, that is, all the scenes in the view-finding picture meet the sharpness requirement. Thus, when the terminal receives a region selection operation on any image region of the shot image, each scene in the focus region determined according to that region selection operation is sharp.
In one possible implementation mode, after the terminal determines the focus area, the focus object distance information corresponding to the focus area and the non-focus object distance information corresponding to the non-focus area are acquired from the image processing chip. Wherein the non-focus area is an area of the captured image other than the focus area.
Step 207, determining the blurring degree of the non-focus area according to the focus object distance information and the non-focus object distance information.
In the actual image post-processing process, in order to realize a natural transition between the focus area and the non-focus area, each scene in the non-focus area is given a different blurring degree according to its distance from the focus area.
In one possible implementation, after acquiring the focus object distance information and the non-focus object distance information, the terminal obtains, through the image processing chip, the object distance difference between each scene in the non-focus area and the focus area, and the blurring degree of each scene in the non-focus area may increase as its object distance difference from the focus area increases. In the non-focus area, the closer a scene is to the focus area, the smaller its blurring degree; the farther it is from the focus area, the larger its blurring degree.
In an illustrative example, the object distance of the focus area is 2 m, and the non-focus area includes 4 scenes (scene 1, scene 2, scene 3 and scene 4), whose object distances are respectively 1.5 m, 2.3 m, 3 m and 5 m, giving object distance differences of 0.5 m, 0.3 m, 1 m and 3 m. The blurring degrees of the scenes in the non-focus area, from weak to strong, are then scene 2, scene 1, scene 3 and scene 4, i.e. the blurring degree of scene 2 is the weakest and that of scene 4 is the strongest.
In a possible implementation manner, when the focus area comprises a plurality of scenes, the terminal takes the scene with the largest object distance in the focus area and obtains, through the image processing chip, the object distance difference between each scene in the non-focus area and that scene.
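The ranking in the illustrative example can be reproduced with a small sketch; the linear mapping from object-distance difference to blur strength is one possible design choice and is not prescribed by the embodiments.

```python
def blur_degrees(focus_distance_m, scene_distances_m, max_radius_px=12.0):
    """Assign each non-focus scene a blur strength (as a radius in pixels).

    focus_distance_m: object distance of the focus area (when the focus
    area contains several scenes, the largest object distance in the
    focus area could be used, as described above).
    scene_distances_m: dict mapping scene name -> object distance (m).
    The blur radius grows linearly with the absolute object-distance
    difference from the focus area, capped at max_radius_px.
    """
    diffs = {name: abs(d - focus_distance_m)
             for name, d in scene_distances_m.items()}
    largest = max(diffs.values()) or 1.0
    return {name: max_radius_px * diff / largest for name, diff in diffs.items()}


# Values from the illustrative example: focus area at 2 m.
degrees = blur_degrees(2.0, {"scene 1": 1.5, "scene 2": 2.3,
                             "scene 3": 3.0, "scene 4": 5.0})
print(sorted(degrees, key=degrees.get))  # weakest to strongest: scene 2, 1, 3, 4
```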
Step 208, blurring the non-focus area according to the blurring degree.
Optionally, the blurring processing performed on the non-focus region may be Gaussian blurring, from which the blurred non-focus region is obtained. The embodiments of the present application do not limit the algorithm involved in the blurring process.
In a possible implementation mode, different scenes in the non-focus area correspond to different blur radii, and the blur radius corresponding to each scene is substituted into the Gaussian blur algorithm to obtain the blur result of each scene in the non-focus area, so that the non-focus area presents the visual effect of different scenes being blurred to different degrees.
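A hypothetical sketch of per-scene Gaussian blurring follows, using OpenCV's GaussianBlur as one possible implementation (the embodiments do not mandate this library or this compositing approach), and assuming each scene's pixel points are available as a boolean mask.

```python
import cv2

def blur_non_focus_scenes(image, scene_masks, blur_radii):
    """Blur each non-focus scene of the shot image with its own radius.

    image: H x W x 3 uint8 array (the shot image).
    scene_masks: dict mapping scene name -> boolean H x W mask.
    blur_radii: dict mapping scene name -> blur radius in pixels
    (0 or missing means the scene is left sharp, e.g. the focus area).
    """
    result = image.copy()
    for name, mask in scene_masks.items():
        radius = int(round(blur_radii.get(name, 0)))
        if radius <= 0:
            continue
        ksize = 2 * radius + 1              # Gaussian kernel size must be odd
        blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
        result[mask] = blurred[mask]        # composite the blurred scene back
    return result
```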
Illustratively, as shown in fig. 4, a region 401 outlines the focus region, in which the subjects having consistent object distance information are presented clearly, while the region other than region 401 in fig. 4 is the non-focus region. In the non-focus region, the farther a scene is from the focus region, the stronger its blurring, and the closer a scene is to the focus region, the weaker its blurring: as shown by the table and the window in fig. 4, the part of the table close to the focus region shows no obvious blurring, while the window farther from the focus region is obviously blurred.
In the embodiments of the present application, the object distance information of each scene is determined through the optical imaging principle; the terminal determines the focus object distance information for the focus area and the non-focus object distance information for the non-focus area, and thereby determines the blurring degree of the non-focus area from the focus object distance information and the non-focus object distance information.
The blurring method provided in the above embodiment is based on a blurring algorithm. The embodiments of the present application may also provide another blurring method that does not require a complex blurring algorithm and makes full use of the optical characteristics of each scene under the lens assembly, so that the blurring effect of the shot image is closer to a natural blurring effect.
Referring to fig. 5, a flowchart of an image processing method for a terminal provided with a single camera according to another exemplary embodiment of the present application is shown, where the method includes:
step 501, in the process of framing, driving a lens assembly of the camera to move through a driving assembly, and focusing each scene in the framing picture in the moving process.
For the implementation manner of this step, reference may be made to step 201, and details are not described herein again.
Step 502, when the focusing of the target scenery in the framing picture is successful, determining the image distance information according to the lens position of the lens assembly.
For the implementation manner of this step, reference may be made to step 202, and details are not described herein again.
Step 503, in the framing process, shooting candidate images according to a preset shooting frequency and storing the candidate images.
In one possible implementation mode, during the framing process, the terminal shoots and stores the images that are successfully focused at different image distances according to a preset shooting frequency, and records them as candidate images. Different candidate images correspond to different candidate focused scenes, where a candidate focused scene is the scene that is successfully focused in a candidate image.
In an illustrative example, the distance between the lens assembly and the image sensor is 8 mm, the focusing movement speed of the lens assembly is 4 mm/s, and the shooting frequency is set to 10 images per second, so the terminal can acquire and store 20 candidate images while the lens assembly completes one round of focusing.
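The count of 20 in this example follows from simple arithmetic, as the sketch below shows (an idealized calculation that ignores any acceleration of the lens assembly):

```python
def candidate_image_count(travel_mm, speed_mm_per_s, frames_per_s):
    """Number of candidate images captured during one focusing sweep.

    travel_mm: lens travel during one round of focusing (here, the 8 mm
    between the lens assembly and the image sensor in the example).
    speed_mm_per_s: focusing movement speed of the lens assembly.
    frames_per_s: preset shooting frequency.
    """
    sweep_seconds = travel_mm / speed_mm_per_s
    return int(sweep_seconds * frames_per_s)


print(candidate_image_count(8, 4, 10))  # 20, as in the example above
```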
In a possible implementation, step 503 is performed in parallel with steps 501 to 502, that is, while the object distance information of each scene in the view-finding picture is being acquired, candidate images are shot according to the preset shooting frequency and stored, so that focusing and blurring on any scene of the shot image can be realized at a later stage.
Step 504, when the shooting instruction is received, storing the object distance information in association with the shot image.
For the implementation manner of this step, reference may be made to step 102, and details are not described herein again.
Step 505, receiving a region selection operation for the captured image.
For the implementation manner of this step, reference may be made to step 103, and details are not described herein again.
Step 506, determining a target image from the candidate images according to the focus scene corresponding to the focus area.
In one possible embodiment, the terminal does not receive any instruction related to the region selection operation before receiving the shooting instruction, and the shot image acquired by the terminal after receiving the shooting instruction is a short-focus image.
In a possible implementation manner, the terminal determines the focus area according to the region selection operation, and determines the target image from the candidate images according to the object distance information of the focus scene corresponding to the focus area, where the target image contains a scene whose object distance information is the same as that of the focus scene, recorded as the candidate focused scene. In this case, the candidate focused scene in the target image matches the focus scene, that is, the object distance information of the candidate focused scene is consistent with the object distance information of the focus scene (or their similarity is greater than a similarity threshold).
In an illustrative example, the terminal acquires and stores 3 candidate images while the lens assembly completes one round of focusing. Illustratively, as shown in fig. 6, the area 610 illustrates the 3 candidate images (candidate image 611, candidate image 612 and candidate image 613), where the view-finding picture mainly includes 3 scenes (S1, S2 and S3, from far to near according to their object distance information). During the lens movement, each of the 3 candidate images includes a scene that is successfully focused and scenes that are not. In candidate image 611, scene S3 is successfully focused (the shaded area in the figure is regarded as the successfully focused area), while S1, S2 and the remaining background are not successfully focused at that lens position, thereby presenting a natural blurring effect; in candidate image 612, scene S2 is successfully focused, while S1, S3 and the remaining background are not successfully focused at that lens position, presenting a natural blurring effect; in candidate image 613, scene S1 is successfully focused, while S2, S3 and the remaining background are not successfully focused at that lens position, presenting a natural blurring effect.
Illustratively, as shown in fig. 6, the area 620 illustrates that the terminal receives a region selection operation on a captured image (short-focus image) 621 and determines the focus area 622 according to that operation. In the process of determining the focus area 622, the terminal first determines, according to the click area 623 where the user performed the region selection operation, the focus area 622 whose object distance information is consistent with that of the click area, where the focus area 622 contains the click area 623; the focus scene corresponding to the focus area 622 is consistent with the scene S2 that is successfully focused in candidate image 612, so candidate image 612 is determined as the target image.
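A minimal sketch of step 506 under assumed field names (the per-candidate record of which object distance was in focus is hypothetical): the candidate whose focused object distance best matches that of the focus scene is chosen as the target image.

```python
def select_target_image(focus_object_distance_m, candidates, tolerance_m=0.05):
    """Pick the target image from the stored candidate images.

    candidates: list of records such as
      {"image": "candidate_612.jpg", "focused_object_distance_m": 1.2},
    where focused_object_distance_m is the object distance of the scene
    that was successfully focused when that candidate was captured.
    Returns the candidate whose focused object distance is closest to
    that of the focus scene, provided the difference stays within
    tolerance_m (a stand-in for the similarity threshold mentioned
    above); otherwise None.
    """
    best = min(candidates,
               key=lambda c: abs(c["focused_object_distance_m"]
                                 - focus_object_distance_m))
    if abs(best["focused_object_distance_m"] - focus_object_distance_m) <= tolerance_m:
        return best
    return None
```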
Step 507, the captured image is replaced with the target image.
In one possible embodiment, after determining the target image from the candidate images according to the focus scene corresponding to the focus area, the terminal replaces the captured image with the target image. The target image not only includes the successfully focused focus area, but its non-focus area is also naturally blurred according to the optical characteristics of the lens assembly, so no blurring processing needs to be performed on the non-focus area after step 507.
In a possible implementation, if the terminal receives an instruction related to the region selection operation before receiving the shooting instruction, the terminal does not need to shoot a short-focus image and directly determines the target image from the candidate images according to the region selection operation.
In a possible embodiment, regardless of whether the terminal receives an instruction related to the region selection operation before receiving the shooting instruction, when the terminal receives another region selection operation on the target image, it continues to perform the logical operations of steps 505 to 507. That is, if the terminal receives a region selection operation on the target image (the first acquired target image being recorded as the first target image), the executed content is as follows: receiving the region selection operation on the first target image; determining a second target image from the candidate images according to the focus scene corresponding to the focus area; and replacing the first target image with the second target image.
In the embodiments of the present application, on the basis of the original method for realizing the blurring effect of the non-focus area, another blurring processing method is provided: the terminal shoots and stores candidate images according to a preset shooting frequency, and when a region selection operation on the shot image is received, determines the target image from the candidate images according to the focus area of that operation, where the target image includes both a sharp focus area and a non-focus area that already presents the blurring effect. This method does not require a complex blurring algorithm and makes full use of the optical characteristics of each scene under the lens assembly, so the blurring effect of the shot image is closer to a natural blurring effect.
In an actual image processing process, in order to allow the user to freely adjust the sharp range of the focus area and the blurring range of the non-focus area of an image, the embodiments of the present application further include a step of performing depth-of-field processing on the shot image.
Referring to fig. 7, which shows a flowchart of an image processing method according to another exemplary embodiment of the present application, the method is applied to a terminal provided with a single camera, and after step 208 and step 505, the method further includes:
in step 701, a depth of field processing instruction is received.
In a possible implementation manner, the terminal displays a depth-of-field progress bar on the screen on which the shot image is displayed, and the user can adjust the depth of field through a button on the depth-of-field progress bar. The terminal generates a depth-of-field processing instruction according to the user's dragging operation on the depth-of-field progress bar, where the depth-of-field processing instruction contains the depth of field.
Step 702, performing depth-of-field processing on the shot image according to the depth-of-field processing instruction and the object distance information.
In one possible implementation, step 702 includes:
First, a depth-of-field range is determined according to the depth of field and the focus object distance information corresponding to the focus area.
In one possible embodiment, the dragging distance of the depth-of-field progress bar is proportional to the depth of field. If the ratio of the dragging distance of the depth-of-field progress bar to the depth of field is set to 1 : 50, the focus object distance information corresponding to the focus area is U (cm), and the user drags the depth-of-field progress bar by 1 cm, the depth of field is determined to be 50 cm according to the ratio of 1 : 50, and the depth-of-field range is [U-50, U+50]. That is, the scenes whose object distance information lies within [U-50, U+50] present the depth-of-field effect.
Second, the in-depth-of-field scenes and the out-of-depth-of-field scenes in the shot image are determined according to the depth-of-field range.
The scene within the depth of field is located within the depth of field range, and the scene outside the depth of field is located outside the depth of field range.
Following the above illustrative example, the depth-of-field range is [U-50, U+50], so the scenes whose object distance information lies within [U-50, U+50] are in-depth-of-field scenes, and correspondingly, the scenes whose object distance information lies outside [U-50, U+50] are out-of-depth-of-field scenes.
Third, blurring processing is performed on the out-of-depth-of-field scenes.
In a possible implementation manner, the in-depth-of-field scenes present the depth-of-field effect in addition to the original focus area, while the out-of-depth-of-field scenes are blurred; for the manner of blurring the out-of-depth-of-field scenes, reference may be made to the above embodiments, and details are not described herein again.
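The three sub-steps above can be summarized in a short sketch; the 1 : 50 ratio and the distances are the illustrative values from this embodiment, and the scene names are hypothetical.

```python
def depth_of_field_partition(drag_distance_cm, focus_distance_cm,
                             scene_distances_cm, ratio=50):
    """Split scenes into in-depth-of-field and out-of-depth-of-field sets.

    drag_distance_cm: how far the user dragged the depth-of-field bar.
    focus_distance_cm: focus object distance U of the focus area, in cm.
    scene_distances_cm: dict mapping scene name -> object distance (cm).
    ratio: dragging distance to depth of field ratio (1 : 50 above).
    Returns (depth_range, in_scenes, out_scenes); the scenes in
    out_scenes would then be blurred as in the earlier embodiments.
    """
    depth = drag_distance_cm * ratio                 # depth of field in cm
    low, high = focus_distance_cm - depth, focus_distance_cm + depth
    in_scenes = [s for s, d in scene_distances_cm.items() if low <= d <= high]
    out_scenes = [s for s, d in scene_distances_cm.items() if not low <= d <= high]
    return (low, high), in_scenes, out_scenes


# Illustrative: U = 200 cm and a 1 cm drag give the range [150, 250] cm.
print(depth_of_field_partition(1, 200, {"table": 230, "window": 500}))
```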
In an illustrative example, as shown in fig. 8, the terminal performs depth-of-field processing on a captured image 810. The user can adjust the depth of field via the button 821 on the depth-of-field progress bar 820. The terminal receives the user's adjustment operation on the depth-of-field progress bar 820 and obtains the depth of field from that operation, thereby determining the depth-of-field range. As shown in fig. 8, compared with the captured image 810 without depth-of-field processing, more scenes are sharp in the depth-of-field-processed captured image 810', but scenes at longer object distances still present the blurring effect. The terminal determines the sharp scenes, i.e. the in-depth-of-field scenes, according to the depth-of-field range, and blurs the remaining scenes in the captured image 810, i.e. the out-of-depth-of-field scenes, so that the terminal obtains the captured image 810' with the depth-of-field effect.
In the embodiments of the present application, the terminal determines the depth of field according to the depth-of-field processing instruction, determines the depth-of-field range of the shot image according to the depth of field and the focus object distance information corresponding to the focus area, realizes the depth-of-field effect for the scenes within the depth-of-field range, and performs the blurring operation on the scenes outside the depth-of-field range, so that the user can freely adjust the sharp range of the focus area and the blurring range of the non-focus area of the image, and the depth-of-field effect is realized on the basis of the original focus area.
Referring to fig. 9, a block diagram of an image processing apparatus according to an embodiment of the present application is shown. The apparatus may be implemented as all or part of a computer device in software, hardware, or a combination of both. The device includes:
an information obtaining module 901, configured to obtain object distance information corresponding to each scene in a view-finding picture in a view-finding process;
a data storage module 902, configured to, when a shooting instruction is received, store the object distance information in association with a shot image;
an operation receiving module 903, configured to receive a region selection operation on the captured image, where the region selection operation is used to indicate a focus region in the captured image;
and an image processing module 904, configured to perform blurring processing on the captured image according to the focal region and the object distance information.
The information obtaining module 901 includes:
the scene focusing sub-module is used for driving a lens assembly of the camera to move through a driving assembly in the framing process and focusing each scene in the framing picture in the moving process;
the image distance determining submodule is used for determining image distance information according to the lens position of the lens assembly when the target scenery in the framing picture is successfully focused;
and the object distance determining submodule is used for determining the object distance information corresponding to the target scenery according to the image distance information.
Optionally, the object distance determining submodule is configured to determine the object distance information corresponding to the target scene according to the focal length of the lens assembly and the image distance information.
Optionally, the shot image is a hyperfocal distance image;
the image processing module 904 comprises:
the object distance acquisition submodule is used for acquiring focal object distance information corresponding to the focal area and non-focal object distance information corresponding to a non-focal area, and the non-focal area is an area of the shot image except the focal area;
the area blurring submodule is used for determining the blurring degree of the non-focus area according to the focus object distance information and the non-focus object distance information;
and the first blurring submodule is used for blurring the non-focus area according to the blurring degree.
Optionally, the apparatus further includes:
the image shooting module is used for shooting and storing candidate images according to a preset shooting frequency in a framing process, wherein different candidate images correspond to different candidate focusing scenes, and the candidate focusing scenes are scenes which are focused successfully in the candidate images;
optionally, the captured image is a hyperfocal distance image, and the image processing module 904 includes:
an image determining sub-module, configured to determine a target image from the candidate images according to a focus scene corresponding to the focus area, where a candidate focus scene in the target image matches the focus scene;
and the image replacing submodule is used for replacing the shot image with the target image.
Optionally, the apparatus further includes:
the instruction receiving module is used for receiving a depth of field processing instruction;
and the field depth processing module is used for carrying out field depth processing on the shot image according to the field depth processing instruction and the object distance information.
Optionally, the depth of field processing instruction includes a depth of field;
the depth of field processing module comprises:
the depth of field determining submodule is used for determining a depth of field range according to the depth of field and the focal object distance information corresponding to the focal area;
the scene determining submodule is used for determining the scene in the depth of field and the scene out of the depth of field in the shot image according to the depth of field range, wherein the scene in the depth of field is positioned in the depth of field range, and the scene out of the depth of field is positioned outside the depth of field range;
and the second blurring submodule is used for blurring the out-of-depth-of-field scenes.
Referring to fig. 10, a block diagram of a terminal 1000 according to an exemplary embodiment of the present application is shown. The terminal 1000 can be an electronic device installed and running with an application, such as a smart phone, a tablet computer, an electronic book, a portable personal computer, and the like. Terminal 1000 in the present application can include one or more of the following: processor 1100, memory 1200, and screen 1300.
Processor 1100 may include one or more processing cores. Processor 1100 interfaces with various portions throughout terminal 1000 using various interfaces and circuitry to perform various functions of terminal 1000 and process data by executing or performing instructions, programs, code sets, or instruction sets stored in memory 1200 and invoking data stored in memory 1200. Alternatively, the processor 1100 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1100 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is responsible for rendering and drawing the content that the screen 1300 needs to display; the modem is used to handle wireless communications. It is to be understood that the modem may not be integrated into the processor 1100, but may be implemented by a communication chip.
The Memory 1200 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 1200 includes a non-transitory computer-readable medium. The memory 1200 may be used to store an instruction, a program, code, a set of codes, or a set of instructions. The memory 1200 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, and the like), instructions for implementing the above method embodiments, and the like, and the operating system may be an Android (Android) system (including a system based on Android system depth development), an IOS system developed by apple inc (including a system based on IOS system depth development), or other systems. The stored data area can also store data created by terminal 1000 in use (e.g., phonebook, audio-video data, chat log data), and the like.
The screen 1300 may be a touch display screen for receiving a touch operation of a user thereon or nearby using any suitable object such as a finger, a touch pen, or the like, and displaying a user interface of each application. The touch display screen is typically provided on the front panel of terminal 1000. The touch display screen may be designed as a full-face screen, a curved screen, or a profiled screen. The touch display screen can also be designed to be a combination of a full-face screen and a curved-face screen, and a combination of a special-shaped screen and a curved-face screen, which is not limited in the embodiment of the present application.
In addition, those skilled in the art will appreciate that the configuration of terminal 1000 illustrated in the above-described figures is not intended to be limiting, and that terminal 1000 can include more or fewer components than those illustrated, some components can be combined, or a different arrangement of components can be used. For example, the terminal 1000 further includes a radio frequency circuit, a shooting component, a sensor, an audio circuit, a Wireless Fidelity (WiFi) component, a power supply, a bluetooth component, and other components, which are not described herein again.
The embodiment of the present application further provides a computer-readable medium, which stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the image processing method according to the above embodiments.
The embodiment of the present application further provides a computer program product, where at least one instruction is stored, and the at least one instruction is loaded and executed by the processor to implement the image processing method according to the above embodiments.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.