Detailed Description
To make the objects, technical solutions, and advantages of the present invention more apparent, embodiments of the present invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to provide a better understanding of the present application; nevertheless, the technical solutions claimed in the present application can be implemented without these technical details, or with various changes and modifications based on the following embodiments.
A first embodiment of the present invention relates to an image processing method including:
Step 101: acquiring a first image and a second image captured of the same target object, and dividing the first image and the second image into N detection areas respectively according to the same rule. The first image is an image obtained by controlling the camera before the flash is turned on, the second image is an image obtained by controlling the camera after the flash is turned on, and N is a natural number greater than 1.
Specifically, the terminal may acquire shooting parameters in advance in a preview state when the camera is turned on, control the camera to shoot the same target object according to the acquired shooting parameters, and control the camera to acquire the first image before the flash is turned on and the second image after the flash is turned on. After the first image and the second image are obtained, the terminal may further divide the first image and the second image into N detection areas respectively according to the same preset rule, where N is a natural number greater than 1 and the N detection areas in the first image correspond one-to-one to those in the second image.
For example, after the terminal controls the camera to capture the first image and the second image of the same target object, each of the two images may be equally divided into 90 detection regions of the same size.
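As an illustration only (the patent does not prescribe any particular implementation), dividing an image into a grid of equally sized detection regions might be sketched in Python as follows; the 9x10 grid yielding 90 regions and the helper name divide_into_regions are assumptions made for this example.

```python
import numpy as np

def divide_into_regions(image: np.ndarray, rows: int = 9, cols: int = 10):
    """Divide an image (H x W or H x W x C array) into rows * cols
    equally sized detection regions, returned in row-major order."""
    h, w = image.shape[:2]
    rh, rw = h // rows, w // cols
    return [image[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            for r in range(rows) for c in range(cols)]
```

Applying the same helper to both images realizes the "same rule" requirement, so the i-th region of the first image corresponds to the i-th region of the second image.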
Step 102: acquiring the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the terminal may simultaneously traverse N detection regions in the first image and the second image, which correspond to each other one by one, and may acquire the luminance difference between the detection regions in the first image and the second image one by one during the traversal of the detection regions.
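A minimal sketch of this traversal, assuming the regions come from a helper like divide_into_regions above and that a region's luminance is approximated by its mean gray value (the patent does not fix a particular luminance measure):

```python
import numpy as np

def mean_luminance(region: np.ndarray) -> float:
    """Approximate a region's luminance by its mean pixel value."""
    return float(region.mean())

def luminance_differences(regions_no_flash, regions_flash):
    """Traverse the one-to-one corresponding regions and return the
    luminance difference of each pair (flash image minus no-flash image)."""
    return [mean_luminance(rf) - mean_luminance(rn)
            for rn, rf in zip(regions_no_flash, regions_flash)]
```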
Step 103: determining the type of each detection area according to the acquired brightness difference, where the type includes a foreground area and a background area.
Specifically, the terminal may compare the acquired luminance difference with a preset threshold; if the acquired luminance difference is smaller than the preset threshold, the terminal may determine that the type of the detection area is a background area, and if the acquired luminance difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area. Since the flash illuminates nearby objects much more strongly than distant ones, a large luminance difference indicates a foreground region.
For example, the terminal may number the 90 one-to-one corresponding detection regions in the first image and the second image from 1 to 90 in advance. After acquiring the luminance differences of the corresponding 1st to 90th detection regions, the terminal may determine each detection region whose luminance difference is smaller than the preset threshold as a background region, and each detection region whose luminance difference is greater than or equal to the preset threshold as a foreground region. For example, the terminal may determine that the first image and the second image each have 60 detection regions of the background type and 30 detection regions of the foreground type, in one-to-one correspondence.
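The classification of step 103 could then look like the sketch below. The threshold value of 30.0 is purely an assumption; the rule follows the example above, treating regions whose brightness rises strongly under flash as foreground.

```python
def classify_regions(differences, threshold: float = 30.0):
    """Label a region 'foreground' when the flash raised its luminance by
    at least `threshold` (nearby objects catch more flash light),
    otherwise 'background'."""
    return ['foreground' if d >= threshold else 'background'
            for d in differences]
```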
Compared with the prior art, in this embodiment the terminal can, for the same target object, control the camera to acquire the first image before the flash is turned on and the second image after the flash is turned on, and can divide the first image and the second image into N detection areas respectively according to the same rule. Then, by acquiring the brightness differences of the corresponding detection areas in the first image and the second image one by one, the type of each detection area can be determined. That is, the type of each detection region is determined directly from the luminance difference between the corresponding detection regions of the two images, so the foreground and background regions in the first image and the second image can be identified automatically. Moreover, the terminal can do this without an additional second camera, which reduces hardware cost.
A second embodiment of the present invention relates to an image processing method. The second embodiment is a further improvement on the first embodiment, the main improvement being that after the type of each detection area is determined, the images in the detection areas of the foreground type and of the background type may be copied onto two image layers, and the two image layers may then be synthesized to obtain an image with a depth-of-field effect. As shown in fig. 2, the method includes:
Step 201: acquiring a first image and a second image captured of the same target object, and dividing the first image and the second image into N detection areas respectively according to the same rule. The first image is an image obtained by controlling the camera before the flash is turned on, the second image is an image obtained by controlling the camera after the flash is turned on, and N is a natural number greater than 1.
Specifically, the terminal may acquire shooting parameters in advance in a preview state when the camera is turned on, control the camera to shoot the same target object according to the acquired shooting parameters, and control the camera to acquire the first image before the flash is turned on and the second image after the flash is turned on. After the first image and the second image are obtained, the terminal may further divide the first image and the second image into N detection areas respectively according to the same preset rule, where N is a natural number greater than 1 and the N detection areas in the first image correspond one-to-one to those in the second image.
For example, after the terminal controls the camera to capture the first image and the second image of the same target object, each of the two images may be equally divided into 90 detection regions of the same size.
Step 202: acquiring the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the terminal may simultaneously traverse N detection regions in the first image and the second image, which correspond to each other one by one, and may acquire the luminance difference between the detection regions in the first image and the second image one by one during the traversal of the detection regions.
Step 203: determining the type of each detection area according to the acquired brightness difference, where the type includes a foreground area and a background area.
Specifically, the terminal may compare the acquired luminance difference with a preset threshold; if the acquired luminance difference is smaller than the preset threshold, the terminal may determine that the type of the detection area is a background area, and if the acquired luminance difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
For example, the terminal may number the 90 one-to-one corresponding detection regions in the first image and the second image from 1 to 90 in advance. After acquiring the luminance differences of the corresponding 1st to 90th detection regions, the terminal may determine each detection region whose luminance difference is smaller than the preset threshold as a background region, and each detection region whose luminance difference is greater than or equal to the preset threshold as a foreground region. For example, the terminal may determine that the first image and the second image each have 60 detection regions of the background type and 30 detection regions of the foreground type, in one-to-one correspondence.
Step 204: copying the image in the detection area whose type is the foreground area in the first image or the second image onto the first image layer.
Specifically, the terminal may establish a first image layer, and may copy the image in the detection area whose type is the foreground area in the first image or the second image onto the first image layer.
Step 205: copying the image in the detection area whose type is the background area in the first image or the second image onto the second image layer.
Specifically, the terminal may establish a second image layer, and may copy the image in the detection area whose type is the background area in the first image or the second image onto the second image layer.
Step 206: synthesizing the first image layer and the second image layer to obtain the layer-synthesized image.
Specifically, the terminal may synthesize the first layer and the second layer by overlaying the first layer on the second layer, thereby obtaining the synthesized image.
For example, the terminal may copy the image in the foreground-type detection area of the first image onto the first layer, copy the image in the background-type detection area of the second image onto the second layer, and synthesize the two layers by overlaying the first layer on the second layer, thereby obtaining the synthesized image.
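Under the same assumptions, steps 204 to 206 might be sketched as follows: the region labels decide which pixels are copied onto each layer, and synthesis overlays the first (foreground) layer on the second (background) layer. The grid bookkeeping mirrors divide_into_regions above.

```python
import numpy as np

def compose_layers(fg_src, bg_src, labels, rows: int = 9, cols: int = 10):
    """Copy foreground regions of fg_src onto the first layer and
    background regions of bg_src onto the second layer, then overlay
    the first layer on the second layer."""
    h, w = fg_src.shape[:2]
    rh, rw = h // rows, w // cols
    layer1 = np.zeros_like(fg_src)       # first layer: foreground regions
    layer2 = np.zeros_like(bg_src)       # second layer: background regions
    fg_mask = np.zeros((h, w), dtype=bool)
    for i, label in enumerate(labels):
        r, c = divmod(i, cols)
        sl = (slice(r * rh, (r + 1) * rh), slice(c * rw, (c + 1) * rw))
        if label == 'foreground':
            layer1[sl] = fg_src[sl]
            fg_mask[sl] = True
        else:
            layer2[sl] = bg_src[sl]
    result = layer2.copy()               # start from the background layer
    result[fg_mask] = layer1[fg_mask]    # overlay the foreground layer
    return result
```

For the worked example above, fg_src would be the first image and bg_src the second image.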
In this embodiment, the images in the detection areas of the foreground type and of the background type are copied onto two image layers respectively, and the two image layers are synthesized into one image. Because the two layers are superimposed during synthesis, the foreground-type regions and the background-type regions of the synthesized image have different apparent distances, so an image with a depth-of-field effect can be obtained.
A third embodiment of the present invention relates to an image processing method. The third embodiment is a further optimization of the first embodiment, the main optimization being that the terminal may detect the illumination intensity in advance and select one of the first image and the second image according to how the detected illumination intensity compares with a preset critical value. The images in the foreground-type and background-type detection areas of the selected image are then copied onto two image layers respectively, and the two layers are synthesized, so that the synthesized image is clearer and the depth-of-field effect is better. As shown in fig. 3, the method includes:
Step 301: detecting the illumination intensity.
Specifically, the terminal may detect the current illumination intensity in advance and save the detected illumination intensity.
Step 302: judging whether the detected illumination intensity is smaller than a preset critical value.
Specifically, the terminal may compare the detected illumination intensity with the preset critical value, and may perform step 303 when the detected illumination intensity is less than the preset critical value, and step 309 when the detected illumination intensity is greater than or equal to the preset critical value.
Step 303: acquiring the first image and the second image with the second shooting parameter, where the second shooting parameter is the shooting parameter acquired in a preview state while the flash is pre-flashing.
Specifically, when the detected illumination intensity is less than the preset critical value, the terminal may acquire the second shooting parameter in a preview state during the flash pre-flash, and may acquire the first image and the second image captured of the same target object with the second shooting parameter. After the first image and the second image are obtained, the terminal may further divide them into N detection regions respectively according to the same rule, where N is a natural number greater than 1. For example, in this embodiment, N may be 90.
Step 304: acquiring the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the terminal may simultaneously traverse N detection regions in the first image and the second image, which correspond to each other one by one, and may acquire the luminance difference between the detection regions in the first image and the second image one by one during the traversal of the detection regions.
Step 305: determining the type of each detection area according to the acquired brightness difference, where the type includes a foreground area and a background area.
Specifically, the terminal may compare the acquired luminance difference with a preset threshold; if the acquired luminance difference is smaller than the preset threshold, the terminal may determine that the type of the detection area is a background area, and if the acquired luminance difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
Step 306: copying the image in the detection area whose type is the foreground area in the second image onto the first image layer.
Specifically, the terminal may establish a first image layer, and may copy an image in a detection area of the foreground area type in the second image onto the first image layer.
Step 307: copying the image in the detection area whose type is the background area in the second image onto the second image layer.
Specifically, the terminal may establish a second image layer, and may copy an image in a detection area of the type of the background area in the second image onto the second image layer.
Step 308: synthesizing the first image layer and the second image layer to obtain the layer-synthesized image.
Specifically, the terminal may synthesize the first layer and the second layer by overlaying the first layer on the second layer, thereby obtaining the synthesized image.
Step 309: acquiring a first image and a second image captured of the same target object with the first shooting parameter. The first shooting parameter is the shooting parameter acquired in a preview state before the flash is turned on.
Specifically, when the detected illumination intensity is greater than or equal to the preset critical value, the terminal may acquire the first shooting parameter in a preview state before the flash is turned on, and may acquire the first image and the second image captured of the same target object with the first shooting parameter. After the first image and the second image are obtained, the terminal may further divide them into N detection regions respectively according to the same rule, where N is a natural number greater than 1. For example, in this embodiment, N may be 90.
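Combining steps 301 to 303 and 309, the parameter selection might be sketched as below. The terminal methods (detect_illumination_intensity, preview_parameters, capture) are hypothetical placeholders; the patent only specifies when each shooting parameter is acquired, not any concrete interface.

```python
def capture_pair(terminal, critical_value: float):
    """Pick shooting parameters from the illumination intensity, then
    capture the same target object without and with flash."""
    intensity = terminal.detect_illumination_intensity()   # step 301
    if intensity < critical_value:                         # step 302
        # Step 303: parameters from the preview state during flash pre-flash.
        params = terminal.preview_parameters(pre_flash=True)
    else:
        # Step 309: parameters from the preview state before the flash is on.
        params = terminal.preview_parameters(pre_flash=False)
    first = terminal.capture(params, flash=False)   # first image, no flash
    second = terminal.capture(params, flash=True)   # second image, flash on
    return first, second, intensity
```

The returned intensity can then drive the later choice of source image: foreground and background regions are copied from the first image when the intensity is at or above the critical value, and from the second image otherwise.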
Step 310: acquiring the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the terminal may simultaneously traverse N detection regions in the first image and the second image, which correspond to each other one by one, and may acquire the luminance difference between the detection regions in the first image and the second image one by one during the traversal of the detection regions.
Step 311: determining the type of each detection area according to the acquired brightness difference.
Specifically, the terminal may compare the acquired luminance difference with a preset threshold; if the acquired luminance difference is smaller than the preset threshold, the terminal may determine that the type of the detection area is a background area, and if the acquired luminance difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
Step 312: copying the image in the detection area whose type is the foreground area in the first image onto the first image layer.
Specifically, the terminal may establish a first image layer, and may copy an image in a detection area of the foreground area type in the first image onto the first image layer.
Step 313: copying the image in the detection area whose type is the background area in the first image onto the second image layer.
Specifically, the terminal may establish a second layer, and may copy the image in the detection area whose type is the background area in the first image onto the second layer, after which the method continues with step 308.
In this embodiment, the first image and the second image are acquired with the first shooting parameter when the detected illumination intensity is greater than or equal to the preset critical value, and with the second shooting parameter when the detected illumination intensity is less than the preset critical value, so that the acquired first image and second image are clearer. Furthermore, according to how the detected illumination intensity compares with the preset critical value, the images in the foreground-type and background-type detection areas of the first image, or those of the second image, are copied onto the two image layers respectively. This ensures that the images on the two layers to be synthesized come from the same image, so that the synthesized image is clearer and the depth-of-field effect is better.
A fourth embodiment of the present invention relates to an image processing method. The fourth embodiment is a further improvement on the second embodiment, the main improvement being that after copying the image in the background-type detection area onto the second layer and before synthesizing the first layer and the second layer, the terminal may further blur the image on the second layer, so that the foreground-type regions of the synthesized image are more prominent and the depth-of-field effect is better. As shown in fig. 4, the method includes:
Step 401: acquiring a first image and a second image captured of the same target object, and dividing the first image and the second image into N detection areas respectively according to the same rule. The first image is an image obtained by controlling the camera before the flash is turned on, the second image is an image obtained by controlling the camera after the flash is turned on, and N is a natural number greater than 1.
Specifically, the terminal may acquire shooting parameters in advance in a preview state when the camera is turned on, control the camera to shoot the same target object according to the acquired shooting parameters, and control the camera to acquire the first image before the flash is turned on and the second image after the flash is turned on. After the first image and the second image are obtained, the terminal may further divide the first image and the second image into N detection areas respectively according to the same preset rule, where N is a natural number greater than 1 and the N detection areas in the first image correspond one-to-one to those in the second image.
For example, after the terminal controls the camera to capture the first image and the second image of the same target object, each of the two images may be equally divided into 90 detection regions of the same size.
Step 402: acquiring the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the terminal may simultaneously traverse N detection regions in the first image and the second image, which correspond to each other one by one, and may acquire the luminance difference between the detection regions in the first image and the second image one by one during the traversal of the detection regions.
Step 403: determining the type of each detection area according to the acquired brightness difference, where the type includes a foreground area and a background area.
Specifically, the terminal may compare the acquired luminance difference with a preset threshold; if the acquired luminance difference is smaller than the preset threshold, the terminal may determine that the type of the detection area is a background area, and if the acquired luminance difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
Step 404: copying the image in the detection area whose type is the foreground area onto the first image layer.
Specifically, the terminal may establish a first image layer, and may copy the image in the detection area whose type is the foreground area in the first image or the second image onto the first image layer.
Step 405: copying the image in the detection area whose type is the background area onto the second image layer.
Specifically, the terminal may establish a second image layer, and may copy the image in the detection area whose type is the background area in the first image or the second image onto the second image layer.
Step 406: popping up an information box including a blur level.
Specifically, after copying the image in the background-type detection area in the first image or the second image onto the second image layer, the terminal may pop up an information box including a blur level, so that the image on the second image layer can subsequently be blurred.
Step 407: acquiring the blur level input by the user, and blurring the image on the second image layer according to the blur level input by the user.
Specifically, after popping up the information box including the blur level, the terminal may acquire the blur level input by the user and blur the image on the second layer accordingly.
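As one possible sketch of this blurring step, the user-selected blur level could be mapped to a Gaussian kernel size; the level-to-kernel mapping and the use of OpenCV's cv2.GaussianBlur are illustrative assumptions, not part of the claimed method.

```python
import cv2
import numpy as np

def blur_background_layer(layer2: np.ndarray, blur_level: int) -> np.ndarray:
    """Blur the second (background) layer, stronger for higher levels.
    The Gaussian kernel size must be odd; level 0 leaves the layer as is."""
    if blur_level <= 0:
        return layer2
    ksize = 2 * blur_level + 1          # e.g. level 3 -> a 7x7 kernel
    return cv2.GaussianBlur(layer2, (ksize, ksize), 0)
```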
Step 408: synthesizing the first image layer and the second image layer to obtain the layer-synthesized image.
Specifically, the terminal may synthesize the first layer and the second layer by overlaying the first layer on the second layer, thereby obtaining the synthesized image.
For example, the terminal may copy the image in the foreground-type detection area of the first image onto the first layer, copy the image in the background-type detection area of the second image onto the second layer, blur the image on the second layer according to the blur level input by the user, and then synthesize the two layers by overlaying the first layer on the second layer, thereby obtaining the synthesized image.
In this embodiment, the image in the background-type detection area on the second layer is blurred according to the blur level input by the user, so that the background-type regions of the layer-synthesized image are blurred while the foreground-type regions remain sharp. The foreground is thus more prominent, the separation between foreground and background is more obvious, and the depth-of-field effect of the image is better.
The steps of the above methods are divided only for clarity of description. In implementation, steps may be combined into one step, or a step may be split into multiple steps; as long as the same logical relationship is contained, such variants are within the protection scope of this patent. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes without altering the core design of the algorithm or process, is likewise within the protection scope of this patent.
A fifth embodiment of the present invention relates to an image processing apparatus including an obtaining module, a calculating module, and a determining module, as shown in fig. 5.
The image processing apparatus 500 includes an obtaining module 501, a calculating module 502, and a determining module 503.
The obtaining module 501 may be configured to obtain a first image and a second image captured of the same target object, and divide the first image and the second image into N detection regions according to the same rule, where the first image is an image obtained by controlling a camera before the flash is turned on, the second image is an image obtained by controlling the camera after the flash is turned on, and N is a natural number greater than 1.
Specifically, the obtaining module 501 may obtain the shooting parameters in advance in a preview state when the camera is turned on, control the camera to shoot the same target object according to the obtained shooting parameters, and control the camera to obtain the first image before the flash is turned on and the second image after the flash is turned on. After the first image and the second image are obtained, the obtaining module 501 may further divide the first image and the second image into N detection regions according to the same preset rule, where N is a natural number greater than 1 and the N detection regions in the first image correspond one-to-one to those in the second image.
The calculating module 502 may be configured to obtain the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the calculating module 502 may simultaneously traverse the N one-to-one corresponding detection regions in the first image and the second image, and may acquire the brightness difference between the corresponding detection regions one by one during the traversal.
The determining module 503 may be configured to determine the type of each detection area according to the acquired brightness difference, where the type includes a foreground area and a background area.
Specifically, the determining module 503 may compare the acquired brightness difference with a preset threshold; if the acquired brightness difference is smaller than the preset threshold, it may determine that the type of the detection area is a background area, and if the acquired brightness difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
Compared with the prior art, in this embodiment the apparatus can, for the same target object, control the camera to acquire the first image before the flash is turned on and the second image after the flash is turned on, and can divide the first image and the second image into N detection areas respectively according to the same rule. Then, by acquiring the brightness differences of the corresponding detection areas in the first image and the second image one by one, the type of each detection area can be determined. That is, the type of each detection region is determined directly from the luminance difference between the corresponding detection regions of the two images, so the foreground and background regions in the first image and the second image can be identified automatically. Moreover, the terminal can do this without an additional second camera, which reduces hardware cost.
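Purely for illustration, the three modules of apparatus 500 might be organized as follows, reusing the helpers sketched in the first embodiment; the class and method names are assumptions.

```python
class ImageProcessingApparatus:
    """Sketch of apparatus 500 with its three logical modules."""

    def __init__(self, rows: int = 9, cols: int = 10, threshold: float = 30.0):
        self.rows, self.cols, self.threshold = rows, cols, threshold

    def obtain(self, first_image, second_image):
        # Obtaining module 501: divide both images by the same rule.
        return (divide_into_regions(first_image, self.rows, self.cols),
                divide_into_regions(second_image, self.rows, self.cols))

    def calculate(self, regions_first, regions_second):
        # Calculating module 502: per-region luminance differences.
        return luminance_differences(regions_first, regions_second)

    def determine(self, differences):
        # Determining module 503: threshold into foreground/background.
        return classify_regions(differences, self.threshold)
```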
It should be understood that this embodiment is an apparatus example corresponding to the first embodiment, and may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment and are not repeated here in order to reduce repetition. Correspondingly, the related technical details mentioned in this embodiment can also be applied to the first embodiment.
A sixth embodiment of the present invention relates to an image processing apparatus. The sixth embodiment is a further improvement on the fifth embodiment, the main improvement being that the image processing apparatus further includes an image copying module and an image synthesis module, as shown in fig. 6.
The image processing apparatus 500 includes an obtaining module 501, a calculating module 502, a determining module 503, an image copying module 504, and an image synthesis module 505.
The obtaining module 501 may be configured to obtain a first image and a second image captured of the same target object, and divide the first image and the second image into N detection regions according to the same rule, where the first image is an image obtained by controlling a camera before the flash is turned on, the second image is an image obtained by controlling the camera after the flash is turned on, and N is a natural number greater than 1.
Specifically, the obtaining module 501 may obtain the shooting parameters in advance in a preview state when the camera is turned on, control the camera to shoot the same target object according to the obtained shooting parameters, and control the camera to obtain the first image before the flash is turned on and the second image after the flash is turned on. After the first image and the second image are obtained, the obtaining module 501 may further divide the first image and the second image into N detection regions according to the same preset rule, where N is a natural number greater than 1 and the N detection regions in the first image correspond one-to-one to those in the second image.
The calculating module 502 may be configured to obtain the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the calculating module 502 may simultaneously traverse the N one-to-one corresponding detection regions in the first image and the second image, and may acquire the brightness difference between the corresponding detection regions one by one during the traversal.
The determining module 503 may be configured to determine the type of each detection region according to the acquired brightness difference, where the type includes a foreground region and a background region; the determining module 503 further includes a threshold comparison sub-module 5031 and a type detection sub-module 5032.
The threshold comparison sub-module 5031 may be configured to compare the brightness difference of the corresponding detection regions in the first image and the second image obtained by the calculating module 502 with a preset threshold.
The type detection sub-module 5032 may be configured to determine the type of the detection region as a background region when the comparison result of the threshold comparison sub-module 5031 is that the brightness difference is smaller than the preset threshold, and determine the type of the detection region as a foreground region when the comparison result is that the brightness difference is greater than or equal to the preset threshold.
The image copying module 504 may be configured to copy the image in the foreground-type detection area onto the first image layer, and copy the image in the background-type detection area onto the second image layer.
Specifically, the image copying module 504 may be configured to copy the image in the foreground-type detection area in the first image or the second image onto the first image layer, and copy the image in the background-type detection area in the first image or the second image onto the second image layer.
The image synthesis module 505 may be configured to synthesize the first image layer and the second image layer to obtain the layer-synthesized image.
Specifically, the image synthesis module 505 may synthesize the first image layer and the second image layer by overlaying the first image layer on the second image layer, thereby obtaining the synthesized image.
In this embodiment, the images in the detection areas of the foreground type and of the background type are copied onto two image layers respectively, and the two image layers are synthesized into one image. Because the two layers are superimposed during synthesis, the foreground-type regions and the background-type regions of the synthesized image have different apparent distances, so an image with a depth-of-field effect can be obtained.
It should be understood that this embodiment is an apparatus example corresponding to the second embodiment, and may be implemented in cooperation with the second embodiment. The related technical details mentioned in the second embodiment are still valid in this embodiment and are not repeated here in order to reduce repetition. Correspondingly, the related technical details mentioned in this embodiment can also be applied to the second embodiment.
A seventh embodiment of the present invention relates to an image processing apparatus. The seventh embodiment is a further optimization of the fifth embodiment, the main optimization being that the image processing apparatus further includes a detection module and a critical value comparison module, as shown in fig. 7 and 8.
The image processing apparatus 500 includes an obtaining module 501, a calculating module 502, a determining module 503, an image copying module 504, an image synthesis module 505, a detection module 506, and a critical value comparison module 507.
The detection module 506 may be used to detect the illumination intensity.
Specifically, the detection module 506 may detect the current illumination intensity in advance and save the detected illumination intensity.
The critical value comparison module 507 may be configured to compare the illumination intensity detected by the detection module 506 with a preset critical value.
The obtaining module 501 includes a first shooting sub-module 5011 and a second shooting sub-module 5012.
The first shooting sub-module 5011 may be configured to acquire a first image and a second image captured of the same target object with a first shooting parameter when the comparison result of the critical value comparison module 507 is that the detected illumination intensity is greater than or equal to the preset critical value, and to divide the first image and the second image into N detection regions respectively according to the same rule, where N is a natural number greater than 1 and the first shooting parameter is the shooting parameter acquired in a preview state before the flash is turned on.
The second shooting sub-module 5012 may be configured to acquire a first image and a second image captured of the same target object with a second shooting parameter when the comparison result of the critical value comparison module 507 is that the detected illumination intensity is smaller than the preset critical value, and to divide the first image and the second image into N detection regions respectively according to the same rule, where N is a natural number greater than 1 and the second shooting parameter is the shooting parameter acquired in a preview state while the flash is pre-flashing.
The calculating module 502 may be configured to obtain the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the calculating module 502 may simultaneously traverse the N one-to-one corresponding detection regions in the first image and the second image, and may acquire the brightness difference between the corresponding detection regions one by one during the traversal.
The determining module 503 may be configured to determine the type of each detection area according to the acquired brightness difference, where the type includes a foreground area and a background area.
Specifically, the determining module 503 may compare the acquired brightness difference with a preset threshold; if the acquired brightness difference is smaller than the preset threshold, it may determine that the type of the detection area is a background area, and if the acquired brightness difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
The image copying module 504 includes a foreground region image copying sub-module 5041 and a background region image copying sub-module 5042.
The foreground region image copying sub-module 5041 may be configured to copy the image in the foreground-type detection region of the first image onto the first image layer when the comparison result of the critical value comparison module 507 is that the detected illumination intensity is greater than or equal to the preset critical value, and to copy the image in the foreground-type detection region of the second image onto the first image layer when the comparison result is that the detected illumination intensity is less than the preset critical value.
The background region image copying sub-module 5042 may be configured to copy the image in the background-type detection region of the first image onto the second image layer when the comparison result of the critical value comparison module 507 is that the detected illumination intensity is greater than or equal to the preset critical value, and to copy the image in the background-type detection region of the second image onto the second image layer when the comparison result is that the detected illumination intensity is less than the preset critical value.
The image synthesis module 505 may be configured to synthesize the first image layer and the second image layer to obtain the layer-synthesized image.
Specifically, the image synthesis module 505 may synthesize the first image layer and the second image layer by overlaying the first image layer on the second image layer, thereby obtaining the synthesized image.
In this embodiment, the first image and the second image are acquired with the first shooting parameter when the detected illumination intensity is greater than or equal to the preset critical value, and with the second shooting parameter when the detected illumination intensity is less than the preset critical value, so that the acquired first image and second image are clearer. Furthermore, according to how the detected illumination intensity compares with the preset critical value, the images in the foreground-type and background-type detection areas of the first image, or those of the second image, are copied onto the two image layers respectively. This ensures that the images on the two layers to be synthesized come from the same image, so that the synthesized image is clearer and the depth-of-field effect is better.
It should be understood that this embodiment is an apparatus example corresponding to the third embodiment, and may be implemented in cooperation with the third embodiment. The related technical details mentioned in the third embodiment are still valid in this embodiment and are not repeated here in order to reduce repetition. Correspondingly, the related technical details mentioned in this embodiment can also be applied to the third embodiment.
An eighth embodiment of the present invention relates to an image processing apparatus. The eighth embodiment is a further improvement on the sixth embodiment, the main improvement being that the image processing apparatus further includes an information box prompt module and an image blur processing module, as shown in fig. 9.
The image processing apparatus 500 includes an obtaining module 501, a calculating module 502, a determining module 503, an image copying module 504, an image synthesis module 505, an information box prompt module 508, and an image blur processing module 509.
The obtaining module 501 may be configured to obtain a first image and a second image captured of the same target object, and divide the first image and the second image into N detection regions according to the same rule, where the first image is an image obtained by controlling a camera before the flash is turned on, the second image is an image obtained by controlling the camera after the flash is turned on, and N is a natural number greater than 1.
Specifically, the obtaining module 501 may obtain the shooting parameters in advance in a preview state when the camera is turned on, control the camera to shoot the same target object according to the obtained shooting parameters, and control the camera to obtain the first image before the flash is turned on and the second image after the flash is turned on. After the first image and the second image are obtained, the obtaining module 501 may further divide the first image and the second image into N detection regions according to the same preset rule, where N is a natural number greater than 1 and the N detection regions in the first image correspond one-to-one to those in the second image.
The calculating module 502 may be configured to obtain the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the calculating module 502 may simultaneously traverse the N one-to-one corresponding detection regions in the first image and the second image, and may acquire the brightness difference between the corresponding detection regions one by one during the traversal.
The determining module 503 may be configured to determine the type of each detection area according to the acquired brightness difference, where the type includes a foreground area and a background area.
Specifically, the determining module 503 may compare the acquired brightness difference with a preset threshold; if the acquired brightness difference is smaller than the preset threshold, the determining module 503 may determine that the type of the detection area is a background area, and if the acquired brightness difference is greater than or equal to the preset threshold, the determining module 503 may determine that the type of the detection area is a foreground area.
The image copying module 504 may be configured to copy the image in the foreground-type detection area onto the first image layer, and copy the image in the background-type detection area onto the second image layer. The image copying module 504 may be further configured to trigger the information box prompt module 508 after copying the image in the background-type detection area onto the second layer.
The information box prompt module 508 may be used to pop up an information box including a blur level after being triggered by the image copying module 504.
The image blur processing module 509 may be configured to acquire the blur level input by the user, and blur the image on the second layer according to the blur level input by the user.
Specifically, after the information box prompt module 508 pops up the information box including the blur level, the image blur processing module 509 may acquire the blur level input by the user and blur the image on the second layer accordingly.
The image synthesis module 505 may be configured to synthesize the first image layer and the second image layer to obtain the layer-synthesized image.
Specifically, the image synthesis module 505 may synthesize the first image layer and the second image layer by overlaying the first image layer on the second image layer, thereby obtaining the synthesized image.
In this embodiment, the image in the background-type detection area on the second layer is blurred according to the blur level input by the user, so that the background-type regions of the layer-synthesized image are blurred while the foreground-type regions remain sharp. The foreground is thus more prominent, the separation between foreground and background is more obvious, and the depth-of-field effect of the image is better.
It should be understood that this embodiment is an apparatus example corresponding to the fourth embodiment, and may be implemented in cooperation with the fourth embodiment. The related technical details mentioned in the fourth embodiment are still valid in this embodiment and are not repeated here in order to reduce repetition. Correspondingly, the related technical details mentioned in this embodiment can also be applied to the fourth embodiment.
It should be noted that each module referred to in this embodiment is a logical module; in practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units that are not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
Those skilled in the art will understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Those of ordinary skill in the art will understand that the foregoing embodiments are specific examples of carrying out the invention, and that in practice various changes in form and detail may be made to them without departing from the spirit and scope of the invention.