CN107392087B - Image processing method and device


Info

Publication number
CN107392087B
Authority
CN
China
Prior art keywords
image
type
module
area
detection
Prior art date
Legal status
Active
Application number
CN201710392619.5A
Other languages
Chinese (zh)
Other versions
CN107392087A (en)
Inventor
钱捷
Current Assignee
Huaqin Technology Co Ltd
Original Assignee
Huaqin Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Huaqin Technology Co Ltd
Priority to CN201710392619.5A
Publication of CN107392087A
Application granted
Publication of CN107392087B
Status: Active
Anticipated expiration

Abstract

The invention relates to the technical field of image processing, and discloses an image processing method and device. The image processing method comprises the following steps: acquiring a first image and a second image captured for the same target object, wherein the first image is acquired by controlling a camera before the flash is turned on, and the second image is acquired by controlling the camera after the flash is turned on; dividing the first image and the second image into N detection areas according to the same rule, N being a natural number greater than 1; acquiring the brightness differences of the corresponding detection areas in the first image and the second image one by one; and determining the type of each detection area according to the acquired brightness difference, the type comprising a foreground area and a background area. The embodiment of the invention also discloses an image processing device. In this way, the foreground and background areas of an image can be automatically identified while the hardware cost is reduced.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
At present, a camera is a standard component of most mobile terminals, and as camera performance keeps improving, the photographing function of mobile terminals grows ever more powerful, which is convenient for users who like taking pictures.
In the prior art, because of the limitations of camera and sensor hardware, it is difficult for a mobile terminal to capture a photo with a shallow depth of field. To do so, some mobile terminals adopt a dual-camera design and adjust the focal length through the two cameras, thereby capturing a photo with a shallow depth of field.
However, the inventors found that: on the one hand, when a mobile terminal without dual cameras takes a picture, the foreground and background areas of the resulting image cannot be automatically identified, so the photo usually has no depth-of-field effect; on the other hand, although a mobile terminal with dual cameras can capture a picture with a depth-of-field effect, the dual-camera design increases the hardware cost.
Disclosure of Invention
An object of embodiments of the present invention is to provide an image processing method and apparatus, which can automatically identify foreground and background regions of an image and reduce hardware cost.
To solve the above technical problem, an embodiment of the present invention provides an image processing method, including: acquiring a first image and a second image captured for the same target object, wherein the first image is acquired by controlling a camera before the flash is turned on, and the second image is acquired by controlling the camera after the flash is turned on; dividing the first image and the second image into N detection areas according to the same rule, N being a natural number greater than 1; acquiring the brightness differences of the corresponding detection areas in the first image and the second image one by one; and determining the type of each detection area according to the acquired brightness difference, the type comprising a foreground area and a background area.
An embodiment of the present invention also provides an image processing apparatus, including an acquisition module, a calculation module and a determination module. The acquisition module is used for acquiring a first image and a second image captured for the same target object, the first image being acquired by controlling a camera before the flash is turned on and the second image being acquired by controlling the camera after the flash is turned on, and for dividing the first image and the second image into N detection areas according to the same rule, N being a natural number greater than 1. The calculation module is used for acquiring the brightness differences of the corresponding detection areas in the first image and the second image one by one. The determination module is used for determining the type of each detection area according to the acquired brightness difference, the type comprising a foreground area and a background area.
Compared with the prior art, in embodiments of the present invention the camera is controlled to acquire the first image before the flash is turned on and the second image after the flash is turned on for the same target object, and the first image and the second image are divided into N detection areas according to the same rule. Then, by acquiring the brightness differences of the corresponding detection areas in the first image and the second image one by one, the type of each detection area can be determined. In this way, the type of each detection area is determined directly from the brightness difference between the corresponding detection areas of the two images, so the foreground and background areas in the first image and the second image can be automatically identified. Meanwhile, the terminal identifies the foreground and background areas of an image without a dual-camera arrangement, so the hardware cost can be reduced.
In addition, before acquiring the first image and the second image captured for the same target object, the method further includes: detecting the illumination intensity; acquiring a first image and a second image which are shot aiming at the same target object, and specifically comprising the following steps: if the detected illumination intensity is larger than or equal to a preset critical value, acquiring a first image and a second image which are shot aiming at the same target object by using a first shooting parameter; the first shooting parameter is a shooting parameter acquired in a preview state before the flash lamp is started; if the detected illumination intensity is smaller than a preset critical value, acquiring a first image and a second image according to a second shooting parameter; and the second shooting parameter is the shooting parameter acquired in a preview state during the pre-flash of the flash lamp. In the embodiment of the invention, when the detected illumination intensity is greater than or equal to the preset critical value, the first image and the second image are obtained by the first shooting parameter, and when the detected illumination intensity is less than the preset critical value, the first image and the second image are obtained by the second shooting parameter, so that the obtained first image and the obtained second image are clearer.
In addition, after determining the type of each detection area according to the acquired brightness difference, the method further includes: copying an image in a detection area with the type of a foreground area to a first image layer; copying the image in the detection area with the type of the background area to a second image layer; and synthesizing the first image layer and the second image layer to obtain the image with the synthesized image layers. In the embodiment of the invention, the images in the detection areas with the types of the foreground area and the background area are respectively copied to the two image layers, and the two image layers are synthesized to obtain the synthesized image. In this way, the two image layers are superposed and synthesized, so that the image in the detection area with the foreground type and the image in the detection area with the background type in the synthesized image have different visual distances, and the image with the depth of field effect can be obtained.
In addition, before acquiring the first image and the second image captured for the same target object, the method further includes: detecting the illumination intensity. Copying the image in a detection area whose type is the foreground area onto the first image layer specifically includes: if the detected illumination intensity is greater than or equal to the preset critical value, copying the images in the detection areas whose type is the foreground area in the first image onto the first image layer; if the detected illumination intensity is smaller than the preset critical value, copying the images in the detection areas whose type is the foreground area in the second image onto the first image layer. Copying the image in a detection area whose type is the background area onto the second image layer specifically includes: if the detected illumination intensity is greater than or equal to the preset critical value, copying the images in the detection areas whose type is the background area in the first image onto the second image layer; if the detected illumination intensity is smaller than the preset critical value, copying the images in the detection areas whose type is the background area in the second image onto the second image layer. In the embodiment of the invention, according to the magnitude relation between the detected illumination intensity and the preset critical value, the foreground-area and background-area images of either the first image or the second image are copied onto the two image layers respectively, which ensures that the images on the two layers to be synthesized come from the same image, so that the synthesized image is clearer and the depth-of-field effect is better.
In addition, after copying the image in the detection area whose type is the background area onto the second image layer and before synthesizing the first image layer and the second image layer, the method further comprises: popping up an information box containing blur levels; and acquiring the blur level input by the user and blurring the image on the second image layer according to it. In the embodiment of the invention, the background-area image on the second layer is blurred according to the blur level input by the user, so that in the synthesized image the background areas are blurred while the foreground areas remain sharp. The foreground is thus more prominent, the separation between foreground and background is more obvious, and the depth-of-field effect of the image is better.
Drawings
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of an image processing method according to a second embodiment of the present invention;
Fig. 3 is a flowchart of an image processing method according to a third embodiment of the present invention;
Fig. 4 is a flowchart of an image processing method according to a fourth embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an image processing apparatus according to a fifth embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an image processing apparatus according to a sixth embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an image processing apparatus according to a seventh embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an acquisition module according to the seventh embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an image processing apparatus according to an eighth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in these embodiments to help the reader better understand the present application; the technical solution claimed in the present application can nevertheless be implemented without these technical details, or with various changes and modifications based on the following embodiments.
A first embodiment of the present invention relates to an image processing method including:
step 101: the method comprises the steps of obtaining a first image and a second image which are shot aiming at the same target object, and dividing the first image and the second image into N detection areas respectively according to the same rule. The first image is an image obtained by controlling the camera before the flash lamp is started, the second image is an image obtained by controlling the camera after the flash lamp is started, and N is a natural number greater than 1.
Specifically, the terminal may acquire the shooting parameters in advance in a preview state when the camera is turned on, and control the camera to shoot the same target object according to the acquired shooting parameters, and may control the camera to acquire the first image before the flash is turned on and control the camera to acquire the second image after the flash is turned on. After the first image and the second image are obtained, the terminal can also divide the first image and the second image into N detection areas respectively according to the same preset rule, wherein N is a natural number greater than 1, and the N detection areas in the first image and the second image correspond to each other one by one.
For example, after the terminal controls the camera to capture the first image and the second image for the same target object, the first image and the second image may be equally divided into 90 detection regions having the same size.
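As a rough illustration (not part of the patent text), the division step might look like the following Python sketch, assuming grayscale numpy arrays and a hypothetical 9×10 grid, which yields the 90 equally sized detection regions of the example:

```python
import numpy as np

def divide_into_regions(image: np.ndarray, rows: int = 9, cols: int = 10):
    """Split an image into rows * cols equally sized detection regions.

    Returns a list of (row, col, region) tuples; edge pixels that do not
    divide evenly are ignored for simplicity.
    """
    height, width = image.shape[:2]
    rh, rw = height // rows, width // cols
    return [(r, c, image[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw])
            for r in range(rows)
            for c in range(cols)]
```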
Step 102: and acquiring the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the terminal may simultaneously traverse N detection regions in the first image and the second image, which correspond to each other one by one, and may acquire the luminance difference between the detection regions in the first image and the second image one by one during the traversal of the detection regions.
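Continuing the sketch above (and reusing divide_into_regions and numpy from it), the traversal could accumulate one mean-luminance difference per region; the mean is one plausible brightness measure, not necessarily the one the patent intends:

```python
def region_brightness_diffs(first_image, second_image, rows=9, cols=10):
    """Mean-luminance difference (flash on minus flash off) for each of the
    rows * cols detection regions, returned as a rows x cols array."""
    diffs = np.zeros((rows, cols))
    pairs = zip(divide_into_regions(first_image, rows, cols),
                divide_into_regions(second_image, rows, cols))
    for (r, c, before), (_, _, after) in pairs:
        diffs[r, c] = float(after.mean()) - float(before.mean())
    return diffs
```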
Step 103: and determining the type of each detection area according to the acquired brightness difference, wherein the type comprises a foreground area and a background area.
Specifically, the terminal may compare the acquired luminance difference with a preset threshold; if the acquired luminance difference is smaller than the preset threshold, it may determine that the type of the detection area is a background area, and if the acquired luminance difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
For example, the terminal may number, in advance, 90 detection regions in the first image, which correspond to the second image in a one-to-one manner, from 1 to 90, and after acquiring luminance differences of the corresponding 1 st to 90 th detection regions, the terminal may determine a detection region having a luminance difference smaller than a preset threshold as a background region, and determine a detection region having a luminance difference greater than or equal to the preset threshold as a foreground region. For example, the terminal may determine that the first image and the second image respectively have 60 detection regions with types of background regions and 30 detection regions with types of foreground regions, which are in one-to-one correspondence.
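In code, the classification then reduces to a single threshold comparison; the default of 30 grey levels below is an arbitrary placeholder, not a value from the patent:

```python
def classify_regions(diffs, threshold=30.0):
    """Boolean foreground mask: regions whose brightness rose by at least
    `threshold` when the flash fired count as foreground, the rest as
    background (the flash brightens near objects far more than distant ones)."""
    return diffs >= threshold
```

With the 90-region example, classify_regions(diffs).sum() would report how many of the 90 regions were classified as foreground.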
Compared with the prior art, in embodiments of the present invention the camera is controlled to acquire the first image before the flash is turned on and the second image after the flash is turned on for the same target object, and the first image and the second image are divided into N detection areas according to the same rule. Then, by acquiring the brightness differences of the corresponding detection areas in the first image and the second image one by one, the type of each detection area can be determined. In this way, the type of each detection area is determined directly from the brightness difference between the corresponding detection areas of the two images, so the foreground and background areas in the first image and the second image can be automatically identified. Meanwhile, the terminal identifies the foreground and background areas of an image without a dual-camera arrangement, so the hardware cost can be reduced.
A second embodiment of the present invention relates to an image processing method. The second embodiment is further improved on the basis of the first embodiment, and the main improvement is that: in the second embodiment of the present invention, after determining the type of each detection area, images in the detection areas of which the types are a foreground area and a background area may be copied to two image layers, and then the two image layers are synthesized, so that an image with a depth of field effect may be obtained, as shown in fig. 2, the method includes:
step 201: the method comprises the steps of obtaining a first image and a second image which are shot aiming at the same target object, and dividing the first image and the second image into N detection areas respectively according to the same rule. The first image is an image obtained by controlling the camera before the flash lamp is started, the second image is an image obtained by controlling the camera after the flash lamp is started, and N is a natural number greater than 1.
Specifically, the terminal may acquire the shooting parameters in advance in a preview state when the camera is turned on, and control the camera to shoot the same target object according to the acquired shooting parameters, and may control the camera to acquire the first image before the flash is turned on and control the camera to acquire the second image after the flash is turned on. After the first image and the second image are obtained, the terminal can also divide the first image and the second image into N detection areas respectively according to the same preset rule, wherein N is a natural number greater than 1, and the N detection areas in the first image and the second image correspond to each other one by one.
For example, after the terminal controls the camera to capture the first image and the second image for the same target object, the first image and the second image may be equally divided into 90 detection regions having the same size.
Step 202: and acquiring the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the terminal may simultaneously traverse N detection regions in the first image and the second image, which correspond to each other one by one, and may acquire the luminance difference between the detection regions in the first image and the second image one by one during the traversal of the detection regions.
Step 203: and determining the type of each detection area according to the acquired brightness difference, wherein the type comprises a foreground area and a background area.
Specifically, the terminal may compare the acquired luminance difference with a preset threshold; if the acquired luminance difference is smaller than the preset threshold, it may determine that the type of the detection area is a background area, and if the acquired luminance difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
For example, the terminal may number, in advance, 90 detection regions in the first image, which correspond to the second image in a one-to-one manner, from 1 to 90, and after acquiring luminance differences of the corresponding 1 st to 90 th detection regions, the terminal may determine a detection region having a luminance difference smaller than a preset threshold as a background region, and determine a detection region having a luminance difference greater than or equal to the preset threshold as a foreground region. For example, the terminal may determine that the first image and the second image respectively have 60 detection regions with types of background regions and 30 detection regions with types of foreground regions, which are in one-to-one correspondence.
Step 204: and copying the image in the detection area of which the type is the foreground area in the first image or the second image onto the first image layer.
Specifically, the terminal may establish a first image layer, and may copy an image in a detection area, which is a foreground area in the first image or the second image, onto the first image layer.
Step 205: and copying the image in the detection area of which the type is the background area in the first image or the second image onto the second image layer.
Specifically, the terminal may establish a second image layer, and may copy an image in a detection area, which is a background area in the first image or the second image, onto the second image layer.
Step 206: and synthesizing the first image layer and the second image layer to obtain the image with the synthesized image layers.
Specifically, the terminal may synthesize the first layer and the second layer by overlaying the first layer on the second layer, thereby obtaining the synthesized image.
For example, the terminal may copy an image in a detection area of a foreground area in the first image onto the first layer, copy an image in a detection area of a background area in the second image onto the second layer, and may synthesize the first layer and the second layer by overlaying the first layer onto the second layer, thereby obtaining a synthesized image.
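A minimal sketch of steps 204 to 206 under the same grayscale-numpy assumptions, taking the foreground regions from one source image and the background regions from another (as in the example above) and overlaying the first layer on the second with a per-pixel mask:

```python
def compose_layers(foreground_source, background_source, foreground_mask):
    """Copy foreground regions onto a first layer and background regions onto
    a second layer, then overlay the first layer on the second."""
    rows, cols = foreground_mask.shape
    height, width = foreground_source.shape[:2]
    rh, rw = height // rows, width // cols
    first_layer = np.zeros_like(foreground_source)
    second_layer = np.zeros_like(background_source)
    covered = np.zeros((height, width), dtype=bool)  # pixels owned by layer 1
    for r in range(rows):
        for c in range(cols):
            block = (slice(r * rh, (r + 1) * rh), slice(c * rw, (c + 1) * rw))
            if foreground_mask[r, c]:
                first_layer[block] = foreground_source[block]
                covered[block] = True
            else:
                second_layer[block] = background_source[block]
    return np.where(covered, first_layer, second_layer)
```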
In the embodiment of the invention, the images in the detection areas with the types of the foreground area and the background area are respectively copied to the two image layers, and the two image layers are synthesized to obtain the synthesized image. In this way, the two image layers are superposed and synthesized, so that the image in the detection area with the foreground type and the image in the detection area with the background type in the synthesized image have different visual distances, and the image with the depth of field effect can be obtained.
A third embodiment of the present invention relates to an image processing method. The third embodiment is further optimized on the basis of the first embodiment, and the main optimization is as follows: in the third embodiment of the present invention, the terminal may detect the illumination intensity in advance, and may select one image from the first image and the second image according to the magnitude relationship between the detected illumination intensity and the preset critical value, so that the images in the detection areas, which are of the foreground area and the background area, in the selected image may be copied onto the two image layers, respectively, and then the two image layers are synthesized, so that the obtained synthesized image is clearer, and the depth of field effect is better. As shown in fig. 3, includes:
step 301: and detecting the illumination intensity.
Specifically, the terminal may detect the current illumination intensity in advance and save the detected value for the subsequent comparison.
Step 302: and judging whether the detected illumination intensity is smaller than a preset critical value or not.
Specifically, the terminal may compare the detected illumination intensity with the preset critical value, and may perform step 303 when the detected illumination intensity is smaller than the preset critical value, or step 309 when the detected illumination intensity is greater than or equal to the preset critical value.
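The dispatch in step 302 amounts to one comparison. In the sketch below, preview_params and preflash_params are hypothetical stand-ins for whatever exposure settings the terminal captured in the two preview states:

```python
def choose_shooting_parameters(illumination, critical_value,
                               preview_params, preflash_params):
    """Bright scenes reuse the plain preview parameters (step 309); dark
    scenes use the parameters measured during the flash pre-flash (step 303)."""
    if illumination >= critical_value:
        return preview_params   # step 309 branch
    return preflash_params      # step 303 branch
```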
Step 303: and acquiring the first image and the second image by using the second shooting parameter, wherein the second shooting parameter is the shooting parameter acquired in a preview state when a flash lamp is in pre-flash.
Specifically, when the detected illumination intensity is less than a preset critical value, the terminal may acquire the second photographing parameter in a preview state at the time of flash pre-flash, and may acquire the first image and the second image photographed for the same target object with the second photographing parameter. After the first image and the second image are obtained, the terminal may further divide the first image and the second image into N detection regions respectively according to the same rule, where N is a natural number greater than 1. For example, in the embodiment of the present invention, N may be 90.
Step 304: and acquiring the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the terminal may simultaneously traverse N detection regions in the first image and the second image, which correspond to each other one by one, and may acquire the luminance difference between the detection regions in the first image and the second image one by one during the traversal of the detection regions.
Step 305: and determining the type of each detection area according to the acquired brightness difference, wherein the type comprises a foreground area and a background area.
Specifically, the terminal may compare the acquired luminance difference with a preset threshold; if the acquired luminance difference is smaller than the preset threshold, it may determine that the type of the detection area is a background area, and if the acquired luminance difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
Step 306: and copying the image in the detection area of which the type is the foreground area in the second image onto the first image layer.
Specifically, the terminal may establish a first image layer, and may copy an image in a detection area of the foreground area type in the second image onto the first image layer.
Step 307: and copying the image in the detection area of which the type is the background area in the second image onto the second image layer.
Specifically, the terminal may establish a second image layer, and may copy an image in a detection area of the type of the background area in the second image onto the second image layer.
Step 308: and synthesizing the first image layer and the second image layer to obtain the image with the synthesized image layers.
Specifically, the terminal may synthesize the first layer and the second layer by overlaying the first layer on the second layer, thereby obtaining the synthesized image.
Step 309: and acquiring a first image and a second image which are shot aiming at the same target object by using the first shooting parameters. The first shooting parameter is a shooting parameter acquired in a preview state before the flash is started.
Specifically, when the detected light intensity is greater than or equal to a preset critical value, the terminal may acquire a first photographing parameter in a preview state before the flash is turned on, and may acquire a first image and a second image photographed for the same target object with the first photographing parameter. After the first image and the second image are obtained, the terminal may further divide the first image and the second image into N detection regions respectively according to the same rule, where N is a natural number greater than 1. For example, in the embodiment of the present invention, N may be 90.
Step 310: and acquiring the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the terminal may simultaneously traverse N detection regions in the first image and the second image, which correspond to each other one by one, and may acquire the luminance difference between the detection regions in the first image and the second image one by one during the traversal of the detection regions.
Step 311: and determining the type of each detection area according to the acquired brightness difference.
Specifically, the terminal may compare the acquired luminance difference with a preset threshold; if the acquired luminance difference is smaller than the preset threshold, it may determine that the type of the detection area is a background area, and if the acquired luminance difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
Step 312: and copying the image in the detection area with the foreground area type in the first image onto the first image layer.
Specifically, the terminal may establish a first image layer, and may copy an image in a detection area of the foreground area type in the first image onto the first image layer.
Step 313: and copying the image in the detection area with the type of the background area in the first image onto the second image layer.
Specifically, the terminal may establish a second image layer and copy the images in the detection areas whose type is the background area in the first image onto it, after which step 308 may be performed.
In the embodiment of the invention, when the detected illumination intensity is greater than or equal to the preset critical value, the first image and the second image are acquired with the first shooting parameter, and when the detected illumination intensity is smaller than the preset critical value, they are acquired with the second shooting parameter, so that the acquired first image and second image are clearer. According to the magnitude relation between the detected illumination intensity and the preset critical value, the foreground-area and background-area images of either the first image or the second image are copied onto the two image layers respectively, which ensures that the images on the two image layers to be synthesized come from the same image, so that the synthesized image is clearer and the depth-of-field effect is better.
A fourth embodiment of the present invention relates to an image processing method. The fourth embodiment is further improved on the basis of the second embodiment, and the main improvement lies in that: in the fourth embodiment of the present invention, after copying the image in the detection area of the background area to the second layer and before combining the first layer and the second layer, the terminal may further perform a blurring process on the image in the second layer, so that the image in the detection area of the foreground area in the combined image is more prominent and the depth of field effect is better. As shown in fig. 4, includes:
step 401: the method comprises the steps of obtaining a first image and a second image which are shot aiming at the same target object, and dividing the first image and the second image into N detection areas respectively according to the same rule. The first image is an image obtained by controlling the camera before the flash lamp is started, the second image is an image obtained by controlling the camera after the flash lamp is started, and N is a natural number greater than 1.
Specifically, the terminal may acquire the shooting parameters in advance in a preview state when the camera is turned on, and control the camera to shoot the same target object according to the acquired shooting parameters, and may control the camera to acquire the first image before the flash is turned on and control the camera to acquire the second image after the flash is turned on. After the first image and the second image are obtained, the terminal can also divide the first image and the second image into N detection areas respectively according to the same preset rule, wherein N is a natural number greater than 1, and the N detection areas in the first image and the second image correspond to each other one by one.
For example, after the terminal controls the camera to capture the first image and the second image for the same target object, the first image and the second image may be equally divided into 90 detection regions having the same size.
Step 402: and acquiring the brightness difference of the corresponding detection areas in the first image and the second image one by one.
Specifically, the terminal may simultaneously traverse N detection regions in the first image and the second image, which correspond to each other one by one, and may acquire the luminance difference between the detection regions in the first image and the second image one by one during the traversal of the detection regions.
Step 403: and determining the type of each detection area according to the acquired brightness difference, wherein the type comprises a foreground area and a background area.
Specifically, the terminal may compare the acquired luminance difference with a preset threshold; if the acquired luminance difference is smaller than the preset threshold, it may determine that the type of the detection area is a background area, and if the acquired luminance difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
Step 404: and copying the image in the detection area with the type of the foreground area to the first image layer.
Specifically, the terminal may establish a first image layer, and may copy an image in a detection area, which is a foreground area in the first image or the second image, onto the first image layer.
Step 405: and copying the image in the detection area with the type of the background area to the second image layer.
Specifically, the terminal may establish a second image layer, and may copy an image in a detection area, which is a background area in the first image or the second image, onto the second image layer.
Step 406: An information box containing blur levels pops up.
Specifically, after copying the images in the detection areas whose type is the background area in the first image or the second image onto the second image layer, the terminal may pop up an information box containing blur levels, so that it can then blur the image on the second image layer.
Step 407: And acquiring the blur level input by the user, and blurring the image on the second image layer according to the blur level input by the user.
Specifically, after popping up the information box containing blur levels, the terminal may acquire the blur level input by the user and blur the image on the second layer accordingly.
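As an illustration only, the blur step could map the user-selected level to the strength of a Gaussian filter; the 1.5-sigma-per-level mapping below is an assumption, not something the patent specifies:

```python
from scipy.ndimage import gaussian_filter

def blur_background_layer(second_layer, blur_level):
    """Blur the background layer; a higher user-selected level blurs more."""
    sigma = 1.5 * max(0, blur_level)  # assumed mapping from level to sigma
    return gaussian_filter(second_layer, sigma=sigma)
```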
Step 408: and synthesizing the first image layer and the second image layer to obtain the image with the synthesized image layers.
Specifically, the terminal may synthesize the first layer and the second layer by overlaying the first layer on the second layer, thereby obtaining the synthesized image.
For example, the terminal may copy an image in a detection area of a foreground area in the first image onto the first layer, copy an image in a detection area of a background area in the second image onto the second layer, perform blurring processing on the image on the second layer according to information of a blurring level input by a user, and then synthesize the first layer and the second layer by overlaying the first layer on the second layer, thereby obtaining a synthesized image.
In the embodiment of the invention, the background-area image on the second layer is blurred according to the blur level input by the user, so that in the layer-synthesized image the background areas are blurred while the foreground areas remain sharp. The foreground is thus more prominent, the separation between foreground and background is more obvious, and the depth-of-field effect of the image is better.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into one step, or a step may be split into multiple steps, and as long as the same logical relationship is preserved, such variants are within the protection scope of this patent. Adding insignificant modifications to the algorithm or process, or introducing insignificant design changes without changing the core design of the algorithm or process, also falls within the protection scope of this patent.
A fifth embodiment of the present invention relates to an image processing apparatus including an acquisition module, a calculation module, and a determination module, as shown in fig. 5.
The image processing apparatus 500 includes an acquisition module 501, a calculation module 502, and a determination module 503.
The acquisition module 501 may be configured to acquire a first image and a second image captured for the same target object and divide the first image and the second image into N detection regions according to the same rule, where the first image is acquired by controlling the camera before the flash is turned on, the second image is acquired by controlling the camera after the flash is turned on, and N is a natural number greater than 1.
Specifically, the acquisition module 501 may acquire the shooting parameters in advance in a preview state when the camera is turned on, control the camera to shoot the same target object according to the acquired shooting parameters, and control the camera to acquire the first image before the flash is turned on and the second image after the flash is turned on. After the first image and the second image are acquired, the acquisition module 501 may further divide the first image and the second image into N detection regions according to the same preset rule, where N is a natural number greater than 1 and the N detection regions in the first image correspond one-to-one to those in the second image.
The calculation module 502 may be configured to acquire the brightness differences of the corresponding detection areas in the first image and the second image one by one.
Specifically, the calculation module 502 may simultaneously traverse the N one-to-one corresponding detection regions in the first image and the second image, and acquire the brightness difference between each pair of corresponding detection regions one by one during the traversal.
The determination module 503 may be configured to determine the type of each detection area according to the acquired brightness difference, where the type includes a foreground area and a background area.
Specifically, the determination module 503 may compare the acquired brightness difference with a preset threshold; if the acquired brightness difference is smaller than the preset threshold, it may determine that the type of the detection area is a background area, and if the acquired brightness difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
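Tying the earlier sketches together, the 500-series apparatus could be mirrored by a small class whose process method chains the acquisition, calculation and determination stages; the module split is the patent's, while the code (and the default threshold) is purely illustrative:

```python
class ImageProcessingApparatus:
    """Illustrative counterpart of apparatus 500: modules 501-503 as stages."""

    def __init__(self, rows=9, cols=10, threshold=30.0):
        self.rows, self.cols, self.threshold = rows, cols, threshold

    def process(self, first_image, second_image):
        # The acquisition module 501 is assumed to have already captured and
        # grid-aligned the two images.
        diffs = region_brightness_diffs(first_image, second_image,
                                        self.rows, self.cols)  # module 502
        return classify_regions(diffs, self.threshold)  # module 503, True = foreground
```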
Compared with the prior art, in embodiments of the present invention the camera is controlled to acquire the first image before the flash is turned on and the second image after the flash is turned on for the same target object, and the first image and the second image are divided into N detection areas according to the same rule. Then, by acquiring the brightness differences of the corresponding detection areas in the first image and the second image one by one, the type of each detection area can be determined. In this way, the type of each detection area is determined directly from the brightness difference between the corresponding detection areas of the two images, so the foreground and background areas in the first image and the second image can be automatically identified. Meanwhile, the terminal identifies the foreground and background areas of an image without a dual-camera arrangement, so the hardware cost can be reduced.
It should be understood that this embodiment is an example of the apparatus corresponding to the first embodiment, and may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
A sixth embodiment of the present invention relates to an image processing apparatus. The sixth embodiment is a further improvement on the fifth embodiment, and the main improvement lies in that: in the sixth embodiment, the image processing apparatus further includes an image copying module and an image synthesizing module, as shown in fig. 6.
The image processing apparatus 500 includes an acquisition module 501, a calculation module 502, a determination module 503, an image copying module 504, and an image synthesis module 505.
The acquisition module 501 may be configured to acquire a first image and a second image captured for the same target object and divide the first image and the second image into N detection regions according to the same rule, where the first image is acquired by controlling the camera before the flash is turned on, the second image is acquired by controlling the camera after the flash is turned on, and N is a natural number greater than 1.
Specifically, the acquisition module 501 may acquire the shooting parameters in advance in a preview state when the camera is turned on, control the camera to shoot the same target object according to the acquired shooting parameters, and control the camera to acquire the first image before the flash is turned on and the second image after the flash is turned on. After the first image and the second image are acquired, the acquisition module 501 may further divide the first image and the second image into N detection regions according to the same preset rule, where N is a natural number greater than 1 and the N detection regions in the first image correspond one-to-one to those in the second image.
The calculation module 502 may be configured to acquire the brightness differences of the corresponding detection areas in the first image and the second image one by one.
Specifically, the calculation module 502 may simultaneously traverse the N one-to-one corresponding detection regions in the first image and the second image, and acquire the brightness difference between each pair of corresponding detection regions one by one during the traversal.
The determination module 503 may be configured to determine the type of each detection region according to the acquired brightness difference, where the type includes a foreground region and a background region. The determination module 503 further includes a threshold comparison sub-module 5031 and a type detection sub-module 5032.
The threshold comparison sub-module 5031 may be configured to compare the brightness difference of the corresponding detection regions in the first image and the second image acquired by the calculation module 502 with a preset threshold.
The type detection sub-module 5032 may be configured to determine the type of a detection region as a background region when the comparison result of the threshold comparison sub-module 5031 is that the brightness difference is smaller than the preset threshold, and as a foreground region when the comparison result is that the brightness difference is greater than or equal to the preset threshold.
The image copying module 504 may be configured to copy the images in the detection areas whose type is the foreground area onto the first image layer, and the images in the detection areas whose type is the background area onto the second image layer.
Specifically, the image copying module 504 may copy the images in the detection areas whose type is the foreground area in the first image or the second image onto the first image layer, and the images in the detection areas whose type is the background area in the first image or the second image onto the second image layer.
The image synthesis module 505 may be configured to synthesize the first image layer and the second image layer to obtain the layer-synthesized image.
Specifically, the image synthesis module 505 may synthesize the first image layer and the second image layer by overlaying the first image layer on the second image layer, thereby obtaining the synthesized image.
In the embodiment of the invention, the images in the detection areas with the types of the foreground area and the background area are respectively copied to the two image layers, and the two image layers are synthesized to obtain the synthesized image. In this way, the two image layers are superposed and synthesized, so that the image in the detection area with the foreground type and the image in the detection area with the background type in the synthesized image have different visual distances, and the image with the depth of field effect can be obtained.
It should be understood that this embodiment is an example of the apparatus corresponding to the second embodiment, and that this embodiment can be implemented in cooperation with the second embodiment. The related technical details mentioned in the second embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the second embodiment.
A seventh embodiment of the present invention relates to an image processing apparatus. The seventh embodiment is further optimized on the basis of the fifth embodiment, and the main optimization is as follows: in the seventh embodiment of the present invention, the image processing apparatus further includes a detection module and a critical value comparison module, as shown in fig. 7 and 8.
The image processing apparatus 500 includes an acquisition module 501, a calculation module 502, a determination module 503, an image copying module 504, an image synthesis module 505, a detection module 506, and a critical value comparison module 507.
The detection module 506 may be used to detect the illumination intensity.
Specifically, the detection module 506 may detect the current illumination intensity in advance and save the detected value for the subsequent comparison.
The critical value comparison module 507 may be configured to compare the illumination intensity detected by the detection module 506 with a preset critical value.
The acquisition module 501 includes a first shooting sub-module 5011 and a second shooting sub-module 5012.
The first shooting sub-module 5011 may be configured to acquire, with a first shooting parameter, a first image and a second image captured for the same target object when the comparison result of the critical value comparison module 507 is that the detected illumination intensity is greater than or equal to the preset critical value, and to divide the first image and the second image into N detection regions according to the same rule, where N is a natural number greater than 1 and the first shooting parameter is the shooting parameter acquired in a preview state before the flash is turned on.
The second shooting sub-module 5012 may be configured to acquire, with a second shooting parameter, a first image and a second image captured for the same target object when the comparison result of the critical value comparison module 507 is that the detected illumination intensity is smaller than the preset critical value, and to divide the first image and the second image into N detection regions according to the same rule, where N is a natural number greater than 1 and the second shooting parameter is the shooting parameter acquired in a preview state during the flash pre-flash.
The calculation module 502 may be configured to acquire the brightness differences of the corresponding detection areas in the first image and the second image one by one.
Specifically, the calculation module 502 may simultaneously traverse the N one-to-one corresponding detection regions in the first image and the second image, and acquire the brightness difference between each pair of corresponding detection regions one by one during the traversal.
The determination module 503 may be configured to determine the type of each detection area according to the acquired brightness difference, where the type includes a foreground area and a background area.
Specifically, the determination module 503 may compare the acquired brightness difference with a preset threshold; if the acquired brightness difference is smaller than the preset threshold, it may determine that the type of the detection area is a background area, and if the acquired brightness difference is greater than or equal to the preset threshold, it may determine that the type of the detection area is a foreground area.
The image copying module 504 includes a foreground region image copying sub-module 5041 and a background region image copying sub-module 5042.
The foreground region image copying sub-module 5041 may be configured to copy the images in the detection regions whose type is the foreground region in the first image onto the first image layer when the comparison result of the critical value comparison module 507 is that the detected illumination intensity is greater than or equal to the preset critical value, and to copy the images in the detection regions whose type is the foreground region in the second image onto the first image layer when the comparison result is that the detected illumination intensity is smaller than the preset critical value.
The background region image copying sub-module 5042 may be configured to copy the images in the detection regions whose type is the background region in the first image onto the second image layer when the comparison result of the critical value comparison module 507 is that the detected illumination intensity is greater than or equal to the preset critical value, and to copy the images in the detection regions whose type is the background region in the second image onto the second image layer when the comparison result is that the detected illumination intensity is smaller than the preset critical value.
The image synthesis module 505 may be configured to synthesize the first image layer and the second image layer to obtain the layer-synthesized image.
Specifically, the image synthesis module 505 may synthesize the first image layer and the second image layer by overlaying the first image layer on the second image layer, thereby obtaining the synthesized image.
In the embodiment of the invention, when the detected illumination intensity is greater than or equal to the preset critical value, the first image and the second image are acquired with the first shooting parameter, and when the detected illumination intensity is smaller than the preset critical value, they are acquired with the second shooting parameter, so that the acquired first image and second image are clearer. According to the magnitude relation between the detected illumination intensity and the preset critical value, the foreground-area and background-area images of either the first image or the second image are copied onto the two image layers respectively, which ensures that the images on the two image layers to be synthesized come from the same image, so that the synthesized image is clearer and the depth-of-field effect is better.
It should be understood that this embodiment is an example of an apparatus corresponding to the third embodiment, and that this embodiment can be implemented in cooperation with the third embodiment. The related technical details mentioned in the third embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the third embodiment.
An eighth embodiment of the present invention relates to an image processing apparatus. The eighth embodiment is a further improvement on the sixth embodiment, and the main improvement is that: in the eighth embodiment of the present invention, the image processing apparatus further includes an information box prompt module and an image blur processing module, as shown in fig. 9.
Theimage processing apparatus 500 includes anacquisition module 501, acalculation module 502, adetermination module 503, animage copying module 504, animage synthesis module 505, an informationframe presentation module 508, and an imageblur processing module 509.
The obtaining module 501 may be configured to obtain a first image and a second image captured of the same target object and to divide the first image and the second image into N detection regions according to the same rule, where the first image is an image obtained by controlling a camera before a flash is turned on, the second image is an image obtained by controlling the camera after the flash is turned on, and N is a natural number greater than 1.
Specifically, the obtaining module 501 may obtain the shooting parameters in the preview state after the camera is turned on, control the camera to shoot the same target object with the obtained shooting parameters, and capture the first image before the flash is turned on and the second image after the flash is turned on. After the first image and the second image are obtained, the obtaining module 501 may further divide the first image and the second image into N detection regions according to the same preset rule, where N is a natural number greater than 1 and the N detection regions in the first image correspond one by one to those in the second image.
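One plausible reading of "divided according to the same preset rule" is an even rows x cols grid, as in the following sketch; the grid shape and the (y0, y1, x0, x1) region representation are assumptions.

def divide_into_regions(image, rows, cols):
    h, w = image.shape[:2]
    regions = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            regions.append((y0, y1, x0, x1))
    return regions  # N = rows * cols detection regions

Applying the same function to the first image and the second image yields region lists that correspond one by one, as the embodiment requires.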
The calculating module 502 may be configured to obtain the brightness differences of the corresponding detection areas in the first image and the second image one by one.
Specifically, the calculating module 502 may traverse the N detection regions in the first image and the second image simultaneously, and during this traversal may acquire the brightness difference between each pair of corresponding detection regions one by one.
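A hedged sketch of that per-region computation is given below; taking the mean of a grayscale conversion as a region's "brightness" is an assumption, since the patent does not fix a particular brightness measure.

import cv2
import numpy as np

def brightness_differences(first_image, second_image, regions):
    gray1 = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray2 = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    diffs = []
    for y0, y1, x0, x1 in regions:
        # Flash-on mean brightness minus flash-off mean brightness.
        diffs.append(float(gray2[y0:y1, x0:x1].mean() - gray1[y0:y1, x0:x1].mean()))
    return diffs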
The determining module 503 may be configured to determine the type of each detection area according to the acquired brightness difference, where the type includes a foreground area and a background area.
Specifically, the determining module 503 may compare each acquired brightness difference with a preset threshold. If the brightness difference is greater than or equal to the preset threshold, the determining module 503 may determine that the type of the detection area is a foreground area, since the flash brightens nearby objects far more than distant ones; if the brightness difference is smaller than the preset threshold, it may determine that the type of the detection area is a background area.
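This classification rule can be sketched as follows, building on the hypothetical helpers above; the dictionary output format is an assumption.

def classify_regions(regions, diffs, threshold):
    # Regions whose flash-induced brightness gain reaches the threshold
    # are treated as foreground; the rest as background.
    return {
        region: ("foreground" if diff >= threshold else "background")
        for region, diff in zip(regions, diffs)
    }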
The image copying module 504 may be configured to copy the image in each detection area whose type is the foreground area onto the first image layer, and the image in each detection area whose type is the background area onto the second image layer. The image copying module 504 may further be configured to trigger the information box prompt module 508 after copying the images of the background areas to the second layer.
The information box prompt module 508 may be configured to pop up an information box that includes a blur level after being triggered by the image copying module 504.
The image blurring processing module 509 may be configured to obtain the blur level information input by the user and to blur the image on the second layer according to that information.
Specifically, after the information box prompt module 508 pops up the information box that includes the blur level, the image blurring processing module 509 may acquire the blur level information input by the user and may blur the image on the second layer accordingly.
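A minimal sketch of that blurring step, assuming the user's blur level is mapped to the kernel size of a Gaussian blur (both the mapping and the choice of Gaussian blur are assumptions; the patent does not specify the blur algorithm):

import cv2

def blur_background_layer(layer2, blur_level):
    ksize = 2 * int(blur_level) + 1  # OpenCV requires an odd kernel size
    return cv2.GaussianBlur(layer2, (ksize, ksize), 0)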
The image synthesis module 505 may be configured to synthesize the first image layer and the second image layer to obtain an image after the image layers are synthesized.
Specifically, the image synthesis module 505 may synthesize the first image layer and the second image layer by overlaying the first image layer on the second image layer, so as to obtain a synthesized image.
In this embodiment of the invention, the image in the background-area detection regions on the second layer is blurred according to the blur level information input by the user, so that in the image obtained after layer synthesis the background areas are blurred while the foreground areas remain sharp. The foreground is thereby made more prominent, the separation between foreground and background becomes more apparent, and the depth-of-field effect of the image is better.
It should be understood that this embodiment is an example of an apparatus corresponding to the fourth embodiment, and that this embodiment can be implemented in cooperation with the fourth embodiment. The related technical details mentioned in the fourth embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the fourth embodiment.
It should be noted that each module referred to in this embodiment is a logical module; in practical applications, a logical unit may be one physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, elements that are not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other elements are present in this embodiment.
Those skilled in the art will understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing the related hardware, where the program is stored in a storage medium and includes several instructions that enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention, and that various changes in form and detail may be made to them in practice without departing from the spirit and scope of the invention.
