
Background blur processing method, device and equipment

Info

Publication number: CN107945105B
Authority: CN (China)
Prior art keywords: depth, area, background, target, information
Legal status: Active (as listed; not a legal conclusion)
Application number: CN201711242468.1A
Other languages: Chinese (zh)
Other versions: CN107945105A
Inventors: 欧阳丹, 谭国辉
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Filed: 2017-11-30, by Guangdong Oppo Mobile Telecommunications Corp Ltd
Published as CN107945105A: 2018-04-20
Granted as CN107945105B: 2021-05-25

Abstract


The present application discloses a background blurring processing method, apparatus, and device. The method includes: acquiring a main image captured by a main camera and a sub-image captured by a sub-camera; detecting whether a preset target object exists in the main image; if it is detected that the target object exists, determining a target area corresponding to the target object; calculating first depth-of-field information of the target area by applying a preset first depth-of-field algorithm according to the main image and the sub-image; acquiring second depth-of-field information of the non-target area by applying a preset second depth-of-field algorithm, where the calculation accuracy of the first depth-of-field algorithm is higher than that of the second; blurring the background area of the target area according to the first depth-of-field information; and blurring the background area of the non-target area according to the second depth-of-field information. In this way, the target object is protected from being blurred during blurring, and the visual effect of image processing is improved.


Description

Background blurring processing method, device and equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular to a background blurring processing method, device, and apparatus.
Background
Generally, in order to highlight the photographed subject, blurring is applied to the background region of a photo. However, because terminal devices are limited by the processing capability of the processor, blurring an image can also blur parts of the subject itself; for example, when a user poses with a scissor-hand (V-sign) gesture, the image region corresponding to the gesture may be blurred along with the background, giving a poor blurring effect.
Content of application
The application provides a background blurring processing method, a background blurring processing device, and background blurring processing equipment, which address the technical problem in the prior art that, because terminal devices are limited by the processing capability of the processor, blurring an image can also blur the image of the photographed subject.
The embodiment of the application provides a background blurring processing method, which comprises the following steps: acquiring a main image captured by a main camera and a sub-image captured by a sub-camera; detecting whether a preset target object exists in the main image; if it is detected that the target object exists, determining a target area corresponding to the target object in the main image; calculating first depth-of-field information of the target area by applying a preset first depth-of-field algorithm according to the main image and the sub-image; acquiring second depth-of-field information of a non-target area in the main image by applying a preset second depth-of-field algorithm; blurring the background area of the target area according to the first depth-of-field information; and blurring the background area of the non-target area according to the second depth-of-field information.
Another embodiment of the present application provides a background blurring processing apparatus, including: a first acquisition module, configured to acquire a main image captured by the main camera and a sub-image captured by the sub-camera; a detection module, configured to detect whether a preset target object exists in the main image; a determining module, configured to determine a target area corresponding to the target object in the main image when it is detected that the target object exists; a second acquisition module, configured to calculate first depth-of-field information of the target area by applying a preset first depth-of-field algorithm according to the main image and the sub-image, and to acquire second depth-of-field information of a non-target area by applying a preset second depth-of-field algorithm; and a processing module, configured to blur the background area of the target area according to the first depth-of-field information and to blur the background area of the non-target area according to the second depth-of-field information.
Yet another embodiment of the present application provides a computer device, which includes a memory and a processor, wherein the memory stores computer-readable instructions, and the instructions, when executed by the processor, cause the processor to execute the background blurring processing method described in the above embodiments of the present application.
Yet another embodiment of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the background blurring processing method according to the foregoing embodiment of the present application.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the method acquires a main image captured by the main camera and a sub-image captured by the sub-camera and detects whether a preset target object exists in the main image. If the target object is detected, the target area corresponding to it is determined; first depth-of-field information of the target area is calculated with a preset first depth-of-field algorithm from the main image and the sub-image, and second depth-of-field information of the non-target area is acquired with a preset second depth-of-field algorithm. The background area of the target area is then blurred according to the first depth-of-field information, and the background area of the non-target area according to the second. In this way, the target object is protected from being blurred during blurring, and the visual effect of image processing is improved.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a background blurring processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of triangulation according to one embodiment of the present application;
FIG. 3 is a schematic view of a dual-camera depth of field acquisition in accordance with one embodiment of the present application;
FIG. 4 is a flow diagram of a background blurring processing method according to another embodiment of the present application;
FIG. 5 is a flow diagram of a background blurring processing method according to yet another embodiment of the present application;
FIG. 6 is a flow diagram of a method of background blurring according to an embodiment of the present application;
FIG. 7(a) is a diagram illustrating the effect of background blurring according to the prior art;
FIG. 7(b) is a schematic diagram of the effect of background blurring processing according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a background blurring processing apparatus according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a background blurring processing apparatus according to another embodiment of the present application;
FIG. 10 is a schematic diagram of a background blurring processing apparatus according to another embodiment of the present application; and
FIG. 11 is a schematic diagram of an image processing circuit according to another embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The background blurring processing method, apparatus, and device according to the embodiments of the present application are described below with reference to the drawings.
Fig. 1 is a flowchart of a background blurring processing method according to an embodiment of the present application. As shown in fig. 1, the method includes:
Step 101, acquiring a main image shot by a main camera and a sub-image shot by a sub-camera.
Step 102, detecting whether a preset target object exists in the main image.
Specifically, the dual-camera system calculates depth-of-field information from the main image and the sub-image. The system includes a main camera, used to acquire the main image of the photographed subject, and a sub-camera, used to assist in acquiring depth-of-field information; the main camera and the sub-camera may be arranged horizontally or vertically. To describe more clearly how the dual cameras acquire depth-of-field information, the principle is explained below with reference to the accompanying drawings:
In practical application, human eyes resolve depth mainly through binocular vision, which works on the same principle as depth resolution with two cameras: the principle of triangulation illustrated in fig. 2. Fig. 2 shows, in actual space, the imaged object, the positions O_R and O_T of the two cameras, and the focal planes of the two cameras. The distance between the focal planes and the plane where the two cameras are located is f, and the two cameras form images at their focal planes, yielding two captured images.

P and P' are the positions of the same object in the two captured images, where the distance from point P to the left boundary of its captured image is X_R and the distance from point P' to the left boundary of its captured image is X_T. O_R and O_T are the two cameras, which lie on the same plane at a distance B from each other.

Based on the principle of triangulation, the distance Z between the object and the plane where the two cameras are located in fig. 2 satisfies the relationship

B / Z = (B - (X_R - X_T)) / (Z - f)

from which it can be derived that

Z = B · f / (X_R - X_T) = B · f / d

where d = X_R - X_T is the distance difference between the positions of the same object in the two captured images. Since B and f are constants, the distance Z of the object can be determined from d.
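Expressed in code, the relation Z = B · f / d is a one-liner. The following minimal sketch (ours, not part of the patent) assumes the baseline B, the focal length f in pixels, and the disparity d are already known from calibration and matching:

```python
# Minimal sketch of Z = B * f / d for calibrated, rectified stereo cameras.
def depth_from_disparity(d_pixels: float, baseline_m: float, focal_px: float) -> float:
    """Distance Z (meters) of a point whose disparity is d_pixels."""
    if d_pixels <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return baseline_m * focal_px / d_pixels

# Example: B = 0.02 m, f = 1000 px, d = 4 px  ->  Z = 5.0 m
print(depth_from_disparity(4.0, 0.02, 1000.0))
```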
Of course, other methods besides triangulation may also be used to calculate the depth-of-field information of the main image. For example, when the main camera and the sub-camera photograph the same scene, the distance between an object in the scene and the cameras is related to the displacement difference, attitude difference, and so on, of the images formed by the two cameras; therefore, in an embodiment of the present application, the distance Z can be obtained from such a relationship.
For example, as shown in fig. 3, a map of the differences between the main image captured by the main camera and the sub-image captured by the sub-camera is calculated and represented by a disparity map, which encodes the displacement difference between the same points in the two images. Since the displacement difference in triangulation is inversely proportional to Z, the disparity map is often used directly as the depth information map.
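As an illustration only (the patent does not prescribe a particular matcher), a disparity map of the kind shown in fig. 3 can be computed with OpenCV's block matcher; the file names here are placeholders:

```python
import cv2

main_gray = cv2.imread("main.png", cv2.IMREAD_GRAYSCALE)  # main-camera image
sub_gray = cv2.imread("sub.png", cv2.IMREAD_GRAYSCALE)    # sub-camera image

# numDisparities must be a multiple of 16 and blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(main_gray, sub_gray)           # fixed-point, scaled by 16

disparity_px = disparity.astype("float32") / 16.0
# Because disparity is inversely proportional to Z, disparity_px can serve
# directly as the depth information map, as noted above.
```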
When blurring the background region of an image, a dual-camera system may blur the images of target objects that should remain sharp. To ensure that the objects the user does not want blurred stay sharp, it is detected whether a target object exists in the main image. The target object may be a specific gesture (such as a scissor-hand or a cheering gesture), a famous landmark (such as the Great Wall or Mount Huang), or an object with a specific shape (such as a circular or triangular object).
It should be understood that, depending on the application scenario, the detection of whether the preset target object exists in the main image may be implemented in different manners, for example as follows:
as an example:
in this example, template information including a contour edge of a target object is preset, a contour edge of a scene shot in a foreground region of a main image is detected, preset template information is matched with the contour edge, and if matching is successful, a preset target object is detected and known to exist in the main image.
In this example, the contour edge in the preset template information may include coordinate values of the contour edge of the target object, a position relationship between each pixel point, and the like.
It can be understood that, in this example, recognizing whether the target object exists in the foreground region only from the contour edge of the captured scene improves detection efficiency, and thus the overall image-processing efficiency.
As another example:
in this example, template information including shape information of a target object is preset, the shape information of the target object includes external contour information and internal filling pattern information of the target object, shape information of a shot scene in a foreground region of a main image is detected, the preset template information is matched with the shape information, and if the matching is successful, the preset target object in the main image is detected and known.
It can be understood that, in this example, identifying whether the target object exists in the foreground region through the shape information of the captured scene avoids misjudging photographed subjects whose external contours are merely similar, improving recognition accuracy.
In some examples, the two approaches can be combined: the contour edge is identified first and then the shape information, further improving recognition accuracy (a sketch of the contour stage follows).
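The sketch below illustrates the contour-edge stage of this check with OpenCV (the fill-pattern comparison of the second example is omitted); the Canny thresholds and the match threshold are illustrative assumptions, not values from the patent:

```python
import cv2

def detect_target(foreground_gray, template_gray, match_thresh=0.15):
    """Return True if the template contour is matched in the foreground region."""
    contours, _ = cv2.findContours(cv2.Canny(foreground_gray, 50, 150),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    t_contours, _ = cv2.findContours(cv2.Canny(template_gray, 50, 150),
                                     cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours or not t_contours:
        return False
    template = max(t_contours, key=cv2.contourArea)  # largest template contour

    # cv2.matchShapes returns 0 for identical contours; smaller is more similar.
    return any(cv2.matchShapes(c, template, cv2.CONTOURS_MATCH_I1, 0.0) < match_thresh
               for c in contours)
```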
Step 103, if it is detected that the target object exists, determining a target area corresponding to the target object in the main image.
Step 104, calculating first depth-of-field information of the target area by applying a preset first depth-of-field algorithm according to the main image and the sub-image.
Step 105, acquiring second depth-of-field information of the non-target area in the main image by applying a preset second depth-of-field algorithm.
Specifically, if it is detected that the target object exists, a target area corresponding to the target object is determined so that the target object will not be blurred. The first depth-of-field information of the target area is then calculated by applying the preset first depth-of-field algorithm to the main image and the sub-image, and the second depth-of-field information of the non-target area is acquired by applying the preset second depth-of-field algorithm, where the calculation precision of the first algorithm is higher than that of the second. On the one hand, the depth of field of the background corresponding to the non-target area is calculated with the second, lower-precision algorithm, so the computation load is smaller than with the first algorithm; this reduces the processing pressure on the terminal device and prevents the blurring step from lengthening the overall image-processing time. On the other hand, the depth of field of the target area is calculated with the higher-precision first algorithm, which guarantees that the target object area is not blurred; and since the first algorithm is applied only to the target area, its impact on the processor load of the terminal device is small and the image-processing time does not increase noticeably.
Of course, in a specific implementation, in order to meet personalized user requirements and achieve playful image-processing effects, the calculation accuracy of the first depth-of-field algorithm may instead be equal to or lower than that of the second depth-of-field algorithm; this is not limited here.
In an embodiment of the present application, if the preset target object is not detected in the main image, the second depth-of-field algorithm is applied to calculate third depth-of-field information of the main image, and the background area of the main image is blurred according to the third depth-of-field information, reducing the processing pressure on the system.
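A minimal sketch of this two-precision split follows. The patent only requires that the first algorithm be more accurate than the second; using semi-global matching on the target area and block matching on a downsampled frame is our illustrative choice:

```python
import cv2

def dual_precision_depth(main_gray, sub_gray, target_rect):
    x, y, w, h = target_rect  # target area from the detection step

    # First (higher-accuracy, costlier) algorithm, on the target area only.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    first = sgbm.compute(main_gray[y:y + h, x:x + w],
                         sub_gray[y:y + h, x:x + w]).astype("float32") / 16.0

    # Second (lower-accuracy, cheaper) algorithm, on a half-resolution frame.
    bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    second = bm.compute(cv2.pyrDown(main_gray),
                        cv2.pyrDown(sub_gray)).astype("float32") / 16.0
    second = cv2.resize(second, (main_gray.shape[1], main_gray.shape[0]))

    return first, second  # first covers the target area, second the full frame
```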
Step 106, blurring the background area of the target area according to the first depth-of-field information.
Step 107, blurring the background area of the non-target area according to the second depth-of-field information.
Specifically, after the depth-of-field information of the target area and of the non-target area is calculated at the different accuracies, the background area of the target area is blurred according to the first depth-of-field information and the background area of the non-target area according to the second, so the target object in the blurred image is protected.
Specifically, in practical applications, blurring the background area of the target area according to the first depth-of-field information and blurring the background area of the non-target area according to the second can be implemented in different ways for different application scenes, as described below:
the first example:
As shown in fig. 4, blurring the background region of the target region according to the first depth-of-field information in step 106 may include:
Step 201, determining first foreground area depth information and first background area depth information of the target area according to the first depth-of-field information and the focus area of the main image.
It can be understood that the target area may include a foreground area where the target object is located and a background area other than the target object. Therefore, to further process the area where the target object is located, the first foreground area depth information and the first background area depth information of the target area are determined according to the first depth-of-field information and the focus area of the main image: the range of clear imaging in the target area before the focus area is the first foreground area, and the range of clear imaging in the target area after the focus area is the first background area.
It should be noted that, depending on the application scenario, the first foreground area and the first background area of the target region can be separated in different ways, as exemplified below:
the first example:
the shooting related parameters can be acquired so as to calculate the depth of field information of the image area outside the focus area in the target area according to the formula of the shooting camera.
In this example, camera parameters such as the permissible circle of confusion diameter, the aperture value, the focal length, and the focal distance can be acquired, so that the depth information can be calculated from the standard depth-of-field formulas:

first foreground area depth information (front depth of field) = (aperture value · permissible circle of confusion diameter · focal distance²) / (focal length² + aperture value · permissible circle of confusion diameter · focal distance)

first background area depth information (rear depth of field) = (aperture value · permissible circle of confusion diameter · focal distance²) / (focal length² − aperture value · permissible circle of confusion diameter · focal distance)

The first foreground area from which the foreground has been separated and the first background area of the background of the target area are then determined from these values.
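Written out as code (all quantities in the same unit, e.g. millimeters; the sample numbers are illustrative, not from the patent):

```python
def depth_of_field(f_number, coc_diameter, focal_length, focus_distance):
    """Front and rear depth of field from the formulas above."""
    k = f_number * coc_diameter
    front = (k * focus_distance ** 2) / (focal_length ** 2 + k * focus_distance)
    denom = focal_length ** 2 - k * focus_distance
    # Beyond the hyperfocal distance the denominator is <= 0: rear DOF is infinite.
    rear = float("inf") if denom <= 0 else (k * focus_distance ** 2) / denom
    return front, rear

# e.g. f/2.0, 0.005 mm circle of confusion, 4 mm lens, focused at 1000 mm
front_mm, rear_mm = depth_of_field(2.0, 0.005, 4.0, 1000.0)  # ~385 mm, ~1667 mm
```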
The second example:
determining a depth of field map of an image area outside a focus area according to depth of field data information of a current target area respectively acquired by two cameras, and determining a first foreground area before the focus area and a first background area after the focus area according to the depth of field map.
Specifically, in this example, since the two cameras are not located at the same position, they have a certain angle difference and distance difference relative to the photographed target object, and the preview image data they acquire therefore exhibit a certain phase difference.
For example, for a point A on the imaged target object, the pixel corresponding to A has coordinates (30, 50) in the preview image data of camera 1 and coordinates (30, 48) in the preview image data of camera 2; the phase difference between the corresponding pixels in the two preview images is therefore 50 − 48 = 2.
In this example, the relationship between the depth of field information and the phase difference may be established in advance according to experimental data or camera parameters, and then, the corresponding depth of field information may be searched for according to the phase difference of each pixel point in the target image in the preview image data acquired by the two cameras.
For example, if the phase difference of 2 corresponding to point A maps to a depth of 5 meters in the preset correspondence, the depth-of-field information corresponding to point A in the target area is 5 meters. In this way, the depth-of-field information of each pixel in the current target area is obtained, that is, a depth map of the image area outside the focus area.
Furthermore, after the depth map of the image area outside the focus area is obtained, the first foreground area depth information (before the focus area) and the first background area depth information (after the focus area) can be determined.
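A sketch of such a pre-established correspondence follows; the table values are invented for illustration and would in practice come from experimental data or camera parameters:

```python
import numpy as np

calib_disparity = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # phase difference (px)
calib_depth_m = np.array([10.0, 5.0, 2.5, 1.25, 0.625])  # measured depth (m)

def depth_for_phase_difference(d_px: float) -> float:
    # Interpolate in 1/depth, which varies linearly with disparity.
    inv_depth = np.interp(d_px, calib_disparity, 1.0 / calib_depth_m)
    return float(1.0 / inv_depth)

print(depth_for_phase_difference(2.0))  # 5.0 m, matching the point A example
```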
Step 202, obtaining a basic value of the first blurring degree according to the depth of field information of the first foreground area and the depth of field information of the first background area.
The base value of the first blurring degree specifies a level of blurring, such as strong or weak. A larger difference between the first foreground area depth information and the first background area depth information means the foreground and background in the target area are more clearly separated, so a smaller degree of blurring suffices and the base value of the first blurring degree is smaller. Conversely, a smaller difference means the foreground and background in the target area are less clearly separated, so a larger degree of blurring is needed and the base value of the first blurring degree is larger.
Step 203, determining a blurring coefficient of each pixel in the background area of the target area according to the basic value of the first blurring degree and the depth information of the first background area.
In the embodiment of the present application, the blurring coefficient of each pixel in the background region of the target region is determined according to the basic value of the first blurring degree and the depth information of the first background region.
Step 204, performing Gaussian blur processing on the background area of the target area according to the blurring coefficient of each pixel.
Specifically, Gaussian blur is applied to the background area of the target area according to the blurring coefficient of each pixel, so that the greater a background pixel's depth of field in the target area, the higher its blurring coefficient and the stronger the blur applied to it.
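The sketch below strings steps 201 to 204 together. Blurring at a few fixed sigmas and compositing per pixel stands in for a truly per-pixel Gaussian kernel, and the base-value heuristic and sigma levels are our assumptions, not the patent's:

```python
import cv2
import numpy as np

def blur_target_background(image, depth_map, bg_mask, fg_depth, bg_depth):
    # Step 202: a smaller foreground/background separation -> a larger base value.
    base = 1.0 / (abs(bg_depth - fg_depth) + 1e-6)

    # Step 203: the blurring coefficient grows with each pixel's depth of field.
    coeff = base * depth_map
    coeff = coeff / (coeff.max() + 1e-6)            # normalize to [0, 1]

    # Step 204: composite between precomputed blur levels by coefficient.
    levels = [cv2.GaussianBlur(image, (0, 0), s) for s in (1.0, 3.0, 7.0)]
    idx = np.minimum((coeff * len(levels)).astype(int), len(levels) - 1)
    out = image.copy()
    for i, blurred in enumerate(levels):
        sel = bg_mask & (idx == i)                  # background pixels only
        out[sel] = blurred[sel]
    return out
```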
Further, as shown in fig. 5, blurring the background area of the non-target area according to the second depth-of-field information in step 107 includes:
step 301, determining second foreground region depth information and second background region depth information of the non-target region according to the second depth information and the focused region of the main image.
It is to be understood that the non-target area may include a foreground area and a background area. Therefore, to facilitate processing of the background area of the image, the second foreground area depth information and the second background area depth information of the non-target area are determined according to the second depth-of-field information and the focus area of the main image. The manner of determining them is similar to that used for the first foreground and background area depth information of the target area, and is not repeated here.
Step 302, obtaining a base value of the second blurring degree according to the second foreground region depth information and the second background region depth information.
The base value of the second blurring degree specifies the level of blurring. A larger difference between the second foreground area depth information and the second background area depth information means the foreground and background in the non-target area are more clearly separated, so a smaller degree of blurring suffices and the base value of the second blurring degree is smaller. Conversely, a smaller difference means the foreground and background in the non-target area are less clearly separated, so a larger degree of blurring is needed and the base value of the second blurring degree is larger.
Step 303, performing Gaussian blur processing on the background area of the non-target area according to the base value of the second blurring degree.
Specifically, Gaussian blur is applied to the background area of the non-target area according to the base value of the second blurring degree, so that the background of the non-target area is blurred to the degree that base value specifies.
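In contrast to the per-pixel treatment of the target area, the non-target background can be blurred in one uniform pass; the mapping from base value to sigma below is an assumed heuristic:

```python
import cv2

def blur_nontarget_background(image, bg_mask, base_value, max_sigma=8.0):
    sigma = max(0.5, min(base_value * max_sigma, max_sigma))
    blurred = cv2.GaussianBlur(image, (0, 0), sigma)
    out = image.copy()
    out[bg_mask] = blurred[bg_mask]  # foreground pixels stay untouched
    return out
```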
In order to make the implementation process and the processing effect of the background blurring of the present application more clearly understood by those skilled in the art, the following example is described in conjunction with a specific application scenario:
Specifically, as shown in fig. 6, when the preset target object is a preset gesture, after the main image is acquired it is detected whether a preset gesture image exists in the main image. If so, the target area containing the gesture image is given the refined treatment of the background blurring method described in the above embodiments, using a depth-of-field algorithm of higher precision than the system default, while the other areas use the lower-precision algorithm set by the system for normal background blurring. The blurring effect in such specific scenes is thus improved without adding much processing time.
Continuing with the above scenario, as shown in fig. 7(a), after background blurring with a prior-art method, the image area corresponding to the preset gesture may be blurred because of the limited calculation accuracy of the terminal device's depth-of-field information, giving a poor blurring effect. With the background blurring method of the present application, as shown in fig. 7(b), the target area containing the gesture image receives refined background blurring, so the hand gesture stands out unblurred and the overall blurring effect is better.
To sum up, the background blurring processing method of the embodiments of the present application acquires a main image captured by the main camera and a sub-image captured by the sub-camera and detects whether a preset target object exists in the main image. If the target object is detected, the target area corresponding to it is determined; first depth-of-field information of the target area is calculated with a preset first depth-of-field algorithm from the main image and the sub-image, and second depth-of-field information of the non-target area is acquired with a preset second depth-of-field algorithm. The background area of the target area is then blurred according to the first depth-of-field information, and the background area of the non-target area according to the second. In this way, the target object is protected from being blurred during blurring, and the visual effect of image processing is improved.
In order to implement the foregoing embodiments, the present application further provides a background blurring processing apparatus. Fig. 8 is a schematic structural diagram of the background blurring processing apparatus according to an embodiment of the present application. As shown in fig. 8, the apparatus includes a first acquisition module 100, a detection module 200, a determining module 300, a second acquisition module 400, and a processing module 500.
The first acquisition module 100 is configured to acquire a main image captured by the main camera and a sub-image captured by the sub-camera.
The detection module 200 is configured to detect whether a preset target object exists in the main image.
In one embodiment of the present application, as shown in fig. 9, the detection module 200 includes a detection unit 210 and an acquisition unit 220.

The detection unit 210 is configured to detect a contour edge of the captured scene in the foreground region of the main image.

The acquisition unit 220 is configured to match preset template information with the contour edge and, if the matching succeeds, to determine that a preset target object exists in the main image.
The determining module 300 is configured to determine a target area corresponding to the target object in the main image when it is detected that the target object exists.
In one embodiment of the present application, as shown in fig. 10, the determining module 300 includes a first determining unit 310, an obtaining unit 320, a second determining unit 330, and a processing unit 340, wherein:

the first determining unit 310 is configured to determine first foreground area depth information and first background area depth information of the target area according to the first depth-of-field information and the focus area of the main image;

the obtaining unit 320 is configured to obtain a base value of the first blurring degree according to the first foreground area depth information and the first background area depth information;

the second determining unit 330 is configured to determine a blurring coefficient of each pixel in the background area of the target area according to the base value of the first blurring degree and the first background area depth information;

the processing unit 340 is configured to perform Gaussian blurring on the background area of the target area according to the blurring coefficient of each pixel.
The second acquisition module 400 is configured to calculate the first depth-of-field information of the target area by applying a preset first depth-of-field algorithm according to the main image and the sub-image, and to acquire the second depth-of-field information of the non-target area by applying a preset second depth-of-field algorithm.
In one embodiment of the present application, the calculation accuracy of the first depth-of-field algorithm is higher than that of the second depth-of-field algorithm.
The processing module 500 is configured to blur the background area of the target area according to the first depth-of-field information, and to blur the background area of the non-target area according to the second depth-of-field information.
It should be noted that the foregoing description of the method embodiments is also applicable to the apparatus in the embodiments of the present application, and the implementation principles thereof are similar and will not be described herein again.
The division of each module in the background blurring processing apparatus is only used for illustration, and in other embodiments, the background blurring processing apparatus may be divided into different modules as needed to complete all or part of the functions of the background blurring processing apparatus.
To sum up, the background blurring processing apparatus of the embodiments of the present application acquires a main image captured by the main camera and a sub-image captured by the sub-camera and detects whether a preset target object exists in the main image. If the target object is detected, the target area corresponding to it is determined; first depth-of-field information of the target area is calculated with a preset first depth-of-field algorithm from the main image and the sub-image, and second depth-of-field information of the non-target area is acquired with a preset second depth-of-field algorithm. The background area of the target area is then blurred according to the first depth-of-field information, and the background area of the non-target area according to the second. In this way, the target object is protected from being blurred during blurring, and the visual effect of image processing is improved.
In order to implement the above embodiments, the present application further proposes a computer device. The computer device is any device that includes a memory storing a computer program and a processor running the program, such as a smartphone or a personal computer. The computer device further includes an image processing circuit, which may be implemented by hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 11 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 11, for convenience of explanation, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 11, the image processing circuit includes an ISP processor 1040 and control logic 1050. Image data captured by the imaging device 1010 is first processed by the ISP processor 1040, which analyzes it to capture image statistics usable to determine and/or control one or more parameters of the imaging device 1010. The imaging device 1010 (camera) may include a camera with one or more lenses 1012 and an image sensor 1014; to implement the background blurring processing methods of the present application, the imaging device 1010 includes two sets of cameras and, with continued reference to fig. 11, can capture images of a scene simultaneously with a main camera and a sub-camera. The image sensor 1014 may include a color filter array (e.g., a Bayer filter); it acquires the light intensity and wavelength information captured by each of its imaging pixels and provides a set of raw image data that the ISP processor 1040 can process. The sensor 1020 may provide the raw image data to the ISP processor 1040 based on the sensor 1020 interface type, and the ISP processor 1040 may calculate depth-of-field information and the like based on the raw image data acquired by the image sensor 1014 in the main camera and in the sub-camera, as provided by the sensor 1020. The sensor 1020 interface may be a SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination thereof.
The ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 1040 may perform one or more image processing operations on the raw image data and gather statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The ISP processor 1040 may also receive pixel data from the image memory 1030. For example, raw pixel data is sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then provided to the ISP processor 1040 for processing. The image memory 1030 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 1020 interface or from the image memory 1030, the ISP processor 1040 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1030 for additional processing before being displayed. The ISP processor 1040 receives the processed data from the image memory 1030 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 1070 for viewing by a user and/or further processed by a graphics processing unit (GPU). Further, the output of the ISP processor 1040 may also be sent to the image memory 1030, and the display 1070 may read image data from the image memory 1030. In one embodiment, the image memory 1030 may be configured to implement one or more frame buffers. The output of the ISP processor 1040 may also be transmitted to the encoder/decoder 1060 for encoding/decoding the image data; the encoded image data may be saved and decompressed before being displayed on the display 1070 device. The encoder/decoder 1060 may be implemented by a CPU, GPU, or coprocessor.
The statistics determined by the ISP processor 1040 may be sent to the control logic 1050 unit. For example, the statistical data may include image sensor 1014 statistics such as auto-exposure, auto-white-balance, auto-focus, flicker detection, black level compensation, and lens 1012 shading correction. The control logic 1050 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) to determine control parameters of the imaging device 1010 and ISP control parameters based on the received statistical data. For example, the imaging device control parameters may include sensor 1020 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 1012 shading correction parameters.
The background blurring processing method is realized with the image processing technique of fig. 11 through the following steps (a consolidated sketch follows the list):
acquiring a main image shot by a main camera and an auxiliary image shot by an auxiliary camera;
detecting whether a preset target object exists in the main image or not;
if the target object is detected and known to exist, determining a target area corresponding to the target object in the main image;
according to the main image and the auxiliary image, calculating first depth of field information of the target area by applying a preset first depth of field algorithm;
acquiring second depth-of-field information of a non-target area in the main image by applying a preset second depth-of-field algorithm;
and blurring the background area of the target area according to the first depth of field information, and blurring the background area of the non-target area according to the second depth of field information.
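The consolidated sketch below wires these six steps together, reusing the helpers sketched earlier in this description (detect_target, dual_precision_depth, blur_target_background, blur_nontarget_background). The fixed target rectangle, the focus threshold, and the base value are placeholders for the patent's region and focus logic:

```python
import numpy as np

def background_blur(main_bgr, main_gray, sub_gray, template_gray, focus_depth=0.05):
    # Steps 1-2: both images are captured; detect the preset target object.
    if not detect_target(main_gray, template_gray):
        return main_bgr  # a full system would blur using the third depth info

    # Step 3: a fixed rectangle stands in for the detected target area.
    x, y, w, h = 100, 100, 200, 200
    target_mask = np.zeros(main_gray.shape, dtype=bool)
    target_mask[y:y + h, x:x + w] = True

    # Steps 4-5: precise depth for the target area, cheap depth elsewhere.
    first, second = dual_precision_depth(main_gray, sub_gray, (x, y, w, h))
    disparity = second.copy()
    disparity[y:y + h, x:x + w] = first
    # Invert disparity into a depth-like map (larger = farther). Units are
    # arbitrary here, so focus_depth is an arbitrary placeholder threshold.
    depth = 1.0 / np.maximum(disparity, 1e-3)

    bg_mask = depth > focus_depth  # background lies behind the focused subject
    if not bg_mask.any():
        return main_bgr

    # Step 6: blur each region's background with its own depth information.
    out = blur_target_background(main_bgr, depth, bg_mask & target_mask,
                                 fg_depth=focus_depth,
                                 bg_depth=float(depth[bg_mask].mean()))
    return blur_nontarget_background(out, bg_mask & ~target_mask, base_value=0.5)
```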
To achieve the above embodiments, the present application also proposes a non-transitory computer-readable storage medium whose instructions, when executed by a processor, enable the processor to perform the background blurring processing method of the above embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two or three, unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

(Translated from Chinese)

1. A background blurring processing method, comprising:
acquiring a main image captured by a main camera and a sub-image captured by a sub-camera;
detecting whether a preset target object exists in the main image, the target object including a gesture motion;
if it is detected that the target object exists, determining a target area corresponding to the target object in the main image, wherein the range of clear imaging in the target area before the focus area is the foreground area and the range of clear imaging in the target area after the focus area is the background area;
calculating first depth-of-field information of the target area by applying a preset first depth-of-field algorithm according to the main image and the sub-image;
acquiring second depth-of-field information of a non-target area in the main image by applying a preset second depth-of-field algorithm, the non-target area including a foreground area and a background area;
blurring the background area of the target area according to the first depth-of-field information; and
blurring the background area of the non-target area according to the second depth-of-field information.

2. The method according to claim 1, wherein detecting whether a preset target object exists in the main image comprises:
detecting a contour edge of the captured scene in the foreground area of the main image;
matching preset template information with the contour edge, and if the matching succeeds, determining that the preset target object exists in the main image.

3. The method according to claim 1, wherein blurring the background area of the target area according to the first depth-of-field information comprises:
determining first foreground area depth information and first background area depth information of the target area according to the first depth-of-field information and the focus area of the main image;
obtaining a base value of a first blurring degree according to the first foreground area depth information and the first background area depth information;
determining a blurring coefficient of each pixel in the background area of the target area according to the base value of the first blurring degree and the first background area depth information;
performing Gaussian blurring on the background area of the target area according to the blurring coefficient of each pixel.

4. The method according to claim 3, wherein blurring the background area of the non-target area according to the second depth-of-field information comprises:
determining second foreground area depth information and second background area depth information of the non-target area according to the second depth-of-field information and the focus area of the main image;
obtaining a base value of a second blurring degree according to the second foreground area depth information and the second background area depth information;
performing Gaussian blurring on the background area of the non-target area according to the base value of the second blurring degree.

5. The method according to any one of claims 1-4, further comprising, after detecting whether a preset target object exists in the main image:
if it is detected that the target object does not exist, applying the second depth-of-field algorithm to calculate third depth-of-field information of the main image, wherein the calculation accuracy of the second depth-of-field algorithm is lower than that of the first depth-of-field algorithm;
blurring the background area of the main image according to the third depth-of-field information.

6. A background blurring processing apparatus, comprising:
a first acquisition module, configured to acquire a main image captured by a main camera and a sub-image captured by a sub-camera;
a detection module, configured to detect whether a preset target object exists in the main image, the target object including a gesture motion;
a determining module, configured to determine, when it is detected that the target object exists, a target area corresponding to the target object in the main image, wherein the range of clear imaging in the target area before the focus area is the foreground area and the range of clear imaging in the target area after the focus area is the background area;
a second acquisition module, configured to calculate first depth-of-field information of the target area by applying a preset first depth-of-field algorithm according to the main image and the sub-image, and to acquire second depth-of-field information of a non-target area by applying a preset second depth-of-field algorithm, the non-target area including a foreground area and a background area;
a processing module, configured to blur the background area of the target area according to the first depth-of-field information and to blur the background area of the non-target area according to the second depth-of-field information.

7. The apparatus according to claim 6, wherein the detection module comprises:
a detection unit, configured to detect a contour edge of the captured scene in the foreground area of the main image;
a learning unit, configured to match preset template information with the contour edge and, if the matching succeeds, to determine that the preset target object exists in the main image.

8. The apparatus according to claim 6, wherein the processing module comprises:
a first determining unit, configured to determine first foreground area depth information and first background area depth information of the target area according to the first depth-of-field information and the focus area of the main image;
an obtaining unit, configured to obtain a base value of a first blurring degree according to the first foreground area depth information and the first background area depth information;
a second determining unit, configured to determine a blurring coefficient of each pixel in the background area of the target area according to the base value of the first blurring degree and the first background area depth information;
a processing unit, configured to perform Gaussian blurring on the background area of the target area according to the blurring coefficient of each pixel.

9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the background blurring processing method according to any one of claims 1-5.

10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the background blurring processing method according to any one of claims 1-5.
Application CN201711242468.1A (priority and filing date 2017-11-30): Background blur processing method, device and equipment. Status: Active. Granted publication: CN107945105B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201711242468.1A (granted as CN107945105B) | 2017-11-30 | 2017-11-30 | Background blur processing method, device and equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201711242468.1A (granted as CN107945105B) | 2017-11-30 | 2017-11-30 | Background blur processing method, device and equipment

Publications (2)

Publication Number | Publication Date
CN107945105A (en) | 2018-04-20
CN107945105B (en) | 2021-05-25

Family

ID=61948126

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201711242468.1A (Active; granted as CN107945105B) | Background blur processing method, device and equipment | 2017-11-30 | 2017-11-30

Country Status (1)

Country | Link
CN (1) | CN107945105B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109151314B (en) * | 2018-09-10 | 2020-02-07 | Gree Electric Appliances, Inc. of Zhuhai | Camera blurring processing method and device for terminal, storage medium and terminal
CN110956577A (en) * | 2018-09-27 | 2020-04-03 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Control method of electronic device, electronic device, and computer-readable storage medium
CN111311482B (en) * | 2018-12-12 | 2023-04-07 | TCL Technology Group Corp. | Background blurring method and device, terminal equipment and storage medium
CN110349080B (en) * | 2019-06-10 | 2023-07-04 | Beijing Megvii Technology Co., Ltd. | Image processing method and device
CN110363702B (en) * | 2019-07-10 | 2023-10-20 | Oppo (Chongqing) Intelligent Technology Co., Ltd. | Image processing method and related product
CN114514735B (en) * | 2019-12-09 | 2023-10-03 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Electronic device and method of controlling electronic device
CN113014791B (en) * | 2019-12-20 | 2023-09-19 | ZTE Corporation | Image generation method and device
CN111246093B (en) * | 2020-01-16 | 2021-07-20 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, device, storage medium and electronic device
CN111803070A (en) * | 2020-06-19 | 2020-10-23 | Zhejiang Dahua Technology Co., Ltd. | Height measuring method and electronic equipment
CN112532882B (en) * | 2020-11-26 | 2022-09-16 | Vivo Mobile Communication Co., Ltd. | Image display method and device
WO2022198525A1 (en) * | 2021-03-24 | 2022-09-29 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method of improving stability of bokeh processing and electronic device
CN114125296B (en) * | 2021-11-24 | 2024-08-09 | Vivo Mobile Communication Co., Ltd. | Image processing method, device, electronic equipment and readable storage medium
CN116263965A (en) * | 2022-01-26 | 2023-06-16 | 北京极感科技有限公司 | A depth image generation method and image processing method
CN115134532A (en) * | 2022-07-26 | 2022-09-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, image processing device, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104751405A (en) * | 2015-03-11 | 2015-07-01 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for blurring image
CN105578070A (en) * | 2015-12-21 | 2016-05-11 | Shenzhen Gionee Communication Equipment Co., Ltd. | Image processing method and terminal
CN105979165A (en) * | 2016-06-02 | 2016-09-28 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Blurred photos generation method, blurred photos generation device and mobile terminal
CN106060423A (en) * | 2016-06-02 | 2016-10-26 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method, device and mobile terminal for generating blurred photos
CN106993112A (en) * | 2017-03-09 | 2017-07-28 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Background blurring method and device based on depth of field, and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105303514B (en) * | 2014-06-17 | 2019-11-05 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and device
US10284835B2 (en) * | 2015-09-04 | 2019-05-07 | Apple Inc. | Photo-realistic shallow depth-of-field rendering from focal stacks
CN107343144A (en) * | 2017-07-10 | 2017-11-10 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Dual camera switching processing method, device and equipment thereof

Also Published As

Publication number | Publication date
CN107945105A (en) | 2018-04-20

Similar Documents

Publication | Title
CN107945105B (en) | Background blur processing method, device and equipment
CN107948519B (en) | Image processing method, device and equipment
CN107977940B (en) | Background blurring processing method, device and equipment
EP3480784B1 (en) | Image processing method, and device
KR102279436B1 (en) | Image processing methods, devices and devices
US10825146B2 (en) | Method and device for image processing
CN109712192B (en) | Camera module calibration method and device, electronic equipment and computer readable storage medium
CN108111749B (en) | Image processing method and device
CN108024057B (en) | Background blurring processing method, device and equipment
CN108154514B (en) | Image processing method, device and equipment
CN108053363A (en) | Background blurring processing method, device and equipment
CN108024058B (en) | Image blurring processing method and device, mobile terminal and storage medium
CN107872631B (en) | Image shooting method and device based on double cameras and mobile terminal
US9619886B2 (en) | Image processing apparatus, imaging apparatus, image processing method and program
CN108053438B (en) | Depth of field acquisition method, device and device
CN111932587A (en) | Image processing method and device, electronic equipment and computer readable storage medium
CN109559353B (en) | Camera module calibration method, device, electronic device, and computer-readable storage medium
CN107948618A (en) | Image processing method, image processing device, computer-readable storage medium and computer equipment
HK1251749A1 (en) | Image processing method, device and equipment
HK1251749B (en) | Image processing method, device and equipment
HK1249686B (en) | Image processing method, apparatus and device

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
CB02 | Change of applicant information
    Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan 523860, Guangdong
    Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd.
    Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan 523860, Guangdong
    Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd.
GR01 | Patent grant
