CN112348958A - Method, device and system for acquiring key frame image and three-dimensional reconstruction method - Google Patents

Method, device and system for acquiring key frame image and three-dimensional reconstruction method

Info

Publication number
CN112348958A
CN112348958A
Authority
CN
China
Prior art keywords
shooting position
image
key frame
target object
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011291713.XA
Other languages
Chinese (zh)
Other versions
CN112348958B (en)
Inventor
杜峰
严庆安
刘享军
郭复胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202011291713.XA
Publication of CN112348958A
Priority to PCT/CN2021/119860
Application granted
Publication of CN112348958B
Legal status: Active (Current)
Anticipated expiration

Abstract

Translated from Chinese

The present disclosure relates to a method, device and system for acquiring key frame images, and to a three-dimensional reconstruction method, in the field of computer technology. The acquisition method includes: determining point cloud data of the target object according to multi-frame depth images of the target object acquired by an image acquisition device at the current shooting position and the internal parameters of the image acquisition device; establishing a three-dimensional model that can surround the target object according to the position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and its external parameters at the current shooting position, the surface of the three-dimensional model being divided into multiple grids, each grid corresponding to one shooting position; and determining, in the multi-frame depth images at each shooting position, the key frame image of the target object at the corresponding shooting position according to the grid corresponding to that shooting position and the shooting positions and shooting directions of the multi-frame depth images obtained there.


Description

Method, device and system for acquiring key frame image and three-dimensional reconstruction method
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for acquiring key frame images, a system for acquiring key frame images, a three-dimensional reconstruction method and apparatus based on key frame images, and a non-volatile computer-readable storage medium.
Background
Three-dimensional reconstruction techniques are a research hotspot in the fields of computer graphics and computer vision. Before the advent of consumer-grade depth cameras, the input data for three-dimensional reconstruction was typically only RGB (Red Green Blue) images. Based on a series of RGB images shot from different angles, a three-dimensional model of the object is restored through a stereoscopic vision algorithm. These RGB images are the key frames on which three-dimensional reconstruction relies. In order to reconstruct a three-dimensional model of an object accurately and completely, the key frames must cover a complete range of angles and must not be too blurred.
With the advent of various consumer-grade depth cameras, three-dimensional reconstruction techniques based on depth cameras have been rapidly developed. With the aid of the depth image acquired by the depth camera, a three-dimensional model of the object can be rapidly restored. In order to restore the reality of the three-dimensional model, the real texture of the object also needs to be restored by texture mapping. Texture mapping relies on a series of key frames covering various angles of the three-dimensional model.
Therefore, regardless of the three-dimensional reconstruction technique used, key frames that cover a complete range of angles and are sufficiently clear are essential for obtaining a high-quality three-dimensional model.
In the related art, key frame images at various angles can be selected by sampling frames over time according to the time interval and the degree of blurring of each frame of image; alternatively, multiple cameras can be erected around the target object to acquire key frame images at various angles.
Disclosure of Invention
The inventors of the present disclosure found that the following problems exist in the above-described related art: the shooting angle covered by the key frame image is not comprehensive, so that the quality of key frame image acquisition is reduced; or the cost of acquiring a full angle keyframe image is prohibitive.
In view of this, the present disclosure provides a key frame image acquisition technical solution, which can acquire a key frame image of a complete shooting angle without increasing cost, thereby improving the acquisition quality of the key frame image.
According to some embodiments of the present disclosure, there is provided a method for acquiring a key frame image, including: determining point cloud data of a target object according to a multi-frame depth image of the target object acquired by an image acquisition device at a current shooting position and internal parameters of the image acquisition device; establishing a three-dimensional model capable of surrounding a target object according to position information of point cloud data of the target object, internal parameters of image acquisition equipment and external parameters at a current shooting position, wherein the surface of the three-dimensional model is divided into a plurality of grids, and each grid corresponds to one shooting position; and determining the key frame image of the target object at the corresponding shooting position in the multi-frame depth images at the shooting positions according to the grids corresponding to the shooting positions, the shooting positions and the shooting directions of the multi-frame depth images acquired at the shooting positions.
In some embodiments, creating a three-dimensional model that can enclose the target object comprises: calculating a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system according to the internal parameters of the image acquisition device; and establishing the three-dimensional model according to the projection matrix, the position information of the vertices of the grids, and the external parameters of the image acquisition device at the shooting positions corresponding to the grids.
In some embodiments, creating a three-dimensional model that can enclose the target object comprises: performing plane fitting according to the point cloud data of the target object to determine a plane where the target object is located; and establishing a three-dimensional model according to the position of the target object on the plane.
In some embodiments, building the three-dimensional model based on the position of the target object on the plane model comprises: determining a circle center position according to the position of the target object on the plane model; and establishing a three-dimensional hemispherical model on the plane model, as the three-dimensional model, according to the circle center position.
In some embodiments, determining the key frame image of the target object at the corresponding shooting position in the multiple frames of depth images at the shooting positions comprises: determining a key frame image of the current shooting position according to a first ray, a second ray and a grid position corresponding to the current shooting position of the multi-frame depth image of the current shooting position, wherein the second ray is a ray which points to the central point of the three-dimensional model on the plane of the target object from the position of the image acquisition equipment in the world coordinate system as a starting point, and the first ray points to the shooting direction of the image acquisition equipment at the current shooting position.
In some embodiments, determining the key frame image of the current shooting position according to the first ray, the second ray of the multi-frame depth images of the current shooting position and the grid position corresponding to the current shooting position includes: under the condition that both the first ray and the second ray of a certain frame depth image of the current shooting position pass through the grid corresponding to the current shooting position, determining that frame depth image as a candidate image of the current shooting position; and determining the key frame image of the current shooting position among the candidate images according to the sharpness of each candidate image.
In some embodiments, determining the key frame image of the current shooting position among the candidate images according to the sharpness of each candidate image includes: calculating edge feature information of each candidate image by using an edge detection operator; and determining the sharpness of each candidate image according to the statistical characteristics of the edge feature information.
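The patent does not fix a particular edge operator or statistic; as an illustrative sketch (function names, the 3x3 Sobel operator, and the mean-magnitude statistic are assumptions), the sharpness score described above could be computed as follows:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Edge magnitude via 3x3 Sobel operators, implemented in pure NumPy."""
    img = img.astype(np.float64)
    p = np.pad(img, 1, mode="edge")
    # horizontal gradient: right column minus left column of the 3x3 window
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    # vertical gradient: bottom row minus top row of the 3x3 window
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

def sharpness(img: np.ndarray) -> float:
    """Statistic of the edge map: here, the mean edge magnitude."""
    return float(sobel_edges(img).mean())
```

A blurrier candidate produces weaker edges and hence a lower score, so candidates can be ranked by this value.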
In some embodiments, determining the key frame image of the target object at the corresponding shooting position in the multiple frames of depth images at the shooting positions comprises: if candidate images with sharpness greater than a threshold exist, determining each such candidate image as a key frame image, taking the next shooting position of the image acquisition device as the current shooting position, and determining the corresponding key frame image in the multi-frame depth images of the new current shooting position; and if no candidate image with sharpness greater than the threshold exists, acquiring the multi-frame depth images again at the current shooting position until such a candidate image exists.
In some embodiments, the position of the image capture device in the world coordinate system and the capture direction are calculated based on external parameters of the image capture device at the current capture location.
In some embodiments, the acquisition method further comprises: responding to the image acquisition equipment to acquire a plurality of frames of depth images at the current shooting position, and performing first marking processing on a corresponding grid at the current shooting position on the three-dimensional model; and responding to the key frame image of the current shooting position, and performing second marking processing on the corresponding grid of the current shooting position on the three-dimensional model to identify that the key frame image of the current shooting position is completely acquired.
In some embodiments, the acquisition method further comprises: and performing three-dimensional reconstruction on the target object according to the key frame image of the target object at each shooting position.
According to other embodiments of the present disclosure, there is provided a three-dimensional reconstruction method based on key frame images, including: acquiring the key frame images of the target object at each shooting position by using the key frame image acquisition method of any one of the above embodiments; and performing three-dimensional reconstruction on the target object according to the key frame images of the target object at each shooting position.
According to still other embodiments of the present disclosure, there is provided an apparatus for acquiring a key frame image, including: the point cloud determining unit is used for determining point cloud data of the target object according to the multi-frame depth image of the target object acquired by the image acquiring device at the current shooting position and the internal parameters of the image acquiring device; the system comprises an establishing unit, a processing unit and a processing unit, wherein the establishing unit is used for establishing a three-dimensional model capable of surrounding a target object according to position information of point cloud data of the target object, internal parameters of image acquisition equipment and external parameters at a current shooting position, the surface of the three-dimensional model is divided into a plurality of grids, and each grid corresponds to one shooting position; and the key frame determining unit is used for determining the key frame image of the target object at the corresponding shooting position in the multi-frame depth images at the shooting positions according to the grids corresponding to the shooting positions, the shooting positions and the shooting directions of the multi-frame depth images acquired at the shooting positions.
In some embodiments, the building unit calculates a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system according to internal parameters of the image acquisition device; and establishing a three-dimensional model according to the projection matrix, the position information of the top points of the grids and the external parameters of the image acquisition equipment at the shooting positions corresponding to the grids.
In some embodiments, the establishing unit performs plane fitting according to the point cloud data of the target object to determine a plane where the target object is located; and establishing a three-dimensional model according to the position of the target object on the plane.
In some embodiments, the establishing unit determines the circle center position according to the position of the target object on the plane model; and establishing a three-dimensional hemispherical model on the plane model as a three-dimensional model according to the circle center position.
In some embodiments, the key frame determining unit determines the key frame image of the current shooting position according to a first ray, a second ray and a grid position corresponding to the current shooting position of the multi-frame depth image of the current shooting position, the second ray is a ray pointing to a center point of the three-dimensional model in a plane where the target object is located, the position of the image obtaining device in a world coordinate system is taken as a starting point, and the first ray points to a shooting direction of the image obtaining device in the current shooting position.
In some embodiments, the key frame determining unit determines a frame depth image as a candidate image of the current shooting position when both the first ray and the second ray of that frame depth image pass through the grid corresponding to the current shooting position; and determines the key frame image of the current shooting position among the candidate images according to the sharpness of each candidate image.
In some embodiments, the key frame determination unit calculates edge feature information of each candidate image using an edge detection operator; and determines the sharpness of each candidate image according to the statistical characteristics of the edge feature information.
In some embodiments, if candidate images with sharpness greater than a threshold exist, the key frame determining unit determines each such candidate image as a key frame image and, taking the next shooting position of the image acquisition device as the current shooting position, determines the corresponding key frame image in the multi-frame depth images of the new current shooting position; if no candidate image with sharpness greater than the threshold exists, the multi-frame depth images are acquired again at the current shooting position until such a candidate image exists.
In some embodiments, the position of the image capture device in the world coordinate system and the capture direction are calculated based on external parameters of the image capture device at the current capture location.
In some embodiments, the acquisition device further comprises: and the marking unit is used for responding to the image acquisition equipment to acquire multi-frame depth images at the current shooting position, carrying out first marking processing on the corresponding grid of the current shooting position on the three-dimensional model, responding to the key frame image of the current shooting position, and carrying out second marking processing on the corresponding grid of the current shooting position on the three-dimensional model, and is used for marking that the key frame image of the current shooting position is completely acquired.
In some embodiments, the acquisition device further comprises: and the reconstruction unit is used for performing three-dimensional reconstruction on the target object according to the key frame image of the target object at each shooting position.
According to still further embodiments of the present disclosure, there is provided a three-dimensional reconstruction apparatus of a key frame image, including: the acquisition unit is used for executing the acquisition method of the key frame image of any one of the embodiments and acquiring the key frame image of the target object at each shooting position; and the reconstruction unit is used for performing three-dimensional reconstruction on the target object according to the key frame image of the target object at each shooting position.
According to still further embodiments of the present disclosure, there is provided an electronic device including: a memory; and a processor coupled to the memory, the processor configured to perform the method for acquiring key frame images or the three-dimensional reconstruction method in any of the above embodiments based on instructions stored in the memory.
According to still further embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for acquiring a keyframe image or the method for three-dimensional reconstruction of a keyframe image in any of the above-mentioned embodiments.
According to still further embodiments of the present disclosure, there is provided a key frame image acquisition system including: a key frame image acquisition device for executing the key frame image acquisition method in any of the above embodiments; and the image acquisition equipment is used for acquiring multi-frame depth images of the target object at different shooting positions.
In the above embodiment, a three-dimensional model capable of surrounding the target object is established, and the collection of the keyframe image is guided according to the grids corresponding to the shooting positions on the surface of the three-dimensional model. Therefore, the key frame image acquisition direction can be synchronously guided when the object is scanned, so that the acquired key frame can completely cover each shooting angle, and the acquisition quality of the key frame image is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure can be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 illustrates a flow diagram of some embodiments of a method of capturing keyframe images of the present disclosure;
FIG. 2 illustrates a flow diagram of some embodiments of step 130 in FIG. 1;
FIG. 3 illustrates a schematic diagram of some embodiments of a method of capturing keyframe images of the present disclosure;
FIG. 4 illustrates a flow diagram of some embodiments of a method of three-dimensional reconstruction of keyframe images of the present disclosure;
FIG. 5 illustrates a block diagram of some embodiments of an acquisition apparatus of key frame images of the present disclosure;
FIG. 6 illustrates a block diagram of some embodiments of an apparatus for three-dimensional reconstruction of keyframe images of the present disclosure;
FIG. 7 illustrates a block diagram of some embodiments of an electronic device of the present disclosure;
FIG. 8 shows a block diagram of further embodiments of an electronic device of the present disclosure;
fig. 9 illustrates a block diagram of some embodiments of a keyframe image acquisition system of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In view of the above technical problems, in the embodiments of the present disclosure, a user may scan a full circle around the target object while holding a mobile phone or tablet computer equipped with a depth camera, collecting a certain number of depth images at each shooting position.
Therefore, the acquired depth image can cover all angles of the object, and a sufficiently clear depth image can be selected as a key frame image in a certain shooting angle range, so that data redundancy is reduced.
In addition, in the acquisition process, the key frame images can be synchronously determined from the depth images acquired from all shooting positions, so that the acquisition efficiency of the key frame images is improved; and visual feedback is given to the user, so that the user can immediately know whether the corresponding shooting position and angle have collected the key frame image, and the key frame image can cover a complete angle range.
In addition, the technical scheme of the disclosure does not need to erect a camera array, and the hardware cost is reduced. For example, the technical solution of the present disclosure can be realized by the following embodiments.
Fig. 1 illustrates a flow diagram of some embodiments of a method of capturing keyframe images of the present disclosure.
As shown in fig. 1, the method includes: step 110, determining point cloud data of a target object; step 120, establishing a three-dimensional model surrounding the target object; and step 130, determining a key frame image.
In step 110, point cloud data of the target object is determined according to the multi-frame depth images of the target object acquired by the image acquisition device at the current shooting position and the internal parameters of the image acquisition device. For example, multiple frames of depth images can be acquired at the current shooting position, or the orientation of the image acquisition device can be adjusted so as to acquire multiple frames of depth images from different shooting directions.
In some embodiments, the object to be scanned, i.e., the target object, may be placed on a suitable plane (e.g., a desktop or the ground); the user holds a mobile phone or tablet computer with a depth camera, aims it at the object, and collects the corresponding RGB-D (Red Green Blue - Depth) data; the point cloud data required for three-dimensional reconstruction is then calculated from the acquired RGB-D data.
For example, the image acquisition device is a depth camera with the internal parameter matrix:

    K = | fx  0   cx |
        | 0   fy  cy |
        | 0   0   1  |

where fx and fy are the focal lengths of the depth camera, and cx and cy are the principal point coordinates of the depth camera. The internal parameter matrix does not change as the depth camera moves.

If the depth value at a point (u, v) on a certain frame depth image is d, the point cloud coordinate corresponding to that point can be determined as:

    (x, y, z) = ( (u - cx) * d / fx,  (v - cy) * d / fy,  d )
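The back-projection formula above can be sketched in NumPy (a hypothetical helper, not part of the patent; zero-depth pixels are treated as invalid and dropped):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, metric depth d per pixel) into
    camera-space 3-D points using the pinhole model:
    x = (u - cx) * d / fx, y = (v - cy) * d / fy, z = d."""
    h, w = depth.shape
    # pixel coordinate grids: u varies along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth measurement
```

Each valid pixel contributes one 3-D point expressed in the camera coordinate frame of that depth frame.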
In step 120, a three-dimensional model capable of surrounding the target object is built according to the position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and the external parameters at the current shooting position. The surface of the three-dimensional model is divided into a plurality of meshes, and each mesh corresponds to one shooting position.
In some embodiments, after the point cloud data of each frame of depth image at the current shooting position is obtained, the pose matrix (i.e., the camera external parameters) corresponding to each frame of depth image may be solved through algorithms such as ICP (Iterative Closest Point):

    V = | R  T |
        | 0  1 |

That is, the camera external parameters consist of the 3x3 rotation matrix R and the 3x1 translation column vector T. The camera external parameters are related to the shooting position.
In some embodiments, performing plane fitting according to the point cloud data of the target object, and determining a plane where the target object is located; and establishing a three-dimensional model according to the position of the target object on the plane. For example, according to the position of the target object on the plane model, the circle center position is determined; and establishing a three-dimensional hemispherical model on the plane model as a three-dimensional model according to the circle center position.
For example, after sufficient point cloud data is acquired, plane fitting processing may be performed by using an algorithm such as RANSAC (RANdom SAmple Consensus).
First, three points are randomly selected from the point cloud data, and a candidate plane is determined from them; then the distances from all other points to the candidate plane are calculated. If a point's distance is smaller than the distance threshold, the point is considered to lie on the candidate plane and is marked as an inlier; otherwise, it is marked as an outlier. If the number of inliers exceeds a number threshold, the candidate plane is saved as the plane model; if not, three points are reselected to determine a new candidate plane.
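A minimal NumPy sketch of this RANSAC loop (the thresholds, iteration cap, and function name are illustrative assumptions, not the patent's values):

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.01, min_inliers=100, iters=200, rng=None):
    """RANSAC plane fit: repeatedly pick 3 points, form a candidate plane
    n . x + d = 0, count points within dist_thresh of it, and accept the
    candidate once the inlier count reaches min_inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:               # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ a
        dist = np.abs(points @ n + d)  # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() >= min_inliers:
            return n, d, inliers       # accept this candidate plane
    return None                        # no acceptable plane found
```

The returned inlier mask identifies the supporting plane (e.g., the desktop) on which the hemisphere model is later placed.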
For example, after obtaining the plane on which the target object is placed, the three-dimensional hemispherical model may be rendered at the corresponding position on the plane (e.g., rendered efficiently using OpenGL), so that the model encloses the target object.
In order to ensure that the rendered three-dimensional hemisphere model is consistent with physical space (namely, the world coordinate system), i.e., that the position of the model in the world coordinate system does not change with the camera position, the model can be established using the real-time projection matrix and the pose matrix of the current shooting position.
In some embodiments, a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system is calculated based on the internal parameters of the image acquisition device; and the three-dimensional model is established according to the projection matrix, the position information of the vertices of the grids, and the external parameters of the image acquisition device at the shooting positions corresponding to the grids.
For example, the projection matrix P corresponding to the current shooting position is calculated from the internal parameter matrix of the depth camera; one standard OpenGL-style form built from the intrinsics is:

    P = | 2*fx/width  0            1 - 2*cx/width              0                        |
        | 0           2*fy/height  2*cy/height - 1             0                        |
        | 0           0            -(far + near)/(far - near)  -2*far*near/(far - near) |
        | 0           0            -1                          0                        |

where width and height are the width and height of the depth image, and far and near are the current far and near clipping planes of the depth camera.
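A sketch of this construction in Python; the sign conventions follow one common OpenGL convention (camera looking down -z) and are an assumption, since conventions differ between renderers:

```python
import numpy as np

def projection_from_intrinsics(fx, fy, cx, cy, width, height, near, far):
    """OpenGL-style 4x4 projection matrix built from pinhole intrinsics.
    Maps camera-space points to clip space; near plane lands at NDC z = -1."""
    return np.array([
        [2 * fx / width, 0.0, 1.0 - 2 * cx / width, 0.0],
        [0.0, 2 * fy / height, 2 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])
```

With a centered principal point the off-axis terms vanish, and a point on the near plane projects to the front of the normalized device cube.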
Sets of vertices on the surface of the three-dimensional model may be provided, and each set of vertices defines a mesh. When the coordinates of a set of vertices are M, the rendered position is gl_Position = P * V * M. According to the gl_Position values of all sets of vertices, a three-dimensional model whose surface is divided into multiple grids can be established. The grids corresponding to the different shooting positions serve as part of the three-dimensional model UI (User Interface); they can be used both as the screening basis for key frame images and to guide the collection of key frame images covering a complete range of angles.
In step 130, a key frame image of the target object at the corresponding shooting position is determined in the multi-frame depth images at the shooting positions according to the grids corresponding to the shooting positions and the shooting positions and shooting directions of the multi-frame depth images obtained at those positions.
In some embodiments, the key frame images may be determined according to the embodiment in fig. 2.
Fig. 2 illustrates a flow diagram of some embodiments of step 130 in fig. 1.
As shown in fig. 2, step 130 includes: step 1310, determining a first ray and a second ray of the current shooting position; step 1320, determining the sharpness of each candidate image; step 1330, determining whether the sharpness is greater than a threshold; step 1340, acquiring the depth images again; step 1350, determining key frame images; and step 1360, moving the image acquisition device.
In step 1310, a first ray is determined according to the shooting direction of the current shooting position, taking the position of the image acquisition device in the world coordinate system as its starting point; and a ray starting from the same position and pointing to the center point of the three-dimensional model on the plane of the target object is determined as the second ray.

The first ray encodes the shooting direction of the image acquisition device, and the second ray encodes its current position; together they serve as the basis for screening key frame images.
In some embodiments, the position of the image capture device in the world coordinate system and the capture direction are calculated based on external parameters of the image capture device at the current capture location.
In some embodiments, the key frame image of the current shooting position is determined from the multi-frame depth images of the current shooting position according to the first ray, the second ray, and the grid corresponding to the current shooting position. For example, candidate images for the key frame image may be determined by the embodiment in fig. 3.
Fig. 3 illustrates a schematic diagram of some embodiments of a method of acquiring a keyframe image of the present disclosure.
As shown in fig. 3, the target object is completely covered by the three-dimensional hemisphere model; the first ray of the image acquisition device is l1, and the second ray is l2.

In some embodiments, the position p_camera of the depth camera in the world coordinate system and the camera orientation d_camera may be derived from the extrinsic parameters R and T:

Figure BDA0002783985490000111

Taking the current position of the depth camera as its endpoint, the ray l1 pointing along the camera orientation d_camera is taken as the orientation ray; taking the same position as its endpoint, the ray l2 pointing along the line po - p_camera from the camera toward the sphere center po is taken as the position ray. Key frame images can then be screened according to the grid corresponding to the current shooting position, the camera orientation, and the camera-to-sphere-center direction.

In some embodiments, when both the first ray and the second ray of a certain frame of depth image at the current shooting position pass through the grid corresponding to that position, the frame is determined as a candidate image of the current shooting position. For example, in the figure both ray l1 and ray l2 pass through the grid corresponding to the current shooting position, so the corresponding depth image is determined as a candidate image; that is, both the position ray and the orientation ray of the current depth camera are aimed at the target object.
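The pose recovery behind the two rays can be sketched in Python. The sketch assumes the common extrinsic convention x_cam = R·x_world + T, under which the camera center is -Rᵀ·T and the optical axis is Rᵀ·(0, 0, 1); the exact formulas in the patent's equation image may differ.

```python
import numpy as np

def camera_pose(R, T):
    """Camera center and viewing direction in world coordinates.

    Assumes the extrinsic convention x_cam = R @ x_world + T (an assumption,
    not taken from the patent text).
    """
    p_camera = -R.T @ T                          # camera position in the world
    d_camera = R.T @ np.array([0.0, 0.0, 1.0])   # optical axis in the world
    return p_camera, d_camera / np.linalg.norm(d_camera)

def rays(R, T, p_o):
    """Orientation ray l1 and position ray l2 for the current shot.

    p_o is the sphere center of the hemisphere model.
    """
    p_camera, d_camera = camera_pose(R, T)
    l1 = d_camera                                # orientation ray direction
    l2 = p_o - p_camera                          # toward the sphere center
    return p_camera, l1, l2 / np.linalg.norm(l2)
```

A frame becomes a candidate when both ray directions, cast from p_camera, intersect the grid cell assigned to the current shooting position.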
After the candidate images are determined, the key frame image of the current shooting position can be determined in each candidate image according to the definition degree of each candidate image through the remaining steps in fig. 2.
In step 1320, the definition of each candidate image is determined.
In some embodiments, edge feature information of each candidate image is calculated by using an edge detection operator (such as a Laplace operator, a Sobel operator, a Canny operator, and the like); and determining the definition of each candidate image according to the statistical characteristics of each edge characteristic information.
For example, definition detection may be performed using the Laplace operator: the image is converted to grayscale and processed with the Laplace operator, and the variance of the Laplacian response is taken as the definition of the current image.
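A minimal NumPy sketch of this Laplacian-variance definition measure (in practice OpenCV's `cv2.Laplacian` would typically be used; the threshold against which the score is compared is application-specific and not given in the patent):

```python
import numpy as np

def sharpness(gray):
    """Variance of the 3x3 Laplacian response; higher means sharper.

    `gray` is a 2-D grayscale image.  Mirrors the common
    cv2.Laplacian(gray, cv2.CV_64F).var() recipe, up to border handling.
    """
    g = gray.astype(np.float64)
    # 3x3 Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]], interior pixels only
    resp = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
            - 4.0 * g[1:-1, 1:-1])
    return resp.var()
```

A flat image has zero Laplacian variance, while a high-contrast pattern such as a checkerboard scores high, which is why the variance separates blurred frames from sharp ones.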
In step 1330, it is determined whether the definition of each candidate image is greater than a threshold. If no candidate image exceeds the threshold, step 1340 is performed; if at least one does, step 1350 is performed.

In step 1340, multiple frames of depth images are acquired again at the current shooting position, and steps 1310-1330 are repeated until a candidate image with definition greater than the threshold exists.

In step 1350, the candidate images with definition greater than the threshold are determined as key frame images.

In step 1360, the image acquisition device is moved and multiple frames of depth images are acquired again (step 1340). Taking the next shooting position of the image acquisition device as the current shooting position, the corresponding key frame image is determined among the multi-frame depth images of that position (steps 1310-1360 are repeated).
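The per-position loop of steps 1310-1360 can be summarized as follows. `capture_frames` and `is_candidate` are hypothetical callbacks standing in for the depth-camera I/O and the ray/grid candidate test; they are not part of the patent.

```python
def collect_keyframes(positions, capture_frames, is_candidate, sharpness,
                      threshold):
    """Sketch of steps 1310-1360: for each shooting position, keep
    re-capturing until some candidate frame exceeds the sharpness threshold.

    Note: with a capture source that never yields a sharp candidate this
    loop would not terminate; real code would bound the retries.
    """
    keyframes = {}
    for pos in positions:                   # step 1360: move to next position
        while pos not in keyframes:
            frames = capture_frames(pos)    # step 1340: (re)acquire frames
            candidates = [f for f in frames
                          if is_candidate(f, pos)]                # step 1310
            best = max(candidates, key=sharpness, default=None)   # step 1320
            if best is not None and sharpness(best) > threshold:  # step 1330
                keyframes[pos] = best       # step 1350: save the key frame
    return keyframes
```

For instance, with a stubbed capture source the loop picks the sharpest qualifying frame per position and moves on.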
In some embodiments, the three-dimensional model rendered around the target object may serve as a collection UI that guides the acquisition of key frame images.
In some embodiments, in response to the image acquisition device acquiring multiple frames of depth images at the current shooting position, a first marking process is performed on the corresponding mesh of the current shooting position on the three-dimensional model. For example, when the depth camera is aimed at a certain capture angle, the corresponding grid may be illuminated (e.g., its color changed) to indicate the coverage angle of the key frame images acquired so far.

In some embodiments, in response to determining the key frame image of the current shooting position, a second marking process is performed on the corresponding grid of the current shooting position on the three-dimensional model, to indicate that key frame acquisition at the current shooting position is complete.

For example, if the definition is greater than the threshold, the candidate image is saved as the key frame image and the color of the corresponding grid changes to indicate that a key frame has been acquired at the current angle; the camera can then move to the next shooting position to continue acquisition. When key frame images have been acquired for all angles, every grid has changed color, indicating that key frame images covering the complete angle range have been collected, and scanning can finish as required.
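The two marking stages can be modeled as a small state machine over the grid cells; the state names and class below are illustrative, not taken from the patent.

```python
from enum import Enum

class CellState(Enum):
    UNSEEN = 0      # grid cell not yet aimed at
    CAPTURING = 1   # first marking: frames being acquired at this angle
    DONE = 2        # second marking: key frame saved for this angle

class CoverageUI:
    """Tracks which viewing angles still need a key frame (illustrative)."""

    def __init__(self, n_cells):
        self.cells = [CellState.UNSEEN] * n_cells

    def mark_capturing(self, i):
        # first marking process: camera is aimed at cell i
        if self.cells[i] is CellState.UNSEEN:
            self.cells[i] = CellState.CAPTURING

    def mark_done(self, i):
        # second marking process: key frame for cell i acquired
        self.cells[i] = CellState.DONE

    def scan_complete(self):
        # all grid cells have changed color -> full angular coverage
        return all(c is CellState.DONE for c in self.cells)
```

Scanning finishes once `scan_complete()` reports that every cell has reached the second-marking state.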
Fig. 4 illustrates a flow diagram of some embodiments of a method of three-dimensional reconstruction of a keyframe image of the present disclosure.
As shown in fig. 4, in step 410, key frame images of the target object at each shooting position are collected using the key frame image collection method of any of the above embodiments.

In step 420, three-dimensional reconstruction of the target object is performed based on the key frame images of the target object at the respective shooting positions.
Because the collected key frame images cover the complete angle range, the quality of the three-dimensional reconstruction can be improved.
Fig. 5 illustrates a block diagram of some embodiments of an acquisition apparatus of a key frame image of the present disclosure.
As shown in fig. 5, the key frame image acquisition apparatus 5 includes a point cloud determining unit 51, a building unit 52, and a key frame determining unit 53.

The point cloud determining unit 51 determines the point cloud data of the target object based on the multi-frame depth images of the target object acquired by the image acquisition device at the current shooting position and the internal parameters of the image acquisition device.

The building unit 52 builds a three-dimensional model capable of surrounding the target object based on the position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and the external parameters at the current shooting position. The surface of the three-dimensional model is divided into multiple meshes, each mesh corresponding to one shooting position.

The key frame determining unit 53 determines the key frame image of the target object at the corresponding shooting position from the multi-frame depth images at each shooting position, based on the mesh corresponding to each shooting position and the shooting positions and shooting directions of the multi-frame depth images acquired there.
In some embodiments, the building unit 52 calculates a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system according to the internal parameters of the image acquisition device, and establishes the three-dimensional model according to the projection matrix, the position information of the vertices of each mesh, and the external parameters of the image acquisition device at the shooting position corresponding to each mesh.

In some embodiments, the building unit 52 performs plane fitting according to the point cloud data of the target object to determine the plane where the target object is located, and establishes the three-dimensional model according to the position of the target object on that plane.

In some embodiments, the building unit 52 determines a circle center position according to the position of the target object on the plane model, and establishes a three-dimensional hemispherical model on the plane model as the three-dimensional model according to the circle center position.

In some embodiments, the key frame determining unit 53 determines the key frame image of the current shooting position according to the first ray and second ray of the multi-frame depth images of the current shooting position and the grid position corresponding to the current shooting position. The first ray points in the shooting direction of the image acquisition device at the current shooting position; the second ray starts at the position of the image acquisition device in the world coordinate system and points to the center point of the three-dimensional model in the plane where the target object is located.

In some embodiments, when both the first ray and the second ray of a frame of depth image at the current shooting position pass through the grid corresponding to the current shooting position, the key frame determining unit 53 determines that frame as a candidate image of the current shooting position, and then determines the key frame image of the current shooting position among the candidate images according to the definition of each candidate image.

In some embodiments, the key frame determining unit 53 calculates edge feature information of each candidate image using an edge detection operator, and determines the definition of each candidate image according to the statistical features of the edge feature information.

In some embodiments, when candidate images with definition greater than a threshold exist, the key frame determining unit 53 determines the corresponding candidate images as key frame images and, taking the next shooting position of the image acquisition device as the current shooting position, determines the corresponding key frame image among the multi-frame depth images of that position; when no candidate image with definition greater than the threshold exists, multi-frame depth images are acquired again at the current shooting position until such a candidate image exists.
In some embodiments, the position of the image capture device in the world coordinate system and the capture direction are calculated based on external parameters of the image capture device at the current capture location.
In some embodiments, the acquisition apparatus 5 further comprises a marking unit 54 configured to perform a first marking process on the corresponding mesh of the current shooting position on the three-dimensional model in response to the image acquisition device acquiring multiple frames of depth images at the current shooting position, and to perform a second marking process on the corresponding grid in response to determining the key frame image of the current shooting position, to indicate that key frame acquisition at that position is complete.

In some embodiments, the acquisition apparatus 5 further comprises a reconstruction unit 55 configured to perform three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.
Fig. 6 illustrates a block diagram of some embodiments of an apparatus for three-dimensional reconstruction of keyframe images of the present disclosure.
As shown in fig. 6, the three-dimensional reconstruction apparatus 6 for key frame images includes: an acquisition unit 61, configured to execute the key frame image acquisition method of any of the above embodiments and acquire key frame images of the target object at each shooting position; and a reconstruction unit 62, configured to perform three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.
Fig. 7 illustrates a block diagram of some embodiments of an electronic device of the present disclosure.
As shown in fig. 7, the electronic apparatus 7 of this embodiment includes: a memory 71 and a processor 72 coupled to the memory 71, the processor 72 being configured to execute, based on instructions stored in the memory 71, the method for acquiring a key frame image or the method for three-dimensional reconstruction of a key frame image according to any of the above embodiments.
Thememory 71 may include, for example, a system memory, a fixed nonvolatile storage medium, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader, a database, and other programs.
Fig. 8 shows a block diagram of further embodiments of the electronic device of the present disclosure.
As shown in fig. 8, the electronic apparatus 8 of this embodiment includes: a memory 810 and a processor 820 coupled to the memory 810, the processor 820 being configured to execute, based on instructions stored in the memory 810, the method for acquiring a key frame image or the method for three-dimensional reconstruction of a key frame image according to any of the above embodiments.

The memory 810 may include, for example, system memory, a fixed non-volatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a Boot Loader, and other programs.

The electronic apparatus 8 may also include an input/output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, 850, the memory 810, and the processor 820 may be connected, for example, by a bus 860. The input/output interface 830 provides a connection interface for input/output devices such as a display, mouse, keyboard, touch screen, microphone, and speaker. The network interface 840 provides a connection interface for various networking devices. The storage interface 850 provides a connection interface for external storage devices such as an SD card or a USB flash drive.
Fig. 9 illustrates a block diagram of some embodiments of a keyframe image acquisition system of the present disclosure.
As shown in fig. 9, the key frame image acquisition system 9 includes: a key frame image acquisition apparatus 91 for executing the key frame image acquisition method of any of the above embodiments; and an image acquisition device 92 for acquiring multi-frame depth images of the target object at different shooting positions.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media having computer-usable program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
So far, the acquisition method of a key frame image, the acquisition apparatus of a key frame image, the acquisition system of a key frame image, the three-dimensional reconstruction method of a key frame image, the three-dimensional reconstruction apparatus of a key frame image, and the non-volatile computer-readable storage medium according to the present disclosure have been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
The method and system of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (19)

1. A method for collecting key frame images, comprising:
determining point cloud data of a target object according to multi-frame depth images of the target object acquired by an image acquisition device at a current shooting position and internal parameters of the image acquisition device;
establishing, according to position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and external parameters at the current shooting position, a three-dimensional model capable of surrounding the target object, wherein a surface of the three-dimensional model is divided into multiple grids, each grid corresponding to one shooting position; and
determining, among the multi-frame depth images of each shooting position, a key frame image of the target object at the corresponding shooting position according to the grid corresponding to each shooting position and the shooting positions and shooting directions of the multi-frame depth images acquired at each shooting position.

2. The collection method according to claim 1, wherein establishing the three-dimensional model capable of surrounding the target object comprises:
calculating a projection matrix of the three-dimensional model from a three-dimensional coordinate system to a two-dimensional coordinate system according to the internal parameters of the image acquisition device; and
establishing the three-dimensional model according to the projection matrix, position information of vertices of each grid, and the external parameters of the image acquisition device at the shooting position corresponding to each grid.

3. The collection method according to claim 1, wherein establishing the three-dimensional model capable of surrounding the target object comprises:
performing plane fitting according to the point cloud data of the target object to determine the plane where the target object is located; and
establishing the three-dimensional model according to the position of the target object on the plane.

4. The collection method according to claim 3, wherein establishing the three-dimensional model according to the position of the target object on the plane model comprises:
determining a circle center position according to the position of the target object on the plane model; and
establishing a three-dimensional hemisphere model on the plane model as the three-dimensional model according to the circle center position.

5. The collection method according to claim 1, wherein determining the key frame image of the target object at the corresponding shooting position among the multi-frame depth images of each shooting position comprises:
determining the key frame image of the current shooting position according to a first ray and a second ray of the multi-frame depth images of the current shooting position and the grid position corresponding to the current shooting position,
wherein the first ray points in the shooting direction of the image acquisition device at the current shooting position, and
the second ray starts at the position of the image acquisition device in the world coordinate system and points to the center point of the three-dimensional model in the plane where the target object is located.

6. The collection method according to claim 5, wherein determining the key frame image of the current shooting position according to the first ray, the second ray, and the grid position corresponding to the current shooting position comprises:
in the case that both the first ray and the second ray of a frame of depth image of the current shooting position pass through the grid corresponding to the current shooting position, determining that frame of depth image as a candidate image of the current shooting position; and
determining the key frame image of the current shooting position among the candidate images according to the definition of each candidate image.

7. The collection method according to claim 6, wherein determining the key frame image of the current shooting position among the candidate images according to the definition of each candidate image comprises:
calculating edge feature information of each candidate image using an edge detection operator; and
determining the definition of each candidate image according to statistical features of the edge feature information.

8. The collection method according to claim 6, wherein determining the key frame image of the target object at the corresponding shooting position among the multi-frame depth images of each shooting position comprises:
in the case that a candidate image with a definition greater than a threshold exists, determining each corresponding candidate image as a key frame image, taking the next shooting position of the image acquisition device as the current shooting position, and determining the corresponding key frame image among the multi-frame depth images of that shooting position; and
in the case that no candidate image with a definition greater than the threshold exists, acquiring multi-frame depth images again at the current shooting position until a candidate image with a definition greater than the threshold exists.

9. The collection method according to claim 5, wherein the position of the image acquisition device in the world coordinate system and the shooting direction are calculated according to the external parameters of the image acquisition device at the current shooting position.

10. The collection method according to any one of claims 1-9, further comprising:
in response to the image acquisition device acquiring multi-frame depth images at the current shooting position, performing first marking processing on the corresponding grid of the current shooting position on the three-dimensional model; and
in response to determining the key frame image of the current shooting position, performing second marking processing on the corresponding grid of the current shooting position on the three-dimensional model to indicate that key frame image acquisition at the current shooting position is complete.

11. The collection method according to any one of claims 1-9, further comprising:
performing three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.

12. A three-dimensional reconstruction method of key frame images, comprising:
collecting key frame images of a target object at each shooting position using the method for collecting key frame images according to any one of claims 1-10; and
performing three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.

13. A device for collecting key frame images, comprising:
a point cloud determining unit configured to determine point cloud data of a target object according to multi-frame depth images of the target object acquired by an image acquisition device at a current shooting position and internal parameters of the image acquisition device;
a building unit configured to establish, according to position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and external parameters at the current shooting position, a three-dimensional model capable of surrounding the target object, wherein a surface of the three-dimensional model is divided into multiple grids, each grid corresponding to one shooting position; and
a key frame determining unit configured to determine, among the multi-frame depth images of each shooting position, a key frame image of the target object at the corresponding shooting position according to the grid corresponding to each shooting position and the shooting positions and shooting directions of the multi-frame depth images acquired at each shooting position.

14. The collection device according to claim 13, further comprising:
a marking unit configured to perform first marking processing on the corresponding grid of the current shooting position on the three-dimensional model in response to the image acquisition device acquiring multi-frame depth images at the current shooting position, and to perform second marking processing on the corresponding grid of the current shooting position on the three-dimensional model in response to determining the key frame image of the current shooting position, to indicate that key frame image acquisition at the current shooting position is complete.

15. The collection device according to claim 13, further comprising:
a reconstruction unit configured to perform three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.

16. A three-dimensional reconstruction device for key frame images, comprising:
a collection unit configured to execute the method for collecting key frame images according to any one of claims 1-10 and collect key frame images of a target object at each shooting position; and
a reconstruction unit configured to perform three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.

17. An electronic device, comprising:
a memory; and
a processor coupled to the memory, the processor being configured to execute, based on instructions stored in the memory, the method for collecting key frame images according to any one of claims 1-11 or the three-dimensional reconstruction method of key frame images according to claim 12.

18. A system for collecting key frame images, comprising:
a device for collecting key frame images, configured to execute the method for collecting key frame images according to any one of claims 1-11; and
an image acquisition device configured to acquire multi-frame depth images of a target object at different shooting positions.

19. A non-volatile computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for collecting key frame images according to any one of claims 1-11 or the three-dimensional reconstruction method of key frame images according to claim 12.
CN202011291713.XA | 2020-11-18 | 2020-11-18 | Key frame image acquisition method, device, system and three-dimensional reconstruction method | Active | CN112348958B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202011291713.XA (CN112348958A) | 2020-11-18 | 2020-11-18 | Key frame image acquisition method, device, system and three-dimensional reconstruction method
PCT/CN2021/119860 (WO2022105415A1) | 2020-11-18 | 2021-09-23 | Method, apparatus and system for acquiring key frame image, and three-dimensional reconstruction method

Applications Claiming Priority (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN202011291713.XA | CN112348958B (en) | 2020-11-18 | 2020-11-18 | Key frame image acquisition method, device, system and three-dimensional reconstruction method

Publications (2)

Publication Number | Publication Date
CN112348958A | 2021-02-09
CN112348958B (en) | 2025-02-21

Family

ID=74364181

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202011291713.XA | Key frame image acquisition method, device, system and three-dimensional reconstruction method | 2020-11-18 | 2020-11-18 | Active (granted as CN112348958B (en))

Country Status (2)

Country | Link
CN (1) | CN112348958B (en)
WO (1) | WO2022105415A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN113542868A (en)* | 2021-05-26 | 2021-10-22 | 浙江大华技术股份有限公司 | Video key frame selection method and device, electronic equipment and storage medium
CN114267155A (en)* | 2021-11-05 | 2022-04-01 | 国能大渡河革什扎水电开发有限公司 | A geological hazard monitoring and early warning system based on video recognition technology
CN114549781A (en)* | 2022-02-21 | 2022-05-27 | 脸萌有限公司 | Data processing method and device, electronic equipment and storage medium
WO2022105415A1 (en)* | 2020-11-18 | 2022-05-27 | 北京沃东天骏信息技术有限公司 | Method, apparatus and system for acquiring key frame image, and three-dimensional reconstruction method
WO2024230463A1 (en)* | 2023-05-11 | 2024-11-14 | 北京沃东天骏信息技术有限公司 | Image processing method, apparatus and system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN115289974B (en)* | 2022-10-09 | 2023-01-31 | 思看科技(杭州)股份有限公司 | Hole site measuring method, hole site measuring device, computer equipment and storage medium
CN115713507B (en)* | 2022-11-16 | 2024-10-22 | 华中科技大学 | Digital twinning-based concrete 3D printing forming quality detection method and device
CN116626040A (en)* | 2022-12-07 | 2023-08-22 | 浙江众合科技股份有限公司 | A lesion detection method for tracks and tunnels
CN118967954B (en)* | 2024-10-17 | 2025-01-07 | 福州市规划设计研究院集团有限公司 | Urban space three-dimensional reconstruction method and system based on big data

Citations (13)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
US20020081019A1 (en)* | 1995-07-28 | 2002-06-27 | Tatsushi Katayama | Image sensing and image processing apparatuses
US20070031064A1 (en)* | 2004-06-10 | 2007-02-08 | Wenyi Zhao | Method and apparatus for aligning video to three-dimensional point clouds
US20140002604A1 (en)* | 2011-03-31 | 2014-01-02 | Akio Ohba | Information processing apparatus, information processing method, and data structure of position information
CN104599314A (en)* | 2014-06-12 | 2015-05-06 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system
JP2017118396A (en)* | 2015-12-25 | 2017-06-29 | KDDI株式会社 | Program, apparatus and method for calculating internal parameters of depth camera
CN106910242A (en)* | 2017-01-23 | 2017-06-30 | 中国科学院自动化研究所 | Method and system for indoor full-scene three-dimensional reconstruction based on a depth camera
CN107888828A (en)* | 2017-11-22 | 2018-04-06 | 网易(杭州)网络有限公司 | Spatial localization method and device, electronic equipment and storage medium
CN108805979A (en)* | 2018-02-05 | 2018-11-13 | 清华-伯克利深圳学院筹备办公室 | Dynamic model three-dimensional reconstruction method, device, equipment and storage medium
CN109242961A (en)* | 2018-09-26 | 2019-01-18 | 北京旷视科技有限公司 | Face modeling method, apparatus, electronic equipment and computer-readable medium
CN109872395A (en)* | 2019-01-24 | 2019-06-11 | 中国医学科学院北京协和医院 | An X-ray image simulation method based on a patch model
CN111080771A (en)* | 2020-03-20 | 2020-04-28 | 浙江华云电力工程设计咨询有限公司 | Information model construction method applied to three-dimensional intelligent aided design
CN111414796A (en)* | 2019-01-07 | 2020-07-14 | 福特全球技术公司 | Adaptive transparency of virtual vehicles in simulated imaging systems
CN111739146A (en)* | 2019-03-25 | 2020-10-02 | 华为技术有限公司 | Object 3D model reconstruction method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN104992441B (en)* | 2015-07-08 | 2017-11-17 | 华中科技大学 | A real human body three-dimensional modeling method for individualized virtual fitting
CN107170037A (en)* | 2016-03-07 | 2017-09-15 | 深圳市鹰眼在线电子科技有限公司 | A real-time three-dimensional point cloud reconstruction method and system based on multiple cameras
CN107833253B (en)* | 2017-09-22 | 2020-08-04 | 北京航空航天大学青岛研究院 | A camera pose optimization method for RGBD 3D reconstruction texture generation
CN109544677B (en)* | 2018-10-30 | 2020-12-25 | 山东大学 | Indoor scene main structure reconstruction method and system based on depth image key frames
CN112348958B (en)* | 2020-11-18 | 2025-02-21 | 北京沃东天骏信息技术有限公司 | Key frame image acquisition method, device, system and three-dimensional reconstruction method

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
US20020081019A1 (en)* | 1995-07-28 | 2002-06-27 | Tatsushi Katayama | Image sensing and image processing apparatuses
US20070031064A1 (en)* | 2004-06-10 | 2007-02-08 | Wenyi Zhao | Method and apparatus for aligning video to three-dimensional point clouds
US20140002604A1 (en)* | 2011-03-31 | 2014-01-02 | Akio Ohba | Information processing apparatus, information processing method, and data structure of position information
CN104599314A (en)* | 2014-06-12 | 2015-05-06 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system
WO2015188684A1 (en)* | 2014-06-12 | 2015-12-17 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system
JP2017118396A (en)* | 2015-12-25 | 2017-06-29 | KDDI株式会社 | Program, apparatus and method for calculating internal parameters of depth camera
CN106910242A (en)* | 2017-01-23 | 2017-06-30 | 中国科学院自动化研究所 | Method and system for indoor full-scene three-dimensional reconstruction based on a depth camera
CN107888828A (en)* | 2017-11-22 | 2018-04-06 | 网易(杭州)网络有限公司 | Spatial localization method and device, electronic equipment and storage medium
CN108805979A (en)* | 2018-02-05 | 2018-11-13 | 清华-伯克利深圳学院筹备办公室 | Dynamic model three-dimensional reconstruction method, device, equipment and storage medium
CN109242961A (en)* | 2018-09-26 | 2019-01-18 | 北京旷视科技有限公司 | Face modeling method, apparatus, electronic equipment and computer-readable medium
CN111414796A (en)* | 2019-01-07 | 2020-07-14 | 福特全球技术公司 | Adaptive transparency of virtual vehicles in simulated imaging systems
CN109872395A (en)* | 2019-01-24 | 2019-06-11 | 中国医学科学院北京协和医院 | An X-ray image simulation method based on a patch model
CN111739146A (en)* | 2019-03-25 | 2020-10-02 | 华为技术有限公司 | Object 3D model reconstruction method and device
CN111080771A (en)* | 2020-03-20 | 2020-04-28 | 浙江华云电力工程设计咨询有限公司 | Information model construction method applied to three-dimensional intelligent aided design

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
信寄遥; 陈成军; 李东年: "Multi-view three-dimensional reconstruction of mechanical parts based on an RGB-D camera" (基于RGB-D相机的多视角机械零件三维重建), 计算技术与自动化 (Computing Technology and Automation), no. 03, 28 September 2020 (2020-09-28)*

Cited By (7)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
WO2022105415A1 (en)* | 2020-11-18 | 2022-05-27 | 北京沃东天骏信息技术有限公司 | Method, apparatus and system for acquiring key frame image, and three-dimensional reconstruction method
CN113542868A (en)* | 2021-05-26 | 2021-10-22 | 浙江大华技术股份有限公司 | Video key frame selection method and device, electronic equipment and storage medium
CN114267155A (en)* | 2021-11-05 | 2022-04-01 | 国能大渡河革什扎水电开发有限公司 | A geological hazard monitoring and early warning system based on video recognition technology
CN114549781A (en)* | 2022-02-21 | 2022-05-27 | 脸萌有限公司 | Data processing method and device, electronic equipment and storage medium
WO2023158376A3 (en)* | 2022-02-21 | 2023-10-12 | 脸萌有限公司 | Data processing method, apparatus, electronic device, and storage medium
EP4468249A4 (en)* | 2022-02-21 | 2025-05-21 | Lemon Inc | Data processing method, device, electronic device and storage medium
WO2024230463A1 (en)* | 2023-05-11 | 2024-11-14 | 北京沃东天骏信息技术有限公司 | Image processing method, apparatus and system

Also Published As

Publication Number | Publication Date
WO2022105415A1 (en) | 2022-05-27
CN112348958B (en) | 2025-02-21

Similar Documents

Publication | Title
CN112348958B (en) | Key frame image acquisition method, device, system and three-dimensional reconstruction method
JP6560480B2 (en) | Image processing system, image processing method, and program
CN109035334B (en) | Pose determining method and device, storage medium and electronic device
CN107155341B (en) | Three-dimensional scanning system and frame
CN107169924B (en) | Method and system for establishing three-dimensional panoramic image
JP5093053B2 (en) | Electronic camera
US20100194679A1 (en) | Gesture recognition system and method thereof
US11620730B2 (en) | Method for merging multiple images and post-processing of panorama
Santos et al. | 3D plant modeling: localization, mapping and segmentation for plant phenotyping using a single hand-held camera
JP2015201850A (en) | Parameter estimation method for imaging apparatus
WO2021136386A1 (en) | Data processing method, terminal, and server
CN111192308B (en) | Image processing method and device, electronic equipment and computer storage medium
JP2010287174A (en) | Furniture simulation method, apparatus, program, and recording medium
CN107194985A (en) | A three-dimensional visualization method and device for large scenes
KR102551713B1 (en) | Electronic apparatus and image processing method thereof
CN104700355A (en) | Generation method, device and system for indoor two-dimensional plan
CN108958469A (en) | A method for adding hyperlinks to a virtual world based on augmented reality
WO2021035627A1 (en) | Depth map acquisition method and device, and computer storage medium
CN113379899A (en) | Automatic extraction method for regional images of construction engineering working face
CN107543507A (en) | Method and device for determining a screen profile
CN116912331B (en) | Calibration data generation method and device, electronic equipment and storage medium
KR102146839B1 (en) | System and method for building real-time virtual reality
CN109064533B (en) | 3D roaming method and system
CN112634377B (en) | Camera calibration method, terminal and computer readable storage medium of sweeping robot
CN113034345B (en) | Face recognition method and system based on SFM reconstruction

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
