CN112348958B - Key frame image acquisition method, device, system and three-dimensional reconstruction method - Google Patents

Key frame image acquisition method, device, system and three-dimensional reconstruction method

Info

Publication number
CN112348958B
CN112348958B
Authority
CN
China
Prior art keywords
key frame
shooting position
target object
image
current shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011291713.XA
Other languages
Chinese (zh)
Other versions
CN112348958A (en)
Inventor
杜峰
严庆安
刘享军
郭复胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202011291713.XA
Publication of CN112348958A
Priority to PCT/CN2021/119860
Application granted
Publication of CN112348958B
Active
Anticipated expiration

Abstract

Translated from Chinese

The present disclosure relates to a key frame image acquisition method, device and system and a three-dimensional reconstruction method, and relates to the field of computer technology. The acquisition method includes: determining point cloud data of the target object according to multi-frame depth images of the target object acquired by an image acquisition device at the current shooting position and the internal parameters of the image acquisition device; establishing a three-dimensional model capable of surrounding the target object according to the position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and the external parameters at the current shooting position, where the surface of the three-dimensional model is divided into multiple grids, each grid corresponding to one shooting position; and determining the key frame image of the target object at each shooting position from the multi-frame depth images of that shooting position, according to the grid corresponding to each shooting position and the shooting positions and shooting directions of the multi-frame depth images acquired at each shooting position.

Description

Method, device and system for acquiring key frame image and three-dimensional reconstruction method
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method for acquiring a key frame image, a device for acquiring a key frame image, a system for acquiring a key frame image, a three-dimensional reconstruction method and device based on key frame images, and a non-volatile computer-readable storage medium.
Background
Three-dimensional reconstruction techniques are a research hotspot in the fields of computer graphics and computer vision. Before the advent of consumer-level depth cameras, the input data for three-dimensional reconstruction typically consisted only of RGB (Red Green Blue) images: a three-dimensional model of the object was recovered by a stereoscopic vision algorithm from a series of RGB images shot at different angles. These RGB images are the key frames on which three-dimensional reconstruction depends. In order to reconstruct the 3D model of the object accurately and completely, the angles covered by the key frames must be comprehensive and the key frames must not be too blurred.
With the advent of various consumer-level depth cameras, three-dimensional reconstruction techniques based on depth cameras have been rapidly developed. By means of the depth image acquired by the depth camera, the three-dimensional model of the object can be quickly restored. In order to restore the realism of the three-dimensional model, it is also necessary to restore the real texture of the object by texture mapping. Texture mapping relies on a series of keyframes covering the angles of the three-dimensional model.
Therefore, regardless of the three-dimensional reconstruction technique used, it is necessary to cover a key frame that is comprehensive in angle and sufficiently clear to obtain a high-quality three-dimensional model.
In the related art, key frame images at each angle can be selected by sampling frames over time according to the time interval and the degree of blur of each frame image, or a plurality of cameras can be erected around the target object to acquire key frame images at each angle.
Disclosure of Invention
The inventor of the present disclosure has found that the above related art has a problem in that a photographing angle covered by a key frame image is not comprehensive, resulting in a reduction in quality of key frame image acquisition, or in that a cost of acquiring a key frame image of a complete angle is excessively high.
In view of this, the disclosure proposes a key frame image acquisition technical scheme, which can acquire a key frame image of a complete shooting angle without increasing cost, thereby improving the acquisition quality of the key frame image.
According to some embodiments of the present disclosure, a method for acquiring a key frame image is provided, which includes determining point cloud data of a target object according to a multi-frame depth image of the target object acquired by an image acquisition device at a current photographing position and internal parameters of the image acquisition device, establishing a three-dimensional model capable of surrounding the target object according to position information of the point cloud data of the target object, the internal parameters of the image acquisition device and external parameters at the current photographing position, dividing a surface of the three-dimensional model into a plurality of grids, each grid corresponding to one photographing position, and determining a key frame image of the target object at the corresponding photographing position in the multi-frame depth image of each photographing position according to the grids corresponding to each photographing position, the photographing position and the photographing direction of the multi-frame depth image acquired at each photographing position.
In some embodiments, establishing the three-dimensional model capable of surrounding the target object comprises calculating a projection matrix of the three-dimensional model from a three-dimensional coordinate system to a two-dimensional coordinate system according to internal parameters of the image acquisition device, and establishing the three-dimensional model according to the projection matrix, position information of vertexes of each grid and external parameters of shooting positions corresponding to each grid by the image acquisition device.
In some embodiments, building a three-dimensional model capable of surrounding the target object includes performing a plane fit based on point cloud data of the target object, determining a plane in which the target object is located, and building the three-dimensional model based on a position of the target object on the plane.
In some embodiments, building a three-dimensional model based on the position of the target object on the planar model includes determining a center position based on the position of the target object on the planar model, and building a three-dimensional hemispherical model as a three-dimensional model on the planar model based on the center position.
In some embodiments, determining the key frame image of the target object at the corresponding shooting position in the multi-frame depth images of each shooting position comprises determining the key frame image of the current shooting position according to a first ray and a second ray of the multi-frame depth images of the current shooting position and the grid position corresponding to the current shooting position, wherein the second ray is a ray that takes the position of the image acquisition device in the world coordinate system as a starting point and points to the center point of the plane of the target object, and the first ray points in the shooting direction of the image acquisition device at the current shooting position.
In some embodiments, determining the key frame image of the current shooting position according to the first ray, the second ray and the grid position corresponding to the multi-frame depth image of the current shooting position comprises determining the frame depth image as a candidate image of the current shooting position under the condition that the first ray and the second ray of a certain frame depth image of the current shooting position pass through the grid corresponding to the current shooting position, and determining the key frame image of the current shooting position in each candidate image according to the definition degree of each candidate image.
In some embodiments, determining the key frame image of the current photographing position in each candidate image according to the sharpness of each candidate image includes calculating edge feature information of each candidate image using an edge detection operator, and determining the sharpness of each candidate image according to the statistical features of each edge feature information.
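The sharpness measure described above could be sketched as follows. The patent only specifies "an edge detection operator" and "statistical features"; the particular choice below (a 4-neighbour Laplacian response scored by its variance) is our illustrative assumption, not the patent's reference implementation:

```python
import numpy as np

def sharpness(image):
    """Score image sharpness as the variance of a 4-neighbour Laplacian
    edge response; blurrier images give weaker edges and lower variance.
    (Illustrative choice of operator and statistic, not from the patent.)"""
    im = image.astype(float)
    lap = (im[:-2, 1:-1] + im[2:, 1:-1] + im[1:-1, :-2] + im[1:-1, 2:]
           - 4.0 * im[1:-1, 1:-1])
    return float(lap.var())
```

The key frame of a shooting position would then be the candidate image with the highest score, kept only if the score exceeds the threshold.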
In some embodiments, determining the key frame image of the target object at the corresponding shooting location in the multi-frame depth image at each shooting location includes determining each corresponding candidate image as each key frame image when there is a candidate image with a sharpness greater than a threshold, determining the corresponding key frame image in the multi-frame depth image at the current shooting location with the next shooting location of the image acquisition device as the current shooting location, and re-acquiring the multi-frame depth image at the current shooting location until there is a candidate image with a sharpness greater than the threshold when there is no candidate image with a sharpness greater than the threshold.
In some embodiments, the position and shooting direction of the image acquisition device in the world coordinate system are calculated according to external parameters of the image acquisition device at the current shooting position.
In some embodiments, the acquisition method further comprises: in response to the image acquisition device acquiring multi-frame depth images at the current shooting position, performing first marking processing on the grid corresponding to the current shooting position on the three-dimensional model; and in response to determining the key frame image of the current shooting position, performing second marking processing on the grid corresponding to the current shooting position on the three-dimensional model, where the second marking processing marks that the key frame image of the current shooting position has been acquired.
In some embodiments, the acquisition method further comprises three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.
According to other embodiments of the present disclosure, a three-dimensional reconstruction method of a key frame image is provided, which includes acquiring a key frame image of a target object at each shooting position by using the method for acquiring a key frame image of any one of the embodiments, and performing three-dimensional reconstruction on the target object according to the key frame image of the target object at each shooting position.
According to still other embodiments of the present disclosure, there is provided a key frame image acquisition apparatus including a point cloud determining unit configured to determine point cloud data of a target object based on a multi-frame depth image of the target object acquired by an image acquiring device at a current photographing position and an internal parameter of the image acquiring device, a creating unit configured to create a three-dimensional model capable of surrounding the target object based on position information of the point cloud data of the target object, the internal parameter of the image acquiring device, and an external parameter at the current photographing position, a surface of the three-dimensional model being divided into a plurality of grids, each grid corresponding to one photographing position, and a key frame determining unit configured to determine a key frame image of the target object at the corresponding photographing position in the multi-frame depth image of each photographing position based on the grids corresponding to each photographing position, the photographing position of the multi-frame depth image acquired at each photographing position, and the photographing direction.
In some embodiments, the establishing unit calculates a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system according to the internal parameters of the image acquisition device, and establishes the three-dimensional model according to the projection matrix, the position information of the vertexes of each grid and the external parameters of the shooting positions of the image acquisition device corresponding to each grid.
In some embodiments, the establishing unit performs plane fitting according to the point cloud data of the target object, determines the plane where the target object is located, and establishes the three-dimensional model according to the position of the target object on the plane.
In some embodiments, the establishing unit determines the center position according to the position of the target object on the plane model, and establishes a three-dimensional hemispherical model on the plane model as a three-dimensional model according to the center position.
In some embodiments, the key frame determining unit determines the key frame image of the current shooting position according to a first ray and a second ray of the multi-frame depth images of the current shooting position and the grid position corresponding to the current shooting position, where the second ray is a ray that takes the position of the image acquisition device in the world coordinate system as a starting point and points to the center point of the plane where the three-dimensional model is located, and the first ray points in the shooting direction of the image acquisition device at the current shooting position.
In some embodiments, the key frame determining unit determines a certain frame depth image of the current shooting position as a candidate image of the current shooting position under the condition that the first ray and the second ray of the frame depth image pass through a grid corresponding to the current shooting position, and determines the key frame image of the current shooting position in each candidate image according to the definition degree of each candidate image.
In some embodiments, the key frame determining unit calculates edge feature information of each candidate image using an edge detection operator, and determines sharpness of each candidate image based on statistical features of each edge feature information.
In some embodiments, the key frame determining unit determines each corresponding candidate image as each key frame image when the candidate image with the definition being greater than the threshold value exists, determines each corresponding key frame image from among the multiple frame depth images at the current shooting position by taking the next shooting position of the image acquiring device as the current shooting position, and acquires the multiple frame depth images again at the current shooting position until the candidate image with the definition being greater than the threshold value exists when the candidate image with the definition being greater than the threshold value does not exist.
In some embodiments, the position and shooting direction of the image acquisition device in the world coordinate system are calculated according to external parameters of the image acquisition device at the current shooting position.
In some embodiments, the acquisition device further comprises a marking unit, configured to perform first marking processing on the grid corresponding to the current shooting position on the three-dimensional model in response to the image acquisition device acquiring multi-frame depth images at the current shooting position, and to perform second marking processing on that grid to identify that the key frame image of the current shooting position has been acquired.
In some embodiments, the acquisition device further comprises a reconstruction unit, configured to reconstruct the target object in three dimensions according to the keyframe images of the target object at each shooting position.
According to still other embodiments of the present disclosure, a three-dimensional reconstruction device for a key frame image is provided, which includes an acquisition unit configured to perform the method for acquiring a key frame image of any one of the above embodiments, and acquire a key frame image of a target object at each photographing position, and a reconstruction unit configured to perform three-dimensional reconstruction on the target object according to the key frame image of the target object at each photographing position.
According to still further embodiments of the present disclosure, there is provided an electronic device comprising a memory and a processor coupled to the memory, the processor being configured to perform the method of acquiring a key frame image or the method of three-dimensional reconstruction of a key frame image in any of the embodiments described above based on instructions stored in the memory.
According to still further embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of acquiring a key frame image or the method of three-dimensional reconstruction of a key frame image in any of the above embodiments.
According to still other embodiments of the present disclosure, there is provided a key frame image acquisition system including a key frame image acquisition device configured to perform the key frame image acquisition method in any one of the above embodiments, and an image acquisition device configured to acquire multiple frame depth images of a target object at different shooting positions.
In the above embodiment, a three-dimensional model capable of surrounding the target object is established, and the acquisition of the key frame image is guided according to the grids corresponding to the shooting positions on the surface of the three-dimensional model. Therefore, the acquisition direction of the key frame images can be synchronously guided when the object is scanned, so that the acquired key frames can completely cover all shooting angles, and the acquisition quality of the key frame images is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a flow chart of some embodiments of a method of acquisition of a key frame image of the present disclosure;
FIG. 2 illustrates a flow chart of some embodiments of step 130 of FIG. 1;
FIG. 3 illustrates a schematic diagram of some embodiments of a method of acquisition of a keyframe image of the present disclosure;
FIG. 4 illustrates a flow chart of some embodiments of a method of three-dimensional reconstruction of a keyframe image of the present disclosure;
FIG. 5 illustrates a block diagram of some embodiments of a key frame image acquisition device of the present disclosure;
FIG. 6 illustrates a block diagram of some embodiments of a three-dimensional reconstruction apparatus of a key frame image of the present disclosure;
FIG. 7 illustrates a block diagram of some embodiments of an electronic device of the present disclosure;
FIG. 8 illustrates a block diagram of further embodiments of an electronic device of the present disclosure;
fig. 9 illustrates a block diagram of some embodiments of an acquisition system of a key frame image of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but should be considered part of the authorization specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that like reference numerals and letters refer to like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
In view of the above technical problems, in an embodiment of the present disclosure, a user may hold a mobile phone or tablet computer carrying a depth camera and scan it around the target object in a full circle, collecting a certain number of depth images at each shooting position.
In this way, the collected depth image can cover all angles of the object, and a sufficiently clear depth image can be selected as a key frame image in a certain shooting angle range, so that data redundancy is reduced.
In addition, in the acquisition process, the key frame images can be synchronously determined from the depth images acquired from all shooting positions, so that the efficiency of key frame image acquisition is improved, visual feedback is also given to a user, the user can immediately know whether the corresponding shooting positions and angles have acquired the key frame images, and therefore the key frame images can cover the complete angle range.
In addition, the technical scheme of the present disclosure does not need to erect a camera array, and reduces hardware cost. For example, the technical solution of the present disclosure may be implemented by the following embodiments.
Fig. 1 illustrates a flow chart of some embodiments of a method of acquisition of a key frame image of the present disclosure.
As shown in FIG. 1, the method includes step 110, determining point cloud data of the target object; step 120, building a three-dimensional model surrounding the target object; and step 130, determining a key frame image.
In step 110, point cloud data of the target object is determined according to the multi-frame depth image of the target object acquired by the image acquisition device at the current shooting position and internal parameters of the image acquisition device. For example, multiple frames of depth images may be acquired at the current photographing position, or the orientation of the image acquisition device may be adjusted so that multiple frames of depth images are acquired from different photographing directions.
In some embodiments, an object to be scanned, i.e., a target object, may be placed on a suitable plane (e.g., a desktop, a ground, etc.), and a user holds a device such as a mobile phone or a tablet computer with a depth camera to align to the target object, so as to collect corresponding RGB-D (Red Green Blue-Deep) data, and calculate point cloud data required for three-dimensional reconstruction according to the collected RGB-D data.
For example, the image acquisition device is a depth camera, and the internal parameter matrix is:

$$K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$$

fx and fy are the focal lengths of the depth camera, and cx and cy are the principal point coordinates of the depth camera. The internal parameter matrix does not change as the depth camera moves.

If the depth value at a point (u, v) on a frame of the depth image is d, the point cloud coordinate corresponding to that point can be determined as:

$$x = \frac{(u - c_x)\,d}{f_x}, \qquad y = \frac{(v - c_y)\,d}{f_y}, \qquad z = d$$
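This back-projection step can be sketched in NumPy as follows (a minimal illustration of the standard pinhole model; the function and variable names are ours, not the patent's):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W) into an N x 3 point cloud
    using pinhole intrinsics fx, fy, cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx          # x = (u - cx) * d / fx
    y = (v - cy) * z / fy          # y = (v - cy) * d / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]      # drop pixels with no depth reading
```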
In step 120, a three-dimensional model capable of surrounding the target object is created based on the position information of the point cloud data of the target object, the internal parameters of the image acquisition apparatus, and the external parameters at the current photographing position. The surface of the three-dimensional model is divided into a plurality of grids, each corresponding to a photographing position.
In some embodiments, after obtaining the point cloud data of each frame of depth image at the current shooting position, the pose matrix (i.e., the external parameters of the camera) corresponding to each frame of depth image may be solved by an ICP (Iterative Closest Point) algorithm or the like. That is, the camera external parameters consist of a rotation matrix R and a translation column vector T:

$$\begin{pmatrix} R_{3\times 3} & T_{3\times 1} \\ 0 & 1 \end{pmatrix}$$

The camera external parameters are related to the shooting position.
In some embodiments, performing plane fitting according to point cloud data of the target object, determining a plane where the target object is located, and building a three-dimensional model according to the position of the target object on the plane. For example, the position of the center of a circle is determined according to the position of the target object on the plane model, and a three-dimensional hemispherical model is established on the plane model according to the position of the center of a circle to serve as a three-dimensional model.
For example, after acquiring enough point cloud data, an algorithm such as RANSAC (Random Sample Consensus) may be used to perform the plane fitting.
First, three points are randomly selected from the point cloud data, and a candidate plane is determined from them. Then the distance from every other point to the candidate plane is calculated; if the distance is smaller than a distance threshold, the corresponding point is considered to lie in the candidate plane and is marked as an inlier, otherwise it is marked as an outlier. If the number of inliers exceeds a count threshold, the candidate plane is saved as the plane model; if not, three points are selected again to determine a new candidate plane.
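The plane-fitting loop just described can be sketched as a simplified RANSAC (the iteration count, thresholds, and function names here are arbitrary illustrative choices, not values from the patent):

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.01, count_thresh=100, iters=200, seed=0):
    """Fit a plane to an N x 3 point cloud: repeatedly pick 3 points,
    form a candidate plane, and keep the candidate with the most inliers.
    Returns (unit normal, point on plane) or None if no plane has enough
    inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                          # degenerate (collinear) sample
            continue
        normal /= norm
        dists = np.abs((points - p1) @ normal)   # point-to-plane distances
        inliers = int((dists < dist_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best = inliers, (normal, p1)
    return best if best_inliers >= count_thresh else None
```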
For example, after a plane in which the target object is placed is obtained, the three-dimensional hemispherical model may be rendered to a corresponding location on the plane (e.g., efficiently rendered using openGL) so that the three-dimensional hemispherical model can enclose the target object therein.
In order to ensure that the rendered three-dimensional hemispherical model is consistent with a physical space (namely a world coordinate system), namely, the position of the three-dimensional hemispherical model in the world coordinate system is not changed along with the position of a camera, a three-dimensional hemispherical model can be established by using a real-time projection matrix and a gesture matrix of the current shooting position.
In some embodiments, a projection matrix of the three-dimensional model from a three-dimensional coordinate system to a two-dimensional coordinate system is calculated according to internal parameters of the image acquisition device, and the three-dimensional model is built according to the projection matrix, the position information of the vertexes of each grid and external parameters of the image acquisition device at shooting positions corresponding to each grid.
For example, according to the internal parameter matrix of the depth camera, the projection matrix P corresponding to the current shooting position is calculated; in the common OpenGL-style form this is:

$$P = \begin{pmatrix} \dfrac{2 f_x}{width} & 0 & 1 - \dfrac{2 c_x}{width} & 0 \\ 0 & \dfrac{2 f_y}{height} & \dfrac{2 c_y}{height} - 1 & 0 \\ 0 & 0 & -\dfrac{far + near}{far - near} & -\dfrac{2 \cdot far \cdot near}{far - near} \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

width and height are the width and height, respectively, of the depth image, and far and near are the current far and near clipping planes of the depth camera.
Multiple sets of vertices may be provided on the three-dimensional model surface, each set of vertices defining a mesh. When the coordinates of a certain set of vertices are M, gl_Position = P × V × M is rendered. According to the gl_Position values corresponding to each set of vertices, a three-dimensional model whose surface is divided into a plurality of grids can be established. The grids corresponding to different shooting positions serve as part of the three-dimensional model UI (User Interface); they can be used both as a screening basis for key frame images and to guide the acquisition of key frame images covering the complete angle range.
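The vertex transform above can be sketched as follows. The projection matrix is built in the common OpenGL-style mapping from pinhole intrinsics; the exact sign and axis conventions vary between renderers and are an assumption here:

```python
import numpy as np

def projection_from_intrinsics(fx, fy, cx, cy, width, height, near, far):
    """OpenGL-style 4x4 projection matrix built from pinhole intrinsics.
    One common convention (camera looks along -z); others differ in signs."""
    return np.array([
        [2 * fx / width, 0.0, 1.0 - 2 * cx / width, 0.0],
        [0.0, 2 * fy / height, 2 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near),
         -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def gl_position(P, V, vertex):
    """gl_Position = P * V * M for one grid vertex M (in homogeneous form)."""
    m = np.append(vertex, 1.0)
    return P @ V @ m
```

A vertex inside the view frustum yields a positive clip-space w and normalized device coordinates within [-1, 1], which is what makes the rendered hemisphere stable in the world coordinate system as the camera moves.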
In step 130, a key frame image of the target object at the corresponding photographing position is determined from the multiple frames of depth images at each photographing position according to the grid corresponding to each photographing position, the photographing positions and the photographing directions of the multiple frames of depth images acquired at each photographing position.
In some embodiments, a key frame image may be determined according to the embodiment in fig. 2.
Fig. 2 shows a flow chart of some embodiments of step 130 in fig. 1.
As shown in fig. 2, step 130 includes step 1310, determining a first ray and a second ray of the current shooting position; step 1320, determining the sharpness of each candidate image; step 1330, determining whether the sharpness is greater than a threshold; step 1340, acquiring depth images again; step 1350, determining the key frame image; and step 1360, moving the image acquisition device.
In step 1310, a first ray is determined from the shooting direction of the current shooting position with the position of the image acquisition device in the world coordinate system as a starting point, and a ray pointing to the center point of the three-dimensional model on the plane of the target object is determined as a second ray with the position of the image acquisition device in the world coordinate system as a starting point.
The first ray characterizes shooting direction information of the image acquisition equipment, and the second ray characterizes current position information of the image acquisition equipment, and can be used as a basis for screening key frame images.
In some embodiments, the position and shooting direction of the image acquisition device in the world coordinate system are calculated according to external parameters of the image acquisition device at the current shooting position.
In some embodiments, a key frame image of the current shooting position is determined according to a first ray, a second ray of a multi-frame depth image of the current shooting position and a grid position corresponding to the current shooting position. For example, candidate images for a key frame image may be determined by the embodiment in fig. 3.
Fig. 3 illustrates a schematic diagram of some embodiments of a method of acquisition of a key frame image of the present disclosure.
As shown in fig. 3, the target object is completely enclosed within the three-dimensional hemispherical model, with the first ray of the image acquisition device being l1 and the second ray being l2.
In some embodiments, the position pcamera of the depth camera in the world coordinate system and the camera orientation dcamera may be derived from the external parameters R and T.
The ray l1, taking the current position of the depth camera as its endpoint and the camera orientation dcamera as its direction, serves as the orientation ray; the ray l2, along the line connecting the depth camera and the sphere center po, serves as the position ray. Key frame images can then be screened according to the grid corresponding to the current shooting position, the camera orientation, and the direction of the line connecting the camera and the sphere center.
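A minimal sketch of recovering pcamera and dcamera from the extrinsics R and T, under the common convention x_cam = R·x_world + T with the camera looking down its local −z axis (the viewing-axis convention is our assumption; the patent does not spell out the formula):

```python
import numpy as np

def camera_pose(R, T):
    """World-space camera center and viewing direction from extrinsics R, T."""
    p_camera = -R.T @ T                           # camera center in world coordinates
    d_camera = R.T @ np.array([0.0, 0.0, -1.0])   # viewing direction in world coordinates
    return p_camera, d_camera

# With an identity rotation and T = (0, 0, 2), the camera sits at (0, 0, -2)
# and looks along the -z axis.
p, d = camera_pose(np.eye(3), np.array([0.0, 0.0, 2.0]))
```

The ray l1 is then (p, d), and the ray l2 is (p, po − p) for sphere center po.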
In some embodiments, a frame depth image of the current shooting position is determined as a candidate image of the current shooting position if both the first ray and the second ray of that frame pass through the grid corresponding to the current shooting position. For example, rays l1 and l2 in the figure both pass through the grid corresponding to the current shooting position, which indicates that both the position ray and the orientation ray of the current depth camera are aimed at the target object, so the corresponding depth image is determined as a candidate image.
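The candidate test can be sketched as a ray-hemisphere intersection followed by a grid-cell lookup. The azimuth/elevation binning of the hemisphere surface below is our assumption; the patent does not fix a particular grid layout:

```python
import numpy as np

def ray_cell(origin, direction, center, radius, n_azim=12, n_elev=4):
    """Return the (azimuth, elevation) grid cell hit by the ray, or None on a miss."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius ** 2)
    if disc < 0:
        return None                       # ray misses the hemisphere entirely
    t = -b - np.sqrt(disc)                # first intersection (camera is outside)
    if t < 0:
        return None
    hit = origin + t * d - center
    azim = np.arctan2(hit[1], hit[0]) % (2 * np.pi)
    elev = np.arcsin(np.clip(hit[2] / radius, -1.0, 1.0))
    if elev < 0:
        return None                       # below the supporting plane
    return (int(azim / (2 * np.pi) * n_azim),
            min(int(elev / (np.pi / 2) * n_elev), n_elev - 1))

def is_candidate(l1_origin, l1_dir, l2_origin, l2_dir, center, radius, cell):
    """A frame is a candidate only if both rays land in the current cell."""
    return (ray_cell(l1_origin, l1_dir, center, radius) == cell
            and ray_cell(l2_origin, l2_dir, center, radius) == cell)
```

In practice l1 and l2 share the camera position as origin, so the test amounts to checking that both the orientation ray and the camera-to-center ray pierce the same assigned grid cell.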
After the candidate images are determined, a key frame image of the current photographing position may be determined in each candidate image according to the sharpness of each candidate image through the remaining steps in fig. 2.
In step 1320, the sharpness of each candidate image is determined.
In some embodiments, edge feature information of each candidate image is calculated by using an edge detection operator (such as Laplace operator, sobel operator, canny operator, etc.), and the definition of each candidate image is determined according to the statistical feature of each edge feature information.
For example, a Laplace operator may be used for sharpness detection: the image is converted to grayscale, processed with the Laplace operator, and the variance of the Laplacian response is computed as the sharpness of the current image.
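The Laplacian-variance measure described above can be sketched directly in NumPy; the 3×3 convolution is written out so no imaging library is needed (with OpenCV the equivalent one-liner would be `cv2.Laplacian(gray, cv2.CV_64F).var()`). The test images are illustrative:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 3x3 Laplacian response of a 2-D grayscale image."""
    g = np.asarray(gray, dtype=np.float64)
    # kernel [[0,1,0],[1,-4,1],[0,1,0]] applied in 'valid' mode
    resp = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
            - 4.0 * g[1:-1, 1:-1])
    return float(resp.var())

flat = np.full((8, 8), 128.0)                        # uniform image: no edges
checker = np.indices((8, 8)).sum(axis=0) % 2 * 255   # high-frequency pattern
```

A textured (in-focus) image produces a large Laplacian variance, while a flat or blurred image produces a small one, which is what the threshold in step 1330 exploits.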
In step 1330, it is determined whether the sharpness of each candidate image is greater than a threshold. Step 1340 is performed in the absence of a candidate image having a sharpness greater than the threshold, and step 1350 is performed in the presence of a candidate image having a sharpness greater than the threshold.
In step 1340, a plurality of frames of depth images are acquired again at the current shooting position, and steps 1310 to 1330 are repeatedly performed until there are candidate images with sharpness greater than the threshold.
In step 1350, candidate images having a sharpness greater than the threshold are determined to be key frame images.
In step 1360, the image acquisition device is moved and multiple frames of depth images are acquired again (step 1340). Taking the next shooting position of the image acquisition device as the current shooting position, a corresponding key frame image is determined from the multiple frames of depth images at that position (steps 1310 to 1360 are repeated).
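The per-position loop formed by steps 1310 to 1360 can be sketched compactly as follows. The callables and the `max_rounds` cap are our own assumptions, not part of the patent:

```python
def select_key_frame(capture_frames, is_candidate, sharpness, threshold,
                     max_rounds=10):
    """Return a key frame for the current shooting position, or None."""
    for _ in range(max_rounds):
        for frame in capture_frames():        # step 1340: (re-)acquire depth frames
            if not is_candidate(frame):       # step 1310: two-ray grid test
                continue
            if sharpness(frame) > threshold:  # steps 1320/1330: sharpness check
                return frame                  # step 1350: key frame found
    return None                               # give up after max_rounds re-captures

# Illustrative stand-in frames: (label, sharpness score)
frames = [("blurry", 4.0), ("sharp", 42.0)]
key = select_key_frame(lambda: frames, lambda f: True, lambda f: f[1],
                       threshold=10.0)
```

The caller then moves the device (step 1360) and invokes the same loop for the next shooting position.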
In some embodiments, the three-dimensional model rendered around the target object may serve as a UI that guides key frame image acquisition.
In some embodiments, in response to an image acquisition device acquiring a multi-frame depth image at a current shooting location, a first marking process is performed on a corresponding grid of the current shooting location on a three-dimensional model. For example, when the depth camera is aimed at a certain shooting angle, a corresponding grid may be illuminated (e.g., changed in color, etc.) to indicate the coverage angle of the currently acquired key frame image.
In some embodiments, in response to determining the key frame image of the current shooting location, performing a second marking process on the three-dimensional model on the corresponding grid of the current shooting location for identifying that the key frame image of the current shooting location is completely acquired.
For example, if the sharpness is greater than the threshold, the image is saved as a key frame image and the color of its corresponding grid is changed, indicating that the key frame image for the current angle has been acquired; the camera can then be moved to the next shooting position to continue acquisition. When key frame images have been acquired at all angles, all grids have changed color, which means that key frame images covering the complete angle range have been acquired and scanning can be completed as required.
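The two-stage grid marking used as the acquisition UI can be sketched as a small state machine: a first mark when depth frames are being captured toward a cell, a second mark once that cell's key frame is saved, and a completion check once every cell carries the second mark. The state names are our own, not from the patent:

```python
class CoverageGrid:
    UNSEEN, CAPTURING, DONE = 0, 1, 2

    def __init__(self, n_cells):
        self.state = [self.UNSEEN] * n_cells

    def first_mark(self, cell):
        """Depth frames are being acquired toward this cell (e.g. light it up)."""
        if self.state[cell] == self.UNSEEN:
            self.state[cell] = self.CAPTURING

    def second_mark(self, cell):
        """The key frame for this cell has been saved (e.g. change its color)."""
        self.state[cell] = self.DONE

    def complete(self):
        """Scanning may finish once key frames cover the full angle range."""
        return all(s == self.DONE for s in self.state)
```

In a real UI, `first_mark`/`second_mark` would drive the rendered color of the corresponding hemisphere mesh.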
Fig. 4 illustrates a flow chart of some embodiments of a method of three-dimensional reconstruction of a key frame image of the present disclosure.
As shown in fig. 4, in step 410, the key frame image of the target object at each shooting position is acquired by using the method for acquiring the key frame image of any of the above embodiments.
In step 420, the target object is three-dimensionally reconstructed from the key frame images of the target object at each photographing position.
In this way, since the key frame image covering the complete angle range is acquired, the effect of three-dimensional reconstruction can be improved.
Fig. 5 illustrates a block diagram of some embodiments of a key frame image acquisition device of the present disclosure.
As shown in fig. 5, the acquisition device 5 of the key frame image includes a point cloud determination unit 51, a creation unit 52, and a key frame determination unit 53.
The point cloud determining unit 51 determines point cloud data of the target object based on the multi-frame depth image of the target object acquired by the image acquiring apparatus at the current photographing position and the internal parameters of the image acquiring apparatus.
The establishing unit 52 establishes a three-dimensional model capable of surrounding the target object based on the position information of the point cloud data of the target object, the internal parameters of the image acquisition apparatus, and the external parameters at the current photographing position. The surface of the three-dimensional model is divided into a plurality of grids, each corresponding to a photographing position.
The key frame determination unit 53 determines a key frame image of the target object at the corresponding photographing position from among the multiple frames of depth images at the respective photographing positions, based on the mesh corresponding to the respective photographing positions, the photographing positions of the multiple frames of depth images acquired at the respective photographing positions, and the photographing direction.
In some embodiments, the establishing unit 52 calculates a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system according to the internal parameters of the image acquiring device, and establishes the three-dimensional model according to the projection matrix, the position information of the vertexes of each grid and the external parameters of the shooting positions of the image acquiring device corresponding to each grid.
In some embodiments, the establishing unit 52 performs plane fitting according to the point cloud data of the target object, determines the plane in which the target object is located, and establishes the three-dimensional model according to the position of the target object on the plane.
In some embodiments, the establishing unit 52 determines the center position according to the position of the target object on the planar model, and establishes a three-dimensional hemispherical model on the planar model as a three-dimensional model according to the center position.
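The plane-fitting and hemisphere-placement steps handled by the establishing unit can be sketched as follows. The least-squares plane form z = ax + by + c, the centroid-based center, and the 10% radius margin are our assumptions; the patent only requires that the model enclose the target object:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an (N, 3) point cloud."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def hemisphere_for(points):
    """Center (on the fitted plane) and radius of a hemisphere enclosing the cloud."""
    a, b, c = fit_plane(points)
    cx, cy = points[:, :2].mean(axis=0)
    center = np.array([cx, cy, a * cx + b * cy + c])  # center lies on the plane
    radius = 1.1 * np.linalg.norm(points - center, axis=1).max()
    return center, radius
```

The resulting center and radius define the three-dimensional hemispherical model whose surface is then subdivided into the shooting-position grids.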
In some embodiments, the key frame determining unit 53 determines the key frame image of the current photographing position according to the first ray, the second ray, and the grid position corresponding to the current photographing position of the multi-frame depth image of the current photographing position. The second ray is a ray pointing to the center point of the three-dimensional model on the plane of the target object by taking the position of the image acquisition device in the world coordinate system as a starting point, and the first ray points to the shooting direction of the image acquisition device in the current shooting position.
In some embodiments, the key frame determining unit 53 determines a certain frame depth image of the current photographing position as a candidate image of the current photographing position if the first ray and the second ray of the frame depth image pass through the grid corresponding to the current photographing position, and determines a key frame image of the current photographing position in each candidate image according to the sharpness of each candidate image.
In some embodiments, the key frame determining unit 53 calculates edge feature information of each candidate image using an edge detection operator, and determines sharpness of each candidate image based on statistical features of each edge feature information.
In some embodiments, when there is a candidate image with sharpness greater than the threshold, the key frame determining unit 53 determines the corresponding candidate images as key frame images and, taking the next shooting position of the image acquisition device as the current shooting position, determines a corresponding key frame image from the multiple frames of depth images at that position; when there is no candidate image with sharpness greater than the threshold, it acquires multiple frames of depth images again at the current shooting position until such a candidate image exists.
In some embodiments, the position and shooting direction of the image acquisition device in the world coordinate system are calculated according to external parameters of the image acquisition device at the current shooting position.
In some embodiments, the acquisition device 5 further comprises a marking unit 54, configured to perform a first marking process on the three-dimensional model for the corresponding grid of the current shooting position in response to the image acquisition device acquiring the multi-frame depth image at the current shooting position, and perform a second marking process on the three-dimensional model for the corresponding grid of the current shooting position in response to determining the key frame image of the current shooting position, wherein the second marking process is used for identifying that the key frame image of the current shooting position is acquired.
In some embodiments, the acquisition device 5 further comprises a reconstruction unit 55 for performing three-dimensional reconstruction of the target object based on the key frame images of the target object at each photographing position.
Fig. 6 illustrates a block diagram of some embodiments of a three-dimensional reconstruction apparatus of a key frame image of the present disclosure.
As shown in fig. 6, the three-dimensional reconstruction device 6 for key frame images includes an acquisition unit 61, configured to perform the key frame image acquisition method of any of the above embodiments and acquire key frame images of the target object at each shooting position, and a reconstruction unit 62, configured to perform three-dimensional reconstruction of the target object based on the key frame images of the target object at each shooting position.
Fig. 7 illustrates a block diagram of some embodiments of an electronic device of the present disclosure.
As shown in fig. 7, the electronic device 7 of this embodiment includes a memory 71 and a processor 72 coupled to the memory 71, the processor 72 being configured to execute the acquisition method of the key frame image or the three-dimensional reconstruction method of the key frame image in any of the above embodiments based on instructions stored in the memory 71.
The memory 71 may include, for example, a system memory, a fixed nonvolatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, boot Loader, database, and other programs.
Fig. 8 shows a block diagram of further embodiments of the electronic device of the present disclosure.
As shown in fig. 8, the electronic device 8 of this embodiment includes a memory 810 and a processor 820 coupled to the memory 810, the processor 820 being configured to execute the acquisition method of the key frame image or the three-dimensional reconstruction method of the key frame image in any of the foregoing embodiments based on instructions stored in the memory 810.
Memory 810 may include, for example, system memory, fixed nonvolatile storage media, and the like. The system memory stores, for example, an operating system, application programs, boot Loader, and other programs.
The electronic device 8 may also include an input-output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, 850 and the memory 810 and the processor 820 may be connected by, for example, a bus 860. The input/output interface 830 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, a touch screen, a microphone, and a speaker. The network interface 840 provides a connection interface for various networking devices. Storage interface 850 provides a connection interface for external storage devices such as SD cards, U-discs, and the like.
Fig. 9 illustrates a block diagram of some embodiments of an acquisition system of a key frame image of the present disclosure.
As shown in fig. 9, the key frame image acquisition system 9 includes a key frame image acquisition device 91 for performing the key frame image acquisition method in any of the above embodiments, and an image acquisition apparatus 92 for acquiring multi-frame depth images of a target object at different photographing positions.
It will be appreciated by those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media including, but not limited to, disk storage, CD-ROM, optical storage, and the like, having computer-usable program code embodied therein.
Up to this point, the method of acquiring a key frame image, the apparatus of acquiring a key frame image, the system of acquiring a key frame image, the method of three-dimensional reconstruction of a key frame image, the apparatus of three-dimensional reconstruction of a key frame image, and the non-transitory computer readable storage medium according to the present disclosure have been described in detail. In order to avoid obscuring the concepts of the present disclosure, some details known in the art are not described. How to implement the solutions disclosed herein will be fully apparent to those skilled in the art from the above description.
The methods and systems of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (18)

Translated from Chinese

1. A method for acquiring key frame images, comprising:
determining point cloud data of a target object according to multiple frames of depth images of the target object acquired by an image acquisition device at a current shooting position and internal parameters of the image acquisition device;
establishing a three-dimensional model capable of surrounding the target object according to position information of the point cloud data of the target object, the internal parameters of the image acquisition device and external parameters at the current shooting position, wherein the surface of the three-dimensional model is divided into a plurality of grids, each grid corresponding to a shooting position; and
determining, according to the grids corresponding to the shooting positions and the shooting positions and shooting directions of the multiple frames of depth images acquired at the shooting positions, a key frame image of the target object at the corresponding shooting position from the multiple frames of depth images at each shooting position,
wherein determining the key frame image of the target object at the corresponding shooting position from the multiple frames of depth images comprises:
determining the key frame image of the current shooting position according to a first ray and a second ray of the multiple frames of depth images of the current shooting position and the grid position corresponding to the current shooting position, wherein the first ray points in the shooting direction of the image acquisition device at the current shooting position, the second ray takes the position of the image acquisition device in the world coordinate system as a starting point and points to the center point of the three-dimensional model on the plane where the target object is located, and both the first ray and the second ray of the key frame image are aimed at the target object.

2. The acquisition method according to claim 1, wherein establishing the three-dimensional model capable of surrounding the target object comprises:
calculating a projection matrix of the three-dimensional model from a three-dimensional coordinate system to a two-dimensional coordinate system according to the internal parameters of the image acquisition device; and
establishing the three-dimensional model according to the projection matrix, position information of the vertices of each grid, and the external parameters of the image acquisition device at the shooting position corresponding to each grid.

3. The acquisition method according to claim 1, wherein establishing the three-dimensional model capable of surrounding the target object comprises:
performing plane fitting according to the point cloud data of the target object to determine the plane where the target object is located; and
establishing the three-dimensional model according to the position of the target object on the plane.

4. The acquisition method according to claim 3, wherein establishing the three-dimensional model according to the position of the target object on the plane comprises:
determining a circle-center position according to the position of the target object on the plane; and
establishing, according to the circle-center position, a three-dimensional hemispherical model on the plane as the three-dimensional model.

5. The acquisition method according to claim 1, wherein determining the key frame image of the current shooting position according to the first ray and the second ray of the multiple frames of depth images of the current shooting position and the grid position corresponding to the current shooting position comprises:
determining a frame of depth image at the current shooting position as a candidate image of the current shooting position when both the first ray and the second ray of that frame pass through the grid corresponding to the current shooting position; and
determining the key frame image of the current shooting position among the candidate images according to the sharpness of each candidate image.

6. The acquisition method according to claim 5, wherein determining the key frame image of the current shooting position among the candidate images according to the sharpness of each candidate image comprises:
calculating edge feature information of each candidate image using an edge detection operator; and
determining the sharpness of each candidate image according to statistical features of the edge feature information.

7. The acquisition method according to claim 5, wherein determining the key frame image of the target object at the corresponding shooting position from the multiple frames of depth images comprises:
when there is a candidate image with sharpness greater than a threshold, determining the corresponding candidate images as key frame images, taking the next shooting position of the image acquisition device as the current shooting position, and determining the corresponding key frame image from the multiple frames of depth images at that current shooting position; and
when there is no candidate image with sharpness greater than the threshold, acquiring multiple frames of depth images again at the current shooting position until there is a candidate image with sharpness greater than the threshold.

8. The acquisition method according to claim 1, wherein the position and shooting direction of the image acquisition device in the world coordinate system are calculated according to the external parameters of the image acquisition device at the current shooting position.

9. The acquisition method according to any one of claims 1-8, further comprising:
in response to the image acquisition device acquiring multiple frames of depth images at the current shooting position, performing first marking processing on the grid corresponding to the current shooting position on the three-dimensional model; and
in response to the key frame image of the current shooting position being determined, performing second marking processing on the grid corresponding to the current shooting position on the three-dimensional model, for identifying that acquisition of the key frame image of the current shooting position is complete.

10. The acquisition method according to any one of claims 1-8, further comprising:
performing three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.

11. A three-dimensional reconstruction method based on key frame images, comprising:
acquiring key frame images of a target object at each shooting position using the key frame image acquisition method according to any one of claims 1-9; and
performing three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.

12. A key frame image acquisition apparatus, comprising:
a point cloud determination unit, configured to determine point cloud data of a target object according to multiple frames of depth images of the target object acquired by an image acquisition device at a current shooting position and internal parameters of the image acquisition device;
an establishing unit, configured to establish a three-dimensional model capable of surrounding the target object according to position information of the point cloud data of the target object, the internal parameters of the image acquisition device and external parameters at the current shooting position, wherein the surface of the three-dimensional model is divided into a plurality of grids, each grid corresponding to a shooting position; and
a key frame determination unit, configured to determine, according to the grids corresponding to the shooting positions and the shooting positions and shooting directions of the multiple frames of depth images acquired at the shooting positions, a key frame image of the target object at the corresponding shooting position from the multiple frames of depth images at each shooting position,
wherein the key frame determination unit determines the key frame image of the current shooting position according to a first ray and a second ray of the multiple frames of depth images of the current shooting position and the grid position corresponding to the current shooting position, the first ray points in the shooting direction of the image acquisition device at the current shooting position, the second ray takes the position of the image acquisition device in the world coordinate system as a starting point and points to the center point of the three-dimensional model on the plane where the target object is located, and both the first ray and the second ray of the key frame image are aimed at the target object.

13. The acquisition apparatus according to claim 12, further comprising:
a marking unit, configured to perform first marking processing on the grid corresponding to the current shooting position on the three-dimensional model in response to the image acquisition device acquiring multiple frames of depth images at the current shooting position, and to perform second marking processing on the grid corresponding to the current shooting position on the three-dimensional model in response to the key frame image of the current shooting position being determined, for identifying that acquisition of the key frame image of the current shooting position is complete.

14. The acquisition apparatus according to claim 12, further comprising:
a reconstruction unit, configured to perform three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.

15. A three-dimensional reconstruction apparatus based on key frame images, comprising:
an acquisition unit, configured to perform the key frame image acquisition method according to any one of claims 1-9 and acquire key frame images of a target object at each shooting position; and
a reconstruction unit, configured to perform three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.

16. An electronic device, comprising:
a memory; and
a processor coupled to the memory, the processor being configured to execute, based on instructions stored in the memory, the key frame image acquisition method according to any one of claims 1-10 or the three-dimensional reconstruction method according to claim 11.

17. A key frame image acquisition system, comprising:
a key frame image acquisition apparatus, configured to perform the key frame image acquisition method according to any one of claims 1-10; and
an image acquisition device, configured to acquire multiple frames of depth images of a target object at different shooting positions.

18. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the key frame image acquisition method according to any one of claims 1-10 or the three-dimensional reconstruction method according to claim 11.
CN202011291713.XA | 2020-11-18 | 2020-11-18 | Key frame image acquisition method, device, system and three-dimensional reconstruction method | Active | CN112348958B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202011291713.XA CN112348958B (en) | 2020-11-18 | 2020-11-18 | Key frame image acquisition method, device, system and three-dimensional reconstruction method
PCT/CN2021/119860 WO2022105415A1 (en) | 2020-11-18 | 2021-09-23 | Method, apparatus and system for acquiring key frame image, and three-dimensional reconstruction method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011291713.XA CN112348958B (en) | 2020-11-18 | 2020-11-18 | Key frame image acquisition method, device, system and three-dimensional reconstruction method

Publications (2)

Publication Number | Publication Date
CN112348958A (en) | 2021-02-09
CN112348958B (en) | 2025-02-21

Family

ID=74364181

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011291713.XA (Active) CN112348958B (en) | Key frame image acquisition method, device, system and three-dimensional reconstruction method | 2020-11-18 | 2020-11-18

Country Status (2)

Country | Link
CN (1) | CN112348958B (en)
WO (1) | WO2022105415A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112348958B (en) * | 2020-11-18 | 2025-02-21 | 北京沃东天骏信息技术有限公司 | Key frame image acquisition method, device, system and three-dimensional reconstruction method
CN113542868A (en) * | 2021-05-26 | 2021-10-22 | 浙江大华技术股份有限公司 | Video key frame selection method and device, electronic equipment and storage medium
CN114267155A (en) * | 2021-11-05 | 2022-04-01 | 国能大渡河革什扎水电开发有限公司 | A Geological Hazard Monitoring and Early Warning System Based on Video Recognition Technology
CN114549781A (en) * | 2022-02-21 | 2022-05-27 | 脸萌有限公司 | Data processing method and device, electronic equipment and storage medium
CN115289974B (en) * | 2022-10-09 | 2023-01-31 | 思看科技(杭州)股份有限公司 | Hole site measuring method, hole site measuring device, computer equipment and storage medium
CN115713507B (en) * | 2022-11-16 | 2024-10-22 | 华中科技大学 | Digital twinning-based concrete 3D printing forming quality detection method and device
CN116626040A (en) * | 2022-12-07 | 2023-08-22 | 浙江众合科技股份有限公司 | A Lesion Detection Method for Tracks and Tunnels
CN116668662A (en) * | 2023-05-11 | 2023-08-29 | 北京沃东天骏信息技术有限公司 | Image processing method, device and system
CN118967954B (en) * | 2024-10-17 | 2025-01-07 | 福州市规划设计研究院集团有限公司 | Urban space three-dimensional reconstruction method and system based on big data

Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104599314A (en) * | 2014-06-12 | 2015-05-06 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6640004B2 (en) * | 1995-07-28 | 2003-10-28 | Canon Kabushiki Kaisha | Image sensing and image processing apparatuses
US7187809B2 (en) * | 2004-06-10 | 2007-03-06 | Sarnoff Corporation | Method and apparatus for aligning video to three-dimensional point clouds
JP5781353B2 (en) * | 2011-03-31 | 2015-09-24 | 株式会社ソニー・コンピュータエンタテインメント | Information processing apparatus, information processing method, and data structure of position information
CN104992441B (en) * | 2015-07-08 | 2017-11-17 | 华中科技大学 | A kind of real human body three-dimensional modeling method towards individualized virtual fitting
JP6465789B2 (en) * | 2015-12-25 | 2019-02-06 | Kddi株式会社 | Program, apparatus and method for calculating internal parameters of depth camera
CN107170037A (en) * | 2016-03-07 | 2017-09-15 | 深圳市鹰眼在线电子科技有限公司 | A kind of real-time three-dimensional point cloud method for reconstructing and system based on multiple-camera
CN106910242B (en) * | 2017-01-23 | 2020-02-28 | 中国科学院自动化研究所 | Method and system for 3D reconstruction of indoor complete scene based on depth camera
CN107833253B (en) * | 2017-09-22 | 2020-08-04 | 北京航空航天大学青岛研究院 | A camera pose optimization method for RGBD 3D reconstruction texture generation
CN107888828B (en) * | 2017-11-22 | 2020-02-21 | 杭州易现先进科技有限公司 | Space positioning method and device, electronic device, and storage medium
CN108805979B (en) * | 2018-02-05 | 2021-06-29 | 清华-伯克利深圳学院筹备办公室 | A kind of dynamic model three-dimensional reconstruction method, apparatus, equipment and storage medium
CN109242961B (en) * | 2018-09-26 | 2021-08-10 | 北京旷视科技有限公司 | Face modeling method and device, electronic equipment and computer readable medium
CN109544677B (en) * | 2018-10-30 | 2020-12-25 | 山东大学 | Indoor scene main structure reconstruction method and system based on depth image key frame
US10896335B2 (en) * | 2019-01-07 | 2021-01-19 | Ford Global Technologies, LLC | Adaptive transparency of virtual vehicle in simulated imaging system
CN109872395B (en) * | 2019-01-24 | 2023-06-02 | 中国医学科学院北京协和医院 | X-ray image simulation method based on patch model
CN111739146B (en) * | 2019-03-25 | 2024-07-30 | 华为技术有限公司 | Method and device for reconstructing three-dimensional model of object
CN111080771B (en) * | 2020-03-20 | 2023-10-20 | 浙江华云电力工程设计咨询有限公司 | Information model construction method applied to three-dimensional intelligent aided design
CN112348958B (en) * | 2020-11-18 | 2025-02-21 | 北京沃东天骏信息技术有限公司 | Key frame image acquisition method, device, system and three-dimensional reconstruction method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104599314A (en) * | 2014-06-12 | 2015-05-06 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system

Also Published As

Publication number | Publication date
CN112348958A (en) | 2021-02-09
WO2022105415A1 (en) | 2022-05-27

Similar Documents

Publication | Publication Date | Title
CN112348958B (en) Key frame image acquisition method, device, system and three-dimensional reconstruction method
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN104574311B (en) Image processing method and device
WO2020001168A1 (en) Three-dimensional reconstruction method, apparatus, and device, and storage medium
CN107155341B (en) Three-dimensional scanning system and frame
CN109035334B (en) Pose determining method and device, storage medium and electronic device
US11620730B2 (en) Method for merging multiple images and post-processing of panorama
CN111598993A (en) Three-dimensional data reconstruction method and device based on multi-view imaging technology
Meuleman et al. Real-time sphere sweeping stereo from multiview fisheye images
WO2021136386A1 (en) Data processing method, terminal, and server
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
CN111192308B (en) Image processing method and device, electronic equipment and computer storage medium
CN113379899B (en) Automatic extraction method for building engineering working face area image
WO2021173004A1 (en) Computer-generated image processing including volumetric scene reconstruction
WO2021035627A1 (en) Depth map acquisition method and device, and computer storage medium
CN108958469A (en) A method of hyperlink is increased in virtual world based on augmented reality
CN107543507A (en) The determination method and device of screen profile
CN116912331B (en) Calibration data generation method and device, electronic equipment and storage medium
CN109064533B (en) 3D roaming method and system
CN113034345B (en) Face recognition method and system based on SFM reconstruction
CN116342831A (en) Three-dimensional scene reconstruction method, three-dimensional scene reconstruction device, computer equipment and storage medium
US20230107740A1 (en) Methods and systems for automated three-dimensional object detection and extraction
CN117173375A (en) An augmented reality presentation method of human skeleton images based on biomedical features
US11120606B1 (en) Systems and methods for image texture uniformization for multiview object capture
JP7265825B2 (en) Generation device, generation method and program

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
