Disclosure of Invention
The inventor of the present disclosure has found that the above related art has a problem in that the photographing angles covered by key frame images are not comprehensive, which reduces the quality of key frame image acquisition, or in that the cost of acquiring key frame images over the complete range of angles is excessively high.
In view of this, the disclosure proposes a key frame image acquisition technical scheme, which can acquire key frame images covering the complete range of shooting angles without increasing cost, thereby improving the acquisition quality of the key frame images.
According to some embodiments of the present disclosure, a method for acquiring a key frame image is provided, which includes: determining point cloud data of a target object according to multiple frames of depth images of the target object acquired by an image acquisition device at a current photographing position and internal parameters of the image acquisition device; establishing a three-dimensional model capable of surrounding the target object according to position information of the point cloud data of the target object, the internal parameters of the image acquisition device and external parameters at the current photographing position; dividing a surface of the three-dimensional model into a plurality of grids, each grid corresponding to one photographing position; and determining, in the multiple frames of depth images at each photographing position, a key frame image of the target object at the corresponding photographing position according to the grid corresponding to each photographing position and the photographing positions and photographing directions of the multiple frames of depth images acquired at each photographing position.
In some embodiments, establishing the three-dimensional model capable of surrounding the target object comprises calculating a projection matrix of the three-dimensional model from a three-dimensional coordinate system to a two-dimensional coordinate system according to the internal parameters of the image acquisition device, and establishing the three-dimensional model according to the projection matrix, position information of the vertices of each grid, and the external parameters of the image acquisition device at the shooting position corresponding to each grid.
In some embodiments, building a three-dimensional model capable of surrounding the target object includes performing a plane fit based on point cloud data of the target object, determining a plane in which the target object is located, and building the three-dimensional model based on a position of the target object on the plane.
In some embodiments, building the three-dimensional model based on the position of the target object on the plane model includes determining a circle-center position based on the position of the target object on the plane model, and building a three-dimensional hemispherical model on the plane model at that circle-center position as the three-dimensional model.
In some embodiments, determining the key frame image of the target object at the corresponding shooting position in the multiple frames of depth images at each shooting position comprises determining the key frame image of the current shooting position according to a first ray and a second ray of the multiple frames of depth images at the current shooting position and the grid position corresponding to the current shooting position, wherein the second ray is a ray that takes the position of the image acquisition device in the world coordinate system as its starting point and points to the center point of the three-dimensional model on the plane where the target object is located, and the first ray points in the shooting direction of the image acquisition device at the current shooting position.
In some embodiments, determining the key frame image of the current shooting position according to the first ray, the second ray and the grid position corresponding to the current shooting position comprises determining a frame of depth image at the current shooting position as a candidate image of the current shooting position when both the first ray and the second ray of that frame pass through the grid corresponding to the current shooting position, and determining the key frame image of the current shooting position from the candidate images according to the sharpness of each candidate image.
In some embodiments, determining the key frame image of the current photographing position from the candidate images according to the sharpness of each candidate image includes calculating edge feature information of each candidate image using an edge detection operator, and determining the sharpness of each candidate image according to the statistical features of the edge feature information.
In some embodiments, determining the key frame image of the target object at the corresponding shooting location in the multiple frames of depth images at each shooting location includes: when there is a candidate image with a sharpness greater than a threshold, determining each such candidate image as a key frame image, and taking the next shooting location of the image acquisition device as the current shooting location to determine the corresponding key frame image in the multiple frames of depth images at that location; and when there is no candidate image with a sharpness greater than the threshold, re-acquiring multiple frames of depth images at the current shooting location until a candidate image with a sharpness greater than the threshold exists.
In some embodiments, the position and shooting direction of the image acquisition device in the world coordinate system are calculated according to external parameters of the image acquisition device at the current shooting position.
In some embodiments, the acquisition method further comprises: in response to the image acquisition device acquiring multiple frames of depth images at the current shooting position, performing first marking processing on the grid corresponding to the current shooting position on the three-dimensional model; and in response to determining the key frame image of the current shooting position, performing second marking processing on the grid corresponding to the current shooting position on the three-dimensional model, wherein the second marking processing is used for marking that the key frame image of the current shooting position has been acquired.
In some embodiments, the acquisition method further comprises three-dimensional reconstruction of the target object according to the key frame images of the target object at each shooting position.
According to other embodiments of the present disclosure, a three-dimensional reconstruction method of a key frame image is provided, which includes acquiring a key frame image of a target object at each shooting position by using the method for acquiring a key frame image of any one of the embodiments, and performing three-dimensional reconstruction on the target object according to the key frame image of the target object at each shooting position.
According to still other embodiments of the present disclosure, there is provided a key frame image acquisition apparatus including: a point cloud determining unit configured to determine point cloud data of a target object based on multiple frames of depth images of the target object acquired by an image acquisition device at a current photographing position and internal parameters of the image acquisition device; an establishing unit configured to establish a three-dimensional model capable of surrounding the target object based on position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and external parameters at the current photographing position, a surface of the three-dimensional model being divided into a plurality of grids, each grid corresponding to one photographing position; and a key frame determining unit configured to determine a key frame image of the target object at the corresponding photographing position in the multiple frames of depth images at each photographing position based on the grid corresponding to each photographing position and the photographing positions and photographing directions of the multiple frames of depth images acquired at each photographing position.
In some embodiments, the establishing unit calculates a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system according to the internal parameters of the image acquisition device, and establishes the three-dimensional model according to the projection matrix, the position information of the vertices of each grid, and the external parameters of the image acquisition device at the shooting position corresponding to each grid.
In some embodiments, the establishing unit performs plane fitting according to the point cloud data of the target object, determines the plane where the target object is located, and establishes the three-dimensional model according to the position of the target object on the plane.
In some embodiments, the establishing unit determines the center position according to the position of the target object on the plane model, and establishes a three-dimensional hemispherical model on the plane model as a three-dimensional model according to the center position.
In some embodiments, the key frame determining unit determines the key frame image of the current shooting position according to a first ray and a second ray of the multiple frames of depth images at the current shooting position and the grid position corresponding to the current shooting position, where the second ray is a ray that takes the position of the image acquisition device in the world coordinate system as its starting point and points to the center point of the three-dimensional model on the plane where the target object is located, and the first ray points in the shooting direction of the image acquisition device at the current shooting position.
In some embodiments, the key frame determining unit determines a frame of depth image at the current shooting position as a candidate image of the current shooting position when both the first ray and the second ray of that frame pass through the grid corresponding to the current shooting position, and determines the key frame image of the current shooting position from the candidate images according to the sharpness of each candidate image.
In some embodiments, the key frame determining unit calculates edge feature information of each candidate image using an edge detection operator, and determines sharpness of each candidate image based on statistical features of each edge feature information.
In some embodiments, the key frame determining unit determines each candidate image whose sharpness is greater than the threshold as a key frame image and takes the next shooting position of the image acquisition device as the current shooting position to determine the corresponding key frame image in the multiple frames of depth images at that position; when no candidate image with a sharpness greater than the threshold exists, the unit acquires multiple frames of depth images again at the current shooting position until such a candidate image exists.
In some embodiments, the position and shooting direction of the image acquisition device in the world coordinate system are calculated according to external parameters of the image acquisition device at the current shooting position.
In some embodiments, the acquisition device further comprises a marking unit configured to: in response to the image acquisition device acquiring multiple frames of depth images at the current shooting position, perform first marking processing on the grid corresponding to the current shooting position on the three-dimensional model; and in response to determining the key frame image of the current shooting position, perform second marking processing on that grid to identify that the key frame image of the current shooting position has been acquired.
In some embodiments, the acquisition device further comprises a reconstruction unit, configured to reconstruct the target object in three dimensions according to the keyframe images of the target object at each shooting position.
According to still other embodiments of the present disclosure, a three-dimensional reconstruction device for a key frame image is provided, which includes an acquisition unit configured to perform the method for acquiring a key frame image of any one of the above embodiments, and acquire a key frame image of a target object at each photographing position, and a reconstruction unit configured to perform three-dimensional reconstruction on the target object according to the key frame image of the target object at each photographing position.
According to still further embodiments of the present disclosure, there is provided an electronic device comprising a memory, and a processor coupled to the memory, the processor being configured to perform the method for acquiring a key frame image or the method for three-dimensional reconstruction of a key frame image in any of the embodiments described above based on instructions stored in the memory.
According to still further embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of acquiring a key frame image or the method of three-dimensional reconstruction of a key frame image in any of the above embodiments.
According to still other embodiments of the present disclosure, there is provided a key frame image acquisition system including a key frame image acquisition device configured to perform the key frame image acquisition method in any one of the above embodiments, and an image acquisition device configured to acquire multiple frame depth images of a target object at different shooting positions.
In the above embodiment, a three-dimensional model capable of surrounding the target object is established, and the acquisition of the key frame image is guided according to the grids corresponding to the shooting positions on the surface of the three-dimensional model. Therefore, the acquisition direction of the key frame images can be synchronously guided when the object is scanned, so that the acquired key frames can completely cover all shooting angles, and the acquisition quality of the key frame images is improved.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
Meanwhile, it should be understood that, for convenience of description, the sizes of the respective parts shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but should be considered part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that like reference numerals and letters refer to like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
In view of the above technical problems, in an embodiment of the present disclosure, a user may hold a mobile phone or tablet computer equipped with a depth camera and scan it around the target object in a full circle, collecting a certain number of depth images at each shooting position.
In this way, the collected depth image can cover all angles of the object, and a sufficiently clear depth image can be selected as a key frame image in a certain shooting angle range, so that data redundancy is reduced.
In addition, during the acquisition process, the key frame images can be determined synchronously from the depth images acquired at each shooting position. This improves the efficiency of key frame image acquisition and also gives visual feedback to the user, who can immediately know whether a key frame image has been acquired at the corresponding shooting position and angle, so that the key frame images can cover the complete angle range.
In addition, the technical scheme of the present disclosure does not need to erect a camera array, and reduces hardware cost. For example, the technical solution of the present disclosure may be implemented by the following embodiments.
Fig. 1 illustrates a flow chart of some embodiments of a method of acquisition of a key frame image of the present disclosure.
As shown in FIG. 1, the method includes step 110, determining point cloud data of the target object, step 120, building a three-dimensional model capable of surrounding the target object, and step 130, determining a key frame image.
In step 110, point cloud data of the target object is determined according to the multi-frame depth image of the target object acquired by the image acquisition device at the current shooting position and internal parameters of the image acquisition device. For example, multiple frames of depth images may be acquired at the current photographing position, or the orientation of the image acquisition device may be adjusted so that multiple frames of depth images are acquired from different photographing directions.
In some embodiments, an object to be scanned, i.e., the target object, may be placed on a suitable plane (e.g., a desktop, the ground, etc.), and a user holds a device such as a mobile phone or tablet computer with a depth camera and aims it at the target object, so as to collect the corresponding RGB-D (Red Green Blue-Depth) data and calculate the point cloud data required for three-dimensional reconstruction from the collected RGB-D data.
For example, the image acquisition device is a depth camera, and the internal parameter matrix is:

K = | fx  0   cx |
    | 0   fy  cy |
    | 0   0   1  |

where fx and fy are the focal lengths of the depth camera, and cx and cy are the principal point coordinates of the depth camera. The internal parameter matrix does not change as the depth camera moves.
If the depth value at a point (u, v) on a certain frame of depth image is d, the point cloud coordinate (x, y, z) corresponding to that point can be determined as:

x = (u - cx) · d / fx
y = (v - cy) · d / fy
z = d
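The back-projection above can be sketched in a few lines (a minimal NumPy illustration; the function name and the dense H × W output layout are choices made here for the example, not part of the disclosure):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W array of depth values d) into
    camera-space points using the pinhole model above."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates (u, v)
    z = depth
    x = (u - cx) * z / fx  # x = (u - cx) * d / fx
    y = (v - cy) * z / fy  # y = (v - cy) * d / fy
    return np.stack([x, y, z], axis=-1)  # H x W x 3 point map
```

In practice only pixels with a valid depth (z > 0) would be kept, and the per-frame points merged into the object's point cloud.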
In step 120, a three-dimensional model capable of surrounding the target object is created based on the position information of the point cloud data of the target object, the internal parameters of the image acquisition apparatus, and the external parameters at the current photographing position. The surface of the three-dimensional model is divided into a plurality of grids, each corresponding to a photographing position.
In some embodiments, after the point cloud data of each frame of depth image at the current shooting position is obtained, the pose matrix (i.e., the external parameters of the camera) corresponding to each frame of depth image may be solved by the ICP (Iterative Closest Point) algorithm or the like:

E = | R  T |
    | 0  1 |

That is, the camera external parameters consist of a rotation matrix R and a translation column vector T. The camera external parameters vary with the shooting position.
In some embodiments, plane fitting is performed according to the point cloud data of the target object to determine the plane where the target object is located, and the three-dimensional model is built according to the position of the target object on the plane. For example, a circle-center position is determined according to the position of the target object on the plane model, and a three-dimensional hemispherical model is established on the plane model at that circle-center position to serve as the three-dimensional model.
For example, after enough point cloud data has been acquired, an algorithm such as RANSAC (RANdom SAmple Consensus) may be used to perform the plane fitting process.
First, three points can be randomly selected from the point cloud data, and a candidate plane is determined from them. Then, the distance from every other point to the candidate plane is calculated: if the distance is smaller than a distance threshold, the corresponding point is considered to lie in the candidate plane and is marked as an inlier; otherwise it is marked as an outlier. If the number of inliers exceeds a count threshold, the candidate plane is saved as the plane model; otherwise three points are selected again to determine a new candidate plane.
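The RANSAC loop described above can be sketched as follows (an illustrative implementation; the default threshold values, iteration count and function name are assumptions made for the example):

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.01, min_inliers=1000, iters=200, rng=None):
    """Fit a plane to an N x 3 point cloud by the RANSAC scheme above.
    Returns (normal, d) with normal . p + d = 0, or None if no candidate
    plane gathers at least `min_inliers` inliers."""
    rng = np.random.default_rng(rng)
    best, best_count = None, 0
    for _ in range(iters):
        # 1. randomly pick three points and build a candidate plane
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, retry
            continue
        normal /= norm
        d = -normal @ p0
        # 2. mark points within the distance threshold as inliers
        dist = np.abs(points @ normal + d)
        count = int((dist < dist_thresh).sum())
        # 3. keep the candidate plane with the most inliers
        if count >= min_inliers and count > best_count:
            best, best_count = (normal, d), count
    return best
```

A real implementation would typically also refit the plane to all inliers by least squares before saving it as the plane model.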
For example, after the plane on which the target object is placed is obtained, the three-dimensional hemispherical model may be rendered at the corresponding location on the plane (e.g., rendered efficiently using OpenGL) so that the three-dimensional hemispherical model encloses the target object.
In order to ensure that the rendered three-dimensional hemispherical model is consistent with the physical space (i.e., the world coordinate system), that is, that the position of the three-dimensional hemispherical model in the world coordinate system does not change as the camera moves, the three-dimensional hemispherical model can be established using the real-time projection matrix and pose matrix of the current shooting position.
In some embodiments, a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system is calculated according to the internal parameters of the image acquisition device, and the three-dimensional model is built according to the projection matrix, the position information of the vertices of each grid, and the external parameters of the image acquisition device at the shooting position corresponding to each grid.
For example, according to the internal parameter matrix of the depth camera, the projection matrix P corresponding to the current shooting position is calculated (one common OpenGL-style form):

P = | 2fx/width  0           1 - 2cx/width               0                        |
    | 0          2fy/height  2cy/height - 1              0                        |
    | 0          0           -(far + near)/(far - near)  -2·far·near/(far - near) |
    | 0          0           -1                          0                        |

where width and height are the width and height of the depth image, far is the current far plane of the depth camera, and near is the current near plane.
Multiple groups of vertices may be provided on the surface of the three-dimensional model, each group of vertices defining one grid. When the coordinates of a certain group of vertices are M, the clip-space position gl_Position = P × V × M is rendered, where V is the pose matrix (external parameters) of the current shooting position. According to the gl_Position values corresponding to each group of vertices, a three-dimensional model whose surface is divided into a plurality of grids can be established. The grids corresponding to different shooting positions serve as part of the three-dimensional model UI (User Interface): they can be used both as the screening basis for key frame images and to guide the acquisition of key frame images covering the complete angle range.
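As an illustration, the projection matrix and the clip-space transform may be assembled as below (a sketch assuming one common OpenGL sign convention; the exact form, and the function name `gl_projection`, depend on the rendering setup and are not fixed by the disclosure):

```python
import numpy as np

def gl_projection(fx, fy, cx, cy, width, height, near, far):
    """OpenGL-style projection matrix built from pinhole intrinsics
    (one common sign convention; adjust for your renderer)."""
    return np.array([
        [2 * fx / width, 0.0, 1.0 - 2 * cx / width, 0.0],
        [0.0, 2 * fy / height, 2 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# For one homogeneous grid vertex M (4-vector) and the 4 x 4 pose matrix V
# of the current shooting position, the rendered clip-space position is:
#   clip = P @ V @ M        # i.e. gl_Position = P * V * M
```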
In step 130, a key frame image of the target object at the corresponding photographing position is determined from the multiple frames of depth images at each photographing position according to the grid corresponding to each photographing position, the photographing positions and the photographing directions of the multiple frames of depth images acquired at each photographing position.
In some embodiments, a key frame image may be determined according to the embodiment in fig. 2.
Fig. 2 shows a flow chart of some embodiments of step 130 in fig. 1.
As shown in fig. 2, step 130 includes step 1310, determining a first ray and a second ray of the current photographing position, step 1320, determining the sharpness of each candidate image, step 1330, determining whether the sharpness is greater than a threshold, step 1340, acquiring depth images again, step 1350, determining the key frame image, and step 1360, moving the image acquisition apparatus.
In step 1310, a first ray is determined from the shooting direction of the current shooting position with the position of the image acquisition device in the world coordinate system as a starting point, and a ray pointing to the center point of the three-dimensional model on the plane of the target object is determined as a second ray with the position of the image acquisition device in the world coordinate system as a starting point.
The first ray characterizes shooting direction information of the image acquisition equipment, and the second ray characterizes current position information of the image acquisition equipment, and can be used as a basis for screening key frame images.
In some embodiments, the position and shooting direction of the image acquisition device in the world coordinate system are calculated according to external parameters of the image acquisition device at the current shooting position.
In some embodiments, a key frame image of the current shooting position is determined according to a first ray, a second ray of a multi-frame depth image of the current shooting position and a grid position corresponding to the current shooting position. For example, candidate images for a key frame image may be determined by the embodiment in fig. 3.
Fig. 3 illustrates a schematic diagram of some embodiments of a method of acquisition of a key frame image of the present disclosure.
As shown in fig. 3, the target object is completely enclosed within the three-dimensional hemispherical model, with the first ray of the image acquisition device being l1 and the second ray being l2.
In some embodiments, the position pcamera of the depth camera in the world coordinate system and the camera orientation dcamera may be derived from the external parameters R and T:

pcamera = -Rᵀ · T
dcamera = Rᵀ · (0, 0, 1)ᵀ

where the optical axis is taken as the z axis of the camera coordinate system (for a rotation matrix, Rᵀ equals R⁻¹).
With the current position of the depth camera as the starting point, the ray along the camera orientation dcamera is taken as the direction ray l1, and the ray along the line po - pcamera from the depth camera to the sphere center po is taken as the position ray l2. The key frame images can then be screened according to the grid corresponding to the current shooting position, the camera orientation, and the direction of the line connecting the camera and the sphere center.
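The two rays can be derived from the extrinsics as sketched below (assuming the optical axis is the +z axis of the camera coordinate system; `camera_rays` and its argument layout are illustrative, so adjust the sign if your camera convention differs):

```python
import numpy as np

def camera_rays(R, T, sphere_center):
    """Direction ray l1 (camera orientation) and position ray l2 (camera to
    sphere centre po), both as unit vectors in world coordinates, from the
    extrinsics R (3x3 rotation) and T (translation column vector)."""
    p_camera = -R.T @ T                          # camera position in world coords
    d_camera = R.T @ np.array([0.0, 0.0, 1.0])   # optical axis in world coords
    l1 = d_camera / np.linalg.norm(d_camera)     # direction ray l1
    l2 = sphere_center - p_camera                # towards the sphere centre po
    l2 = l2 / np.linalg.norm(l2)                 # position ray l2
    return p_camera, l1, l2
```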
In some embodiments, a frame of depth image at the current shooting position is determined as a candidate image of the current shooting position if both the first ray and the second ray of that frame pass through the grid corresponding to the current shooting position. For example, rays l1 and l2 in the figure both pass through the grid corresponding to the current shooting position, indicating that both the position ray and the orientation ray of the current depth camera are aimed at the target object, so the corresponding depth image is determined as a candidate image.
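The test of whether a ray passes through the grid of a given shooting position can be sketched as a ray-sphere intersection followed by angular binning (a hypothetical grid layout for illustration; the actual mesh division of the hemisphere surface may differ):

```python
import numpy as np

def grid_cell_hit(origin, direction, center, radius, n_azimuth=36, n_elevation=9):
    """Return the (azimuth, elevation) index of the hemisphere grid cell a ray
    passes through, or None if the ray misses the hemisphere."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    # solve |oc + t*d| = radius for t (ray-sphere intersection)
    b = oc @ d
    disc = b * b - (oc @ oc - radius * radius)
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = -b - np.sqrt(disc)               # entry point, facing the camera
    if t <= 0:
        t = -b + np.sqrt(disc)           # camera inside the sphere: use exit point
    if t <= 0:
        return None
    hit = oc + t * d                     # hit point relative to the sphere centre
    if hit[2] < 0:
        return None                      # below the base plane: not on the hemisphere
    az = np.arctan2(hit[1], hit[0]) % (2 * np.pi)
    el = np.arcsin(np.clip(hit[2] / radius, -1.0, 1.0))
    return (int(az / (2 * np.pi) * n_azimuth),
            int(el / (np.pi / 2) * n_elevation * 0.9999))
```

Under this sketch, a frame qualifies as a candidate when both l1 and l2 map to the cell of the current shooting position.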
After the candidate images are determined, a key frame image of the current photographing position may be determined in each candidate image according to the sharpness of each candidate image through the remaining steps in fig. 2.
In step 1320, the sharpness of each candidate image is determined.
In some embodiments, edge feature information of each candidate image is calculated by using an edge detection operator (such as Laplace operator, sobel operator, canny operator, etc.), and the definition of each candidate image is determined according to the statistical feature of each edge feature information.
For example, the Laplace operator may be used for sharpness detection: the image is converted to a grey-scale image, processed with the Laplace operator, and the variance of the Laplace response is calculated as the sharpness of the current image.
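This Laplace-variance measure can be sketched without any imaging library (a 4-neighbour Laplacian on a grey-scale array; OpenCV's Laplacian operator would serve equally well):

```python
import numpy as np

def laplacian_sharpness(gray):
    """Variance of the Laplacian response as a sharpness score.
    `gray` is a 2-D float array (the image already converted to grey scale)."""
    # 4-neighbour Laplacian: -4 * centre + the four axis neighbours
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())  # blurrier image -> weaker edges -> lower variance
```

A candidate image whose score exceeds the sharpness threshold is then eligible as a key frame.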
In step 1330, it is determined whether the sharpness of each candidate image is greater than a threshold. Step 1340 is performed in the absence of a candidate image having a sharpness greater than the threshold, and step 1350 is performed in the presence of a candidate image having a sharpness greater than the threshold.
In step 1340, a plurality of frames of depth images are acquired again at the current shooting position, and steps 1310 to 1330 are repeatedly performed until there are candidate images with sharpness greater than the threshold.
In step 1350, candidate images having a sharpness greater than the threshold are determined to be key frame images.
In step 1360, the image acquisition apparatus is moved, and a plurality of frames of depth images are acquired again (step 1340). And determining a corresponding key frame image in the multi-frame depth image of the current shooting position by taking the next shooting position of the image acquisition device as the current shooting position (repeatedly executing steps 1310-1360).
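Steps 1310 to 1360 can be summarised as a driver loop (a sketch only; `capture_frames`, `candidate_ok`, `sharpness` and the two `mark_*` callbacks are hypothetical hooks standing in for the camera, the ray test, the sharpness metric and the grid-marking UI):

```python
def acquire_key_frames(positions, capture_frames, candidate_ok, sharpness,
                       threshold, mark_captured, mark_keyframe):
    """Drive the acquisition loop of steps 1310-1360 over all shooting positions."""
    key_frames = {}
    for pos in positions:                      # step 1360: move to the next position
        while pos not in key_frames:
            frames = capture_frames(pos)       # steps 1310/1340: capture depth frames
            mark_captured(pos)                 # first marking: light up the grid
            candidates = [f for f in frames if candidate_ok(f, pos)]
            best = max(candidates, key=sharpness, default=None)
            if best is not None and sharpness(best) > threshold:   # step 1330
                key_frames[pos] = best         # step 1350: save the key frame
                mark_keyframe(pos)             # second marking: grid colour change
            # otherwise loop back and re-capture at the same position (step 1340)
    return key_frames
```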
In some embodiments, the three-dimensional model rendered around the target object may serve as a UI that guides the acquisition of key frame images.
In some embodiments, in response to an image acquisition device acquiring a multi-frame depth image at a current shooting location, a first marking process is performed on a corresponding grid of the current shooting location on a three-dimensional model. For example, when the depth camera is aimed at a certain shooting angle, a corresponding grid may be illuminated (e.g., changed in color, etc.) to indicate the coverage angle of the currently acquired key frame image.
In some embodiments, in response to determining the key frame image of the current shooting location, second marking processing is performed on the grid corresponding to the current shooting location on the three-dimensional model, to identify that the key frame image of the current shooting location has been acquired.
For example, if the sharpness of a candidate image is greater than the threshold, that image is saved as a key frame image and the color of the corresponding grid is changed to indicate that the key frame image at the current angle has been acquired; the camera can then be moved to the next shooting position to continue acquisition. When key frame images have been acquired at all angles, all grids have changed color, which means key frame images covering the complete angle range have been acquired and scanning can be finished as required.
Fig. 4 illustrates a flow chart of some embodiments of a method of three-dimensional reconstruction of a key frame image of the present disclosure.
As shown in fig. 4, in step 410, the key frame image of the target object at each shooting position is acquired by using the method for acquiring the key frame image of any of the above embodiments.
In step 420, the target object is three-dimensionally reconstructed from the key frame images of the target object at each photographing position.
In this way, since the key frame image covering the complete angle range is acquired, the effect of three-dimensional reconstruction can be improved.
Fig. 5 illustrates a block diagram of some embodiments of a key frame image acquisition device of the present disclosure.
As shown in fig. 5, the key frame image acquisition device 5 includes a point cloud determining unit 51, an establishing unit 52, and a key frame determining unit 53.
The point cloud determining unit 51 determines the point cloud data of the target object according to the multi-frame depth images of the target object acquired by the image acquisition device at the current shooting position and the internal parameters of the image acquisition device.
The establishing unit 52 establishes a three-dimensional model capable of surrounding the target object according to the position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and the external parameters at the current shooting position. The surface of the three-dimensional model is divided into a plurality of grids, each corresponding to one shooting position.
The key frame determining unit 53 determines the key frame image of the target object at the corresponding shooting position from among the multi-frame depth images at each shooting position, according to the grid corresponding to each shooting position and the shooting positions and shooting directions of the multi-frame depth images acquired at each shooting position.
In some embodiments, the establishing unit 52 calculates a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system according to the internal parameters of the image acquiring device, and establishes the three-dimensional model according to the projection matrix, the position information of the vertexes of each grid and the external parameters of the shooting positions of the image acquiring device corresponding to each grid.
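The projection matrix computed from the internal parameters (intrinsics) and the external parameters (extrinsics) can be illustrated as follows. The pinhole-camera form P = K[R | t] is a standard assumption; the disclosure does not fix a specific camera model.

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 projection from world to pixel coordinates: P = K [R | t].
    K: 3x3 intrinsics; R: 3x3 rotation and t: 3-vector translation, i.e. the
    extrinsics at the shooting position corresponding to a grid."""
    Rt = np.hstack([R, t.reshape(3, 1)])
    return K @ Rt

def project(P, X_world):
    """Project a 3D grid vertex into 2D image coordinates."""
    X_h = np.append(X_world, 1.0)   # homogeneous coordinates
    x = P @ X_h
    return x[:2] / x[2]             # perspective divide
```

Applying `project` to the grid vertices with the extrinsics of each shooting position yields the two-dimensional footprint of each grid of the model.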
In some embodiments, the establishing unit 52 performs plane fitting according to the point cloud data of the target object, determines the plane in which the target object is located, and establishes the three-dimensional model according to the position of the target object on the plane.
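The plane fitting over the point cloud can be sketched, for example, as a least-squares fit via singular value decomposition; the disclosure does not specify a fitting method, so this particular choice is an assumption.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an N x 3 point cloud via SVD.
    Returns (centroid, unit normal); the plane passes through the centroid."""
    centroid = points.mean(axis=0)
    _, _, vh = np.linalg.svd(points - centroid)
    normal = vh[-1]  # direction of least variance = plane normal
    return centroid, normal / np.linalg.norm(normal)
```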
In some embodiments, the establishing unit 52 determines a center position according to the position of the target object on the plane, and establishes, according to the center position, a three-dimensional hemispherical model on the plane as the three-dimensional model.
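Dividing the surface of such a hemispherical model into grids can be illustrated as follows; the latitude-longitude subdivision and the grid counts are assumptions for illustration only.

```python
import numpy as np

def hemisphere_grids(center, radius, n_theta=8, n_phi=16):
    """Subdivide a hemisphere above `center` into n_theta x n_phi grids,
    returning the center point of each grid (one candidate shooting
    position per grid). The parameterization is illustrative."""
    grids = []
    for i in range(n_theta):
        theta = (i + 0.5) * (np.pi / 2) / n_theta  # elevation band
        for j in range(n_phi):
            phi = (j + 0.5) * 2 * np.pi / n_phi    # azimuth band
            p = center + radius * np.array([
                np.cos(theta) * np.cos(phi),
                np.cos(theta) * np.sin(phi),
                np.sin(theta),                     # z >= 0: upper hemisphere
            ])
            grids.append(p)
    return grids
```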
In some embodiments, the key frame determining unit 53 determines the key frame image of the current shooting position according to the first rays and second rays of the multi-frame depth images of the current shooting position and the position of the grid corresponding to the current shooting position. The first ray takes the position of the image acquisition device in the world coordinate system as its starting point and points in the shooting direction of the image acquisition device at the current shooting position; the second ray takes the same position as its starting point and points toward the center point of the three-dimensional model on the plane where the target object is located.
In some embodiments, if both the first ray and the second ray of a frame of depth image at the current shooting position pass through the grid corresponding to the current shooting position, the key frame determining unit 53 determines that frame of depth image as a candidate image of the current shooting position, and determines the key frame image of the current shooting position from among the candidate images according to the sharpness of each candidate image.
In some embodiments, the key frame determining unit 53 calculates edge feature information of each candidate image using an edge detection operator, and determines sharpness of each candidate image based on statistical features of each edge feature information.
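One common realization of an edge-operator sharpness measure is the variance of the Laplacian response; the disclosure names no specific operator or statistic, so this choice is an assumption.

```python
import numpy as np

def laplacian_sharpness(gray):
    """Convolve with the 3x3 Laplacian kernel (an edge detection operator)
    and use the variance of the response as the sharpness statistic."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = gray.shape
    resp = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            resp[i, j] = np.sum(gray[i:i + 3, j:j + 3] * k)
    return resp.var()   # larger variance of edge response => sharper image
```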
In some embodiments, in the case where there is a candidate image whose sharpness is greater than the threshold, the key frame determining unit 53 determines the corresponding candidate image as the key frame image and, taking the next shooting position of the image acquisition device as the current shooting position, determines the corresponding key frame image from among the multi-frame depth images at that position; in the case where there is no candidate image whose sharpness is greater than the threshold, the multi-frame depth images are acquired again at the current shooting position until there is a candidate image whose sharpness is greater than the threshold.
In some embodiments, the position and shooting direction of the image acquisition device in the world coordinate system are calculated according to external parameters of the image acquisition device at the current shooting position.
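Recovering the device position and shooting direction in the world coordinate system from the extrinsics can be illustrated as follows, assuming world-to-camera extrinsics (R, t) and an optical axis along +z in camera coordinates.

```python
import numpy as np

def camera_pose_from_extrinsics(R, t):
    """From world-to-camera extrinsics, recover the device position
    C = -R^T t and the shooting direction R^T [0, 0, 1]^T in world
    coordinates (the ray origins and first-ray direction used above)."""
    C = -R.T @ t
    direction = R.T @ np.array([0.0, 0.0, 1.0])
    return C, direction
```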
In some embodiments, the acquisition device 5 further comprises a marking unit 54 configured to perform a first marking process on the grid of the three-dimensional model corresponding to the current shooting position in response to the image acquisition device acquiring the multi-frame depth images at the current shooting position, and to perform a second marking process on that grid in response to determining the key frame image of the current shooting position, the second marking process identifying that the key frame image of the current shooting position has been acquired.
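The first and second marking processes can be modeled, for example, as a per-grid state that a renderer maps to colors; the state names below are illustrative assumptions.

```python
from enum import Enum

class GridState(Enum):
    UNVISITED = 0   # no depth frames acquired for this grid yet
    ACQUIRING = 1   # first mark: depth frames captured at this position
    DONE = 2        # second mark: key frame determined

def mark(states, grid, acquired_frames=False, key_frame_found=False):
    """Apply the first or second marking process to a grid's state
    (e.g. rendered on the model as a color change)."""
    if key_frame_found:
        states[grid] = GridState.DONE
    elif acquired_frames and states[grid] is GridState.UNVISITED:
        states[grid] = GridState.ACQUIRING
    return states
```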
In some embodiments, the acquisition device 5 further comprises a reconstruction unit 55 for performing three-dimensional reconstruction of the target object based on the key frame images of the target object at each photographing position.
Fig. 6 illustrates a block diagram of some embodiments of a three-dimensional reconstruction apparatus of a key frame image of the present disclosure.
As shown in fig. 6, the three-dimensional reconstruction device 6 of the key frame image includes an acquisition unit 61 configured to perform the key frame image acquisition method of any of the above embodiments to acquire the key frame images of the target object at each shooting position, and a reconstruction unit 62 configured to three-dimensionally reconstruct the target object according to the key frame images of the target object at each shooting position.
Fig. 7 illustrates a block diagram of some embodiments of an electronic device of the present disclosure.
As shown in fig. 7, the electronic device 7 of this embodiment includes a memory 71 and a processor 72 coupled to the memory 71, the processor 72 being configured to execute the acquisition method of the key frame image or the three-dimensional reconstruction method of the key frame image in any of the above embodiments based on instructions stored in the memory 71.
The memory 71 may include, for example, a system memory, a fixed nonvolatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader, a database, and other programs.
Fig. 8 shows a block diagram of further embodiments of the electronic device of the present disclosure.
As shown in fig. 8, the electronic device 8 of this embodiment includes a memory 810 and a processor 820 coupled to the memory 810, the processor 820 being configured to execute the acquisition method of the key frame image or the three-dimensional reconstruction method of the key frame image in any of the foregoing embodiments based on instructions stored in the memory 810.
The memory 810 may include, for example, a system memory, a fixed nonvolatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader, and other programs.
The electronic device 8 may also include an input/output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, 850, the memory 810, and the processor 820 may be connected by, for example, a bus 860. The input/output interface 830 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, a touch screen, a microphone, and a speaker. The network interface 840 provides a connection interface for various networking devices. The storage interface 850 provides a connection interface for external storage devices such as SD cards and USB flash drives.
Fig. 9 illustrates a block diagram of some embodiments of an acquisition system of a key frame image of the present disclosure.
As shown in fig. 9, the key frame image acquisition system 9 includes a key frame image acquisition device 91 for performing the key frame image acquisition method in any of the above embodiments, and an image acquisition apparatus 92 for acquiring multi-frame depth images of a target object at different photographing positions.
It will be appreciated by those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media including, but not limited to, disk storage, CD-ROM, optical storage, and the like, having computer-usable program code embodied therein.
Up to this point, the method of acquiring a key frame image, the apparatus of acquiring a key frame image, the system of acquiring a key frame image, the method of three-dimensional reconstruction of a key frame image, the apparatus of three-dimensional reconstruction of a key frame image, and the non-transitory computer readable storage medium according to the present disclosure have been described in detail. In order to avoid obscuring the concepts of the present disclosure, some details known in the art are not described. How to implement the solutions disclosed herein will be fully apparent to those skilled in the art from the above description.
The methods and systems of the present disclosure may be implemented in a number of ways, for example by software, hardware, firmware, or any combination thereof. The above-described order of steps is for illustration only, and the steps of the methods of the present disclosure are not limited to the order specifically described above unless otherwise stated. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the methods according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.