Disclosure of Invention
The inventors of the present disclosure found that the related art described above has the following problems: the shooting angles covered by the key frame images are not comprehensive, which reduces the quality of key frame image acquisition; or the cost of acquiring key frame images covering the full range of angles is prohibitive.
In view of this, the present disclosure provides a key frame image acquisition technical solution that can acquire key frame images covering the complete range of shooting angles without additional cost, thereby improving the acquisition quality of the key frame images.
According to some embodiments of the present disclosure, there is provided a method for acquiring a key frame image, including: determining point cloud data of a target object according to a multi-frame depth image of the target object acquired by an image acquisition device at a current shooting position and internal parameters of the image acquisition device; establishing a three-dimensional model capable of surrounding the target object according to position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and external parameters at the current shooting position, wherein the surface of the three-dimensional model is divided into a plurality of grids, and each grid corresponds to one shooting position; and determining the key frame image of the target object at the corresponding shooting position in the multi-frame depth images at the respective shooting positions according to the grids corresponding to the shooting positions and the shooting positions and shooting directions of the multi-frame depth images acquired at those shooting positions.
In some embodiments, creating a three-dimensional model that can enclose the target object comprises: calculating a projection matrix of the three-dimensional model from a three-dimensional coordinate system to a two-dimensional coordinate system according to the internal parameters of the image acquisition device; and establishing the three-dimensional model according to the projection matrix, the position information of the vertices of the grids, and the external parameters of the image acquisition device at the shooting positions corresponding to the grids.
In some embodiments, creating a three-dimensional model that can enclose the target object comprises: performing plane fitting according to the point cloud data of the target object to determine a plane where the target object is located; and establishing a three-dimensional model according to the position of the target object on the plane.
In some embodiments, building the three-dimensional model based on the position of the target object on the plane model comprises: determining the circle center position according to the position of the target object on the plane model; and establishing a three-dimensional hemispherical model on the plane model as the three-dimensional model according to the circle center position.
In some embodiments, determining the key frame image of the target object at the corresponding shooting position in the multiple frames of depth images at the shooting positions comprises: determining the key frame image of the current shooting position according to a first ray and a second ray of the multi-frame depth image of the current shooting position and the grid position corresponding to the current shooting position, wherein the first ray points in the shooting direction of the image acquisition device at the current shooting position, and the second ray starts from the position of the image acquisition device in the world coordinate system and points to the central point of the three-dimensional model on the plane where the target object is located.
In some embodiments, determining the key frame image of the current shooting position according to the first ray, the second ray and the grid position corresponding to the current shooting position includes: in the case that both the first ray and the second ray of a frame of depth image of the current shooting position pass through the grid corresponding to the current shooting position, determining that frame of depth image as a candidate image of the current shooting position; and determining the key frame image of the current shooting position among the candidate images according to the sharpness of each candidate image.
In some embodiments, determining the key frame image of the current shooting position among the candidate images according to the sharpness of each candidate image includes: calculating edge feature information of each candidate image by using an edge detection operator; and determining the sharpness of each candidate image according to statistical characteristics of the edge feature information.
In some embodiments, determining the key frame image of the target object at the corresponding shooting position in the multiple frames of depth images at the shooting positions comprises: in the case that candidate images with sharpness greater than a threshold exist, determining the corresponding candidate images as key frame images, taking the next shooting position of the image acquisition device as the current shooting position, and determining the corresponding key frame image in the multi-frame depth image of that current shooting position; and in the case that no candidate image with sharpness greater than the threshold exists, acquiring the multi-frame depth images again at the current shooting position until a candidate image with sharpness greater than the threshold exists.
In some embodiments, the position of the image capture device in the world coordinate system and the capture direction are calculated based on external parameters of the image capture device at the current capture location.
In some embodiments, the acquisition method further comprises: in response to the image acquisition device acquiring a plurality of frames of depth images at the current shooting position, performing first marking processing on the grid corresponding to the current shooting position on the three-dimensional model; and in response to determining the key frame image of the current shooting position, performing second marking processing on the grid corresponding to the current shooting position on the three-dimensional model, to identify that the key frame image of the current shooting position has been completely acquired.
In some embodiments, the acquisition method further comprises: performing three-dimensional reconstruction on the target object according to the key frame images of the target object at the respective shooting positions.
According to other embodiments of the present disclosure, there is provided a method for three-dimensional reconstruction of a key frame image, including: acquiring the key frame image of the target object at each shooting position by using the key frame image acquisition method of any one of the embodiments; and performing three-dimensional reconstruction on the target object according to the key frame image of the target object at each shooting position.
According to still other embodiments of the present disclosure, there is provided an apparatus for acquiring a key frame image, including: a point cloud determining unit, configured to determine point cloud data of the target object according to the multi-frame depth image of the target object acquired by the image acquisition device at the current shooting position and the internal parameters of the image acquisition device; an establishing unit, configured to establish a three-dimensional model capable of surrounding the target object according to position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and the external parameters at the current shooting position, wherein the surface of the three-dimensional model is divided into a plurality of grids, and each grid corresponds to one shooting position; and a key frame determining unit, configured to determine the key frame image of the target object at the corresponding shooting position in the multi-frame depth images at the respective shooting positions according to the grids corresponding to the shooting positions and the shooting positions and shooting directions of the multi-frame depth images acquired at those shooting positions.
In some embodiments, the establishing unit calculates a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system according to the internal parameters of the image acquisition device; and establishes the three-dimensional model according to the projection matrix, the position information of the vertices of the grids, and the external parameters of the image acquisition device at the shooting positions corresponding to the grids.
In some embodiments, the establishing unit performs plane fitting according to the point cloud data of the target object to determine the plane where the target object is located; and establishes a three-dimensional model according to the position of the target object on the plane.
In some embodiments, the establishing unit determines the circle center position according to the position of the target object on the plane model; and establishes a three-dimensional hemispherical model on the plane model as the three-dimensional model according to the circle center position.
In some embodiments, the key frame determining unit determines the key frame image of the current shooting position according to a first ray and a second ray of the multi-frame depth image of the current shooting position and the grid position corresponding to the current shooting position, wherein the first ray points in the shooting direction of the image acquisition device at the current shooting position, and the second ray starts from the position of the image acquisition device in the world coordinate system and points to the center point of the three-dimensional model on the plane where the target object is located.
In some embodiments, the key frame determining unit determines a frame of depth image as a candidate image of the current shooting position when both the first ray and the second ray of that frame of depth image pass through the grid corresponding to the current shooting position; and determines the key frame image of the current shooting position among the candidate images according to the sharpness of each candidate image.
In some embodiments, the key frame determination unit calculates edge feature information of each candidate image using an edge detection operator; and determines the sharpness of each candidate image according to statistical characteristics of the edge feature information.
In some embodiments, when candidate images with sharpness greater than a threshold exist, the key frame determining unit determines the corresponding candidate images as key frame images, takes the next shooting position of the image acquisition device as the current shooting position, and determines the corresponding key frame image in the multi-frame depth image of that current shooting position; and when no candidate image with sharpness greater than the threshold exists, the multi-frame depth images are acquired again at the current shooting position until a candidate image with sharpness greater than the threshold exists.
In some embodiments, the position of the image capture device in the world coordinate system and the capture direction are calculated based on external parameters of the image capture device at the current capture location.
In some embodiments, the acquisition device further comprises: a marking unit, configured to perform first marking processing on the grid corresponding to the current shooting position on the three-dimensional model in response to the image acquisition device acquiring multi-frame depth images at the current shooting position, and to perform second marking processing on the grid corresponding to the current shooting position on the three-dimensional model in response to determining the key frame image of the current shooting position, for identifying that the key frame image of the current shooting position has been completely acquired.
In some embodiments, the acquisition device further comprises: a reconstruction unit, configured to perform three-dimensional reconstruction on the target object according to the key frame images of the target object at the respective shooting positions.
According to still further embodiments of the present disclosure, there is provided a three-dimensional reconstruction apparatus of a key frame image, including: an acquisition unit, configured to execute the key frame image acquisition method of any one of the above embodiments to acquire the key frame image of the target object at each shooting position; and a reconstruction unit, configured to perform three-dimensional reconstruction on the target object according to the key frame images of the target object at the respective shooting positions.
According to still further embodiments of the present disclosure, there is provided an electronic device including: a memory; and a processor coupled to the memory, the processor configured to perform the method for acquiring a key frame image or the method for three-dimensional reconstruction of a key frame image in any of the above embodiments based on instructions stored in the memory.
According to still further embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for acquiring a keyframe image or the method for three-dimensional reconstruction of a keyframe image in any of the above-mentioned embodiments.
According to still further embodiments of the present disclosure, there is provided a key frame image acquisition system including: a key frame image acquisition device for executing the key frame image acquisition method in any of the above embodiments; and the image acquisition equipment is used for acquiring multi-frame depth images of the target object at different shooting positions.
In the above embodiments, a three-dimensional model capable of surrounding the target object is established, and the collection of the key frame images is guided according to the grids corresponding to the shooting positions on the surface of the three-dimensional model. In this way, the key frame acquisition direction can be guided synchronously while the object is scanned, so that the acquired key frames completely cover every shooting angle, improving the acquisition quality of the key frame images.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In view of the above technical problems, in the embodiment of the present disclosure, a user may scan a circle around a target object by holding a mobile phone or a tablet computer carrying a depth camera, and a certain number of depth images are collected at each shooting position.
Therefore, the acquired depth image can cover all angles of the object, and a sufficiently clear depth image can be selected as a key frame image in a certain shooting angle range, so that data redundancy is reduced.
In addition, in the acquisition process, the key frame images can be synchronously determined from the depth images acquired from all shooting positions, so that the acquisition efficiency of the key frame images is improved; and visual feedback is given to the user, so that the user can immediately know whether the corresponding shooting position and angle have collected the key frame image, and the key frame image can cover a complete angle range.
In addition, the technical scheme of the disclosure does not need to erect a camera array, and the hardware cost is reduced. For example, the technical solution of the present disclosure can be realized by the following embodiments.
Fig. 1 illustrates a flow diagram of some embodiments of a method of capturing keyframe images of the present disclosure.
As shown in fig. 1, the method includes: step 110, determining point cloud data of a target object; step 120, establishing a three-dimensional model surrounding the target object; and step 130, determining a key frame image.
In step 110, point cloud data of the target object is determined according to the multi-frame depth image of the target object acquired by the image acquisition device at the current shooting position and the internal parameters of the image acquisition device. For example, it is possible to acquire a plurality of frames of depth images at the current shooting position, or to adjust the orientation of the image acquisition device so as to acquire a plurality of frames of depth images from different shooting directions.
In some embodiments, an object to be scanned, i.e., the target object, may be placed on a suitable plane (e.g., a desktop, the ground, etc.), and a user holds a mobile phone or tablet computer with a depth camera and aims it at the object to collect corresponding RGB-D (Red Green Blue-Depth) data; the point cloud data required for three-dimensional reconstruction is then calculated from the acquired RGB-D data.
For example, the image acquisition device is a depth camera whose internal parameter matrix is:

$$K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$$

where $f_x$ and $f_y$ are the focal lengths of the depth camera, and $c_x$ and $c_y$ are the principal point coordinates of the depth camera. The internal parameter matrix does not change as the depth camera moves.

If the depth value at a point $(u, v)$ on a frame of depth image is $d$, the point cloud coordinate corresponding to that point can be determined as:

$$x = \frac{(u - c_x)\,d}{f_x}, \qquad y = \frac{(v - c_y)\,d}{f_y}, \qquad z = d.$$
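As a minimal sketch of this pinhole back-projection (the function name, array shapes, and intrinsic values below are illustrative assumptions, not taken from the disclosure):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, metric depth d per pixel)
    into an (N, 3) point cloud using the pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with valid depth

# Illustrative intrinsics and a tiny synthetic depth map (all pixels at 2 m)
depth = np.full((4, 4), 2.0)
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Per frame this yields one cloud in the camera coordinate system; fusing clouds from several frames additionally requires the extrinsic parameters discussed below.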
In step 120, a three-dimensional model capable of surrounding the target object is built according to the position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and the external parameters at the current shooting position. The surface of the three-dimensional model is divided into a plurality of meshes, and each mesh corresponds to one shooting position.
In some embodiments, after the point cloud data of each frame of depth image of the current shooting position is obtained, the pose matrix (i.e., the camera extrinsic parameters) corresponding to each frame of depth image may be solved through algorithms such as ICP (Iterative Closest Point). That is, the camera extrinsic parameters consist of the rotation matrix $R$ and the translation column vector $T$:

$$V = \begin{pmatrix} R & T \\ 0 & 1 \end{pmatrix}$$

where $R$ is a $3 \times 3$ rotation matrix and $T$ is a $3 \times 1$ translation column vector. The camera extrinsic parameters are related to the shooting position.
In some embodiments, plane fitting is performed according to the point cloud data of the target object to determine the plane where the target object is located, and a three-dimensional model is established according to the position of the target object on the plane. For example, the circle center position is determined according to the position of the target object on the plane model, and a three-dimensional hemispherical model is established on the plane model as the three-dimensional model according to the circle center position.
For example, after sufficient point cloud data is acquired, plane fitting processing may be performed by using an algorithm such as RANSAC (RANdom SAmple Consensus).
First, three points are randomly selected from the point cloud data, and a candidate plane is determined from them; then, the distances from all other points to the candidate plane are calculated. If a point's distance is smaller than the distance threshold, the point is considered to lie in the candidate plane and is marked as an inlier; otherwise, it is marked as an outlier. If the number of inliers exceeds a number threshold, the candidate plane is saved as the plane model; if it does not, three points are reselected to determine a new candidate plane.
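The RANSAC loop described above can be sketched as follows (a simplified version under assumed thresholds and random seeds; the function and parameter names are illustrative, not the disclosure's implementation):

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.01, min_inliers=100, max_iters=200, rng=None):
    """Fit a plane (n, d) with |n| = 1 and n . p + d = 0 for points p on the
    plane, by repeatedly sampling three points and counting inliers."""
    rng = rng or np.random.default_rng(0)
    for _ in range(max_iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample, retry
            continue
        n /= norm
        d = -n.dot(a)
        dist = np.abs(points @ n + d)     # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() >= min_inliers:  # good enough: save this plane model
            return n, d, inliers
    return None

# Synthetic cloud: 200 points on the plane z = 0, plus 10 outliers near z = 2.
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(-1, 1, (200, 2)), np.zeros(200)])
outliers = rng.uniform(-1, 1, (10, 3)) + [0.0, 0.0, 2.0]
result = ransac_plane(np.vstack([plane_pts, outliers]), min_inliers=150)
```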
For example, after obtaining the plane on which the target object is placed, the three-dimensional hemispherical model may be rendered at the corresponding position on the plane (e.g., rendered efficiently using OpenGL), so that the three-dimensional hemispherical model encloses the target object.
In order to ensure that the rendered three-dimensional hemisphere model is consistent with physical space (i.e., the world coordinate system), that is, that the position of the three-dimensional hemisphere model in the world coordinate system does not change as the camera moves, the three-dimensional hemisphere model can be established using the real-time projection matrix and the pose matrix of the current shooting position.
In some embodiments, a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system is calculated based on the internal parameters of the image acquisition device; and the three-dimensional model is established according to the projection matrix, the position information of the vertices of the grids, and the external parameters of the image acquisition device at the shooting positions corresponding to the grids.
For example, a projection matrix P corresponding to the current shooting position is calculated according to the internal parameter matrix of the depth camera, where width and height are the width and height of the depth image, far is the current far plane of the depth camera, and near is the current near plane.
Sets of vertices on the surface of the three-dimensional model may be provided, and each set of vertices may define a mesh. If the coordinates of a set of vertices are M, then at render time gl_Position = P × V × M. According to the gl_Position values corresponding to all the sets of vertices, a three-dimensional model whose surface is divided into a plurality of grids can be established. The grids corresponding to different shooting positions serve as part of a three-dimensional model UI (User Interface): they can be used as the screening basis for key frame images, and can also guide the collection of key frame images covering the complete angle range.
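One common way to build such a projection matrix from pinhole intrinsics, together with the gl_Position transform, is sketched below. This uses an OpenGL-style convention; the exact sign conventions (y flip, handedness) vary between pipelines, so treat the matrix layout and all numeric values here as assumptions rather than the disclosure's exact formula:

```python
import numpy as np

def projection_from_intrinsics(fx, fy, cx, cy, width, height, near, far):
    """OpenGL-style 4x4 projection matrix built from pinhole intrinsics.
    Maps camera-space points to clip space; one plausible convention."""
    return np.array([
        [2 * fx / width, 0.0, 1.0 - 2 * cx / width, 0.0],
        [0.0, 2 * fy / height, 2 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

P = projection_from_intrinsics(fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                               width=640, height=480, near=0.1, far=10.0)

# gl_Position = P * V * M for a mesh vertex M (homogeneous) and pose V.
V = np.eye(4)
M = np.array([0.0, 0.0, -1.0, 1.0])  # a point 1 unit in front of the camera
gl_position = P @ V @ M
```

A point on the optical axis lands at the center of clip space (x = y = 0), which is a quick sanity check for the convention chosen.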
In step 130, the key frame image of the target object at the corresponding shooting position is determined in the multi-frame depth images at the respective shooting positions according to the grids corresponding to the shooting positions and the shooting positions and shooting directions of the multi-frame depth images obtained at those shooting positions.
In some embodiments, the key frame images may be determined according to the embodiment in fig. 2.
Fig. 2 illustrates a flow diagram of some embodiments of step 130 in fig. 1.
As shown in fig. 2, step 130 includes: step 1310, determining a first ray and a second ray of the current shooting position; step 1320, determining the sharpness of each candidate image; step 1330, determining whether the sharpness is greater than a threshold; step 1340, acquiring the depth images again; step 1350, determining key frame images; and step 1360, moving the image acquisition device.
In step 1310, a first ray is determined according to the shooting direction of the current shooting position by taking the position of the image acquisition device in the world coordinate system as a starting point; and a ray pointing to the central point of the three-dimensional model on the plane of the target object is determined as a second ray, also taking the position of the image acquisition device in the world coordinate system as a starting point.
The first ray represents shooting direction information of the image acquisition equipment, and the second ray represents current position information of the image acquisition equipment and can be used as a basis for screening key frame images.
In some embodiments, the position of the image capture device in the world coordinate system and the capture direction are calculated based on external parameters of the image capture device at the current capture location.
In some embodiments, the key frame image of the current shooting position is determined according to the first ray, the second ray and the grid position corresponding to the current shooting position of the multi-frame depth image of the current shooting position. For example, candidate images for the key frame image may be determined by the embodiment in fig. 3.
Fig. 3 illustrates a schematic diagram of some embodiments of a method of acquiring a keyframe image of the present disclosure.
As shown in FIG. 3, the target object is completely enclosed in the three-dimensional hemisphere model; the first ray of the image acquisition device is l1, and the second ray is l2.

In some embodiments, the position p_camera of the depth camera in the world coordinate system and the camera orientation d_camera may be derived from the extrinsic parameters R and T.

The ray l1, which takes the current position of the depth camera as its starting point and points along the camera orientation d_camera, is taken as the orientation ray; the ray l2, which takes the current position of the depth camera as its starting point and points along the direction po − p_camera from the camera to the sphere center po, is taken as the position ray. The key frame images can then be screened according to the grid corresponding to the current shooting position, the camera orientation, and the direction of the line from the camera to the sphere center.
In some embodiments, in the case that both the first ray and the second ray of a certain frame of depth image of the current shooting position pass through the grid corresponding to the current shooting position, that frame of depth image is determined as a candidate image of the current shooting position. For example, in the figure, both ray l1 and ray l2 pass through the grid corresponding to the current shooting position, i.e., the position ray and the orientation ray of the current depth camera are both aimed at the target object, so the corresponding depth image is determined as a candidate image.
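A hedged sketch of this screening test: intersect a ray with the hemisphere and convert the hit point to an (azimuth, elevation) grid index; a frame is a candidate only if both rays land in the grid cell of the current shooting position. The grid layout (16 × 4 angular cells) and all names here are illustrative assumptions:

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Return the first intersection point of a ray with a sphere, or None."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = 2 * d.dot(oc)
    c = oc.dot(oc) - radius ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2
    if t < 0:
        t = (-b + np.sqrt(disc)) / 2
    return origin + t * d if t >= 0 else None

def grid_index(point, center, n_azimuth=16, n_elevation=4):
    """Map a point on the hemisphere surface to an (azimuth, elevation) cell."""
    v = point - center
    az = np.arctan2(v[1], v[0]) % (2 * np.pi)
    el = np.arcsin(np.clip(v[2] / np.linalg.norm(v), 0.0, 1.0))
    return (int(az / (2 * np.pi) * n_azimuth) % n_azimuth,
            min(int(el / (np.pi / 2) * n_elevation), n_elevation - 1))

center = np.array([0.0, 0.0, 0.0])
cam = np.array([2.0, 0.0, 1.0])
l2 = center - cam   # second (position) ray: camera -> sphere center
l1 = l2             # first (orientation) ray: here the camera looks at the center
hit = ray_sphere_hit(cam, l1, center, radius=1.0)
cell = grid_index(hit, center)
```

When l1 and l2 map to the same cell as the current shooting position, the corresponding depth frame would be kept as a candidate image.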
After the candidate images are determined, the key frame image of the current shooting position can be determined among the candidate images according to the sharpness of each candidate image, through the remaining steps in fig. 2.
In step 1320, the sharpness of each candidate image is determined.
In some embodiments, edge feature information of each candidate image is calculated by using an edge detection operator (such as the Laplacian operator, the Sobel operator, or the Canny operator); and the sharpness of each candidate image is determined according to statistical characteristics of the edge feature information.
For example, sharpness detection may be performed using the Laplacian operator: the image is converted into a grayscale image and processed with the Laplacian operator, and the variance of the result is calculated as the sharpness of the current image.
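The Laplacian-variance sharpness measure can be sketched without any imaging library, using a plain convolution with the 4-neighbour Laplacian kernel (in practice an OpenCV call such as cv2.Laplacian(gray, cv2.CV_64F).var() does the same job; the names and test images here are illustrative):

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score: variance of the 4-neighbour Laplacian response.
    Higher variance means more edge energy, i.e. a sharper image."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

# A sharp checkerboard scores far higher than a constant (featureless) image.
sharp = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
flat = np.full((32, 32), 128.0)
```

Candidate images whose score exceeds the threshold would then pass to step 1350.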
In step 1330, it is determined whether the sharpness of each candidate image is greater than a threshold. If there is no candidate image with sharpness greater than the threshold, step 1340 is performed; if there is a candidate image with sharpness greater than the threshold, step 1350 is performed.
In step 1340, multiple frames of depth images are acquired again at the current shooting position, and steps 1310 to 1330 are repeated until candidate images with sharpness greater than the threshold exist.
In step 1350, candidate images with sharpness greater than the threshold are determined as key frame images.
In step 1360, the image acquisition device is moved, and multiple frames of depth images are acquired again (step 1340). The next shooting position of the image acquisition device is taken as the current shooting position, and the corresponding key frame image is determined in the multi-frame depth image of the current shooting position (steps 1310 to 1360 are repeated).
In some embodiments, a three-dimensional model rendered in the vicinity of the target object may serve as a collection UI that guides the acquisition of the key frame images.
In some embodiments, in response to the image acquisition device acquiring a plurality of frames of depth images at the current shooting position, the first marking process is performed on the three-dimensional model for the corresponding mesh of the current shooting position. For example, when the depth camera is aimed at a certain capture angle, the corresponding grid may be illuminated (e.g., changed in color, etc.) to indicate the coverage angle of the currently acquired keyframe image.
In some embodiments, in response to determining the key frame image of the current shooting position, performing a second labeling process on the corresponding grid of the current shooting position on the three-dimensional model for identifying that the key frame image of the current shooting position is completely acquired.
For example, if the sharpness is greater than the threshold, the image is saved as a key frame image, and the color of the corresponding grid is changed to indicate that the key frame image has been acquired at the current angle; the camera can then move to the next shooting position to continue the acquisition. When key frame images have been acquired at all angles, all the grids have changed color, indicating that key frame images covering the complete angle range have been collected, and the scan can be completed as required.
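The two-stage marking logic can be sketched as a simple per-grid state machine (the states, class, and method names are illustrative assumptions, not the UI of the disclosure):

```python
# Grid states: 0 = untouched, 1 = first mark (depth frames captured here),
# 2 = second mark (key frame confirmed for this cell).
UNSEEN, CAPTURED, DONE = 0, 1, 2

class GridTracker:
    def __init__(self, n_cells):
        self.state = [UNSEEN] * n_cells

    def on_frames_captured(self, cell):
        if self.state[cell] == UNSEEN:
            self.state[cell] = CAPTURED   # first marking: highlight the cell

    def on_keyframe_saved(self, cell):
        self.state[cell] = DONE           # second marking: change cell colour

    def scan_complete(self):
        return all(s == DONE for s in self.state)

tracker = GridTracker(n_cells=4)
tracker.on_frames_captured(0)
tracker.on_keyframe_saved(0)
for cell in range(1, 4):
    tracker.on_keyframe_saved(cell)
```

Once every cell reaches the second mark, the scan covers the complete angle range and can be finished.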
Fig. 4 illustrates a flow diagram of some embodiments of a method of three-dimensional reconstruction of a keyframe image of the present disclosure.
As shown in fig. 4, instep 410, a key frame image of the target object at each shooting position is captured by using the key frame image capturing method according to any of the above embodiments.
Instep 420, a three-dimensional reconstruction of the target object is performed based on the key frame images of the target object at the respective shooting positions.
Thus, the effect of three-dimensional reconstruction can be improved because the key frame image covering the complete angle range is obtained.
Fig. 5 illustrates a block diagram of some embodiments of an acquisition apparatus of a key frame image of the present disclosure.
As shown in fig. 5, the key frame image acquisition device 5 includes a point cloud determination unit 51, a building unit 52, and a key frame determination unit 53.
The point cloud determination unit 51 determines point cloud data of the target object based on the multi-frame depth image of the target object acquired by the image acquisition device at the current shooting position and the internal parameters of the image acquisition device.
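As an illustrative sketch (not the claimed implementation), determining point cloud data from a depth image and the internal parameters can be done by pinhole back-projection; the intrinsics `fx, fy, cx, cy` and the zero-means-invalid depth convention are assumptions of this sketch.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into camera-frame 3D points.

    depth: (H, W) array of depth values (0 = invalid pixel).
    fx, fy, cx, cy: pinhole internal parameters of the acquisition device.
    Returns an (N, 3) array of valid 3D points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx          # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy          # Y = (v - cy) * Z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]      # drop invalid (zero-depth) pixels
```

Points from several frames would then be fused into the point cloud of the target object.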
The building unit 52 builds a three-dimensional model capable of surrounding the target object, based on the position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and the external parameters at the current shooting position. The surface of the three-dimensional model is divided into a plurality of meshes, and each mesh corresponds to one shooting position.
The key frame determination unit 53 determines a key frame image of the target object at the corresponding shooting position from the multi-frame depth images at the respective shooting positions, based on the mesh corresponding to each shooting position, and the shooting positions and shooting directions of the multi-frame depth images acquired at the respective shooting positions.
In some embodiments, the building unit 52 calculates a projection matrix of the three-dimensional model from the three-dimensional coordinate system to the two-dimensional coordinate system according to the internal parameters of the image acquisition device; and builds the three-dimensional model according to the projection matrix, the position information of the vertices of the meshes, and the external parameters of the image acquisition device at the shooting positions corresponding to the meshes.
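For illustration, the standard pinhole projection matrix built from the internal parameters K and the external parameters [R | t] is P = K [R | t]; a minimal sketch (the function names are assumptions, not terms from this disclosure):

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 matrix mapping world coordinates to pixel coordinates: P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project a 3D mesh vertex X (world frame) to 2D pixel coordinates."""
    Xh = np.append(X, 1.0)         # homogeneous coordinates
    x = P @ Xh
    return x[:2] / x[2]            # perspective divide
```

Applying `project` to the mesh vertices maps the three-dimensional model into the two-dimensional image for display at each shooting position.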
In some embodiments, the building unit 52 performs plane fitting on the point cloud data of the target object to determine the plane where the target object is located; and builds the three-dimensional model according to the position of the target object on that plane.
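One common way to realize such plane fitting (offered here only as a sketch, assuming a least-squares formulation) is total least squares via SVD: the plane passes through the centroid of the points, with the direction of least variance as its normal.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to point cloud data by total least squares (SVD).

    points: (N, 3) array.
    Returns (centroid, unit normal): the plane passes through the centroid
    and is perpendicular to the normal.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                 # right singular vector of least variance
    return centroid, normal
```

The fitted plane then serves as the base on which the surrounding model is placed.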
In some embodiments, the building unit 52 determines a circle center position according to the position of the target object on the plane model; and builds a three-dimensional hemispherical model on the plane model, as the three-dimensional model, according to the circle center position.
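As a sketch of how such a hemisphere can be divided into meshes (the ring/sector parameterization is an assumption of this example, not a limitation of the embodiments), each mesh can be indexed by a latitude ring and a longitude sector, with the direction through its center identifying the corresponding shooting position:

```python
import math

def hemisphere_meshes(n_rings=3, n_sectors=8):
    """Divide a unit hemisphere (centered at the circle center position,
    above the fitted plane) into latitude/longitude meshes.

    Each mesh is keyed by (ring, sector) and mapped to the unit direction
    through its center, so one mesh corresponds to one shooting position.
    """
    meshes = {}
    for ring in range(n_rings):
        # elevation of the mesh center, from just above the plane to near the pole
        elev = (ring + 0.5) * (math.pi / 2) / n_rings
        for sector in range(n_sectors):
            azim = (sector + 0.5) * 2 * math.pi / n_sectors
            meshes[(ring, sector)] = (math.cos(elev) * math.cos(azim),
                                      math.cos(elev) * math.sin(azim),
                                      math.sin(elev))
    return meshes
```

With 3 rings and 8 sectors this yields 24 meshes, i.e. 24 shooting positions covering the complete angle range above the plane.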
In some embodiments, the key frame determination unit 53 determines the key frame image of the current shooting position according to the first ray and the second ray of each frame of the multi-frame depth images at the current shooting position, and the position of the mesh corresponding to the current shooting position. The first ray starts at the position of the image acquisition device in the world coordinate system and points in the shooting direction of the image acquisition device at the current shooting position; the second ray starts at the same position and points toward the center point of the three-dimensional model in the plane where the target object is located.
In some embodiments, the key frame determination unit 53 determines a frame of depth image at the current shooting position as a candidate image of the current shooting position when both the first ray and the second ray of that frame pass through the mesh corresponding to the current shooting position; and determines the key frame image of the current shooting position from the candidate images according to the definition of each candidate image.
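A highly simplified sketch of the "both rays pass through the mesh" test (an assumption of this example: the mesh a ray passes through is approximated by binning a unit direction into the ring/sector grid of the hemisphere; for a ray fired from the camera toward the model, one would bin the reversed direction, from the model center toward the camera):

```python
import math

def mesh_index(direction, n_rings=3, n_sectors=8):
    """Bin a unit direction into the (ring, sector) mesh of a hemisphere
    divided into n_rings latitude rings and n_sectors longitude sectors.
    Returns None for directions below the plane (no mesh there).
    """
    x, y, z = direction
    elev = math.asin(max(-1.0, min(1.0, z)))     # elevation above the plane
    if elev < 0:
        return None
    azim = math.atan2(y, x) % (2 * math.pi)
    ring = min(int(elev / (math.pi / 2 / n_rings)), n_rings - 1)
    sector = min(int(azim / (2 * math.pi / n_sectors)), n_sectors - 1)
    return ring, sector

def is_candidate(first_ray, second_ray, current_mesh):
    """A depth frame is a candidate only when both the shooting-direction
    ray and the camera-to-center ray fall in the current position's mesh."""
    return (mesh_index(first_ray) == current_mesh
            and mesh_index(second_ray) == current_mesh)
```

A production implementation would intersect the rays with the actual mesh geometry rather than binning directions, but the gating logic is the same: a frame qualifies only if both rays agree with the mesh of the current shooting position.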
In some embodiments, the key frame determination unit 53 calculates edge feature information of each candidate image using an edge detection operator; and determines the definition of each candidate image according to the statistical characteristics of the edge feature information.
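As one concrete (illustrative, not prescribed) choice of edge detection operator and statistic, the variance of a Laplacian response is a common definition measure: sharp images produce strong, widely varying edge responses, blurred ones do not.

```python
import numpy as np

def definition(image):
    """Estimate the definition (sharpness) of a candidate image:
    apply a 4-neighbour Laplacian edge-detection operator, then take the
    variance of the response as the statistical characteristic.
    """
    img = image.astype(float)
    # Laplacian via shifted copies: center * -4 + up + down + left + right
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()
```

Comparing this value against a threshold implements the candidate-selection rule of the preceding paragraphs.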
In some embodiments, when there is a candidate image whose definition is greater than a threshold, the key frame determination unit 53 determines that candidate image as the key frame image, and, taking the next shooting position of the image acquisition device as the current shooting position, determines the key frame image in the multiple frames of depth images at that position; when no candidate image has a definition greater than the threshold, the multi-frame depth images are re-acquired at the current shooting position until such a candidate image exists.
In some embodiments, the position of the image acquisition device in the world coordinate system and its shooting direction are calculated based on the external parameters of the image acquisition device at the current shooting position.
In some embodiments, the acquisition device 5 further comprises a marking unit 54 configured to perform a first marking process on the mesh corresponding to the current shooting position on the three-dimensional model, in response to the image acquisition device acquiring multiple frames of depth images at the current shooting position; and to perform a second marking process on the mesh corresponding to the current shooting position on the three-dimensional model, in response to determining the key frame image of the current shooting position, to indicate that acquisition of the key frame image at that position is complete.
In some embodiments, the acquisition device 5 further comprises a reconstruction unit 55 configured to perform three-dimensional reconstruction of the target object according to the key frame images of the target object at the respective shooting positions.
Fig. 6 illustrates a block diagram of some embodiments of an apparatus for three-dimensional reconstruction of keyframe images of the present disclosure.
As shown in fig. 6, the three-dimensional reconstruction device 6 for a key frame image includes: an acquisition unit 61, configured to execute the key frame image acquisition method according to any of the above embodiments and acquire key frame images of the target object at the respective shooting positions; and a reconstruction unit 62, configured to perform three-dimensional reconstruction of the target object according to the key frame images of the target object at the respective shooting positions.
Fig. 7 illustrates a block diagram of some embodiments of an electronic device of the present disclosure.
As shown in fig. 7, the electronic apparatus 7 of this embodiment includes: a memory 71 and a processor 72 coupled to the memory 71, wherein the processor 72 is configured to execute the key frame image acquisition method or the key frame image three-dimensional reconstruction method according to any of the above embodiments, based on instructions stored in the memory 71.
The memory 71 may include, for example, a system memory, a fixed nonvolatile storage medium, and the like. The system memory stores, for example, an operating system, an application program, a boot loader, a database, and other programs.
Fig. 8 shows a block diagram of further embodiments of the electronic device of the present disclosure.
As shown in fig. 8, the electronic apparatus 8 of this embodiment includes: a memory 810 and a processor 820 coupled to the memory 810, the processor 820 being configured to execute the key frame image acquisition method or the key frame image three-dimensional reconstruction method in any of the above embodiments, based on instructions stored in the memory 810.
The memory 810 may include, for example, system memory, a fixed nonvolatile storage medium, and the like. The system memory stores, for example, an operating system, an application program, a boot loader, and other programs.
The electronic device 8 may also include an input-output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, 850, as well as the memory 810 and the processor 820, may be connected, for example, by a bus 860. The input-output interface 830 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, a touch screen, a microphone, and a speaker. The network interface 840 provides a connection interface for various networking devices. The storage interface 850 provides a connection interface for external storage devices such as an SD card and a USB flash disk.
Fig. 9 illustrates a block diagram of some embodiments of a keyframe image acquisition system of the present disclosure.
As shown in fig. 9, the key frame image acquisition system 9 includes: a key frame image acquisition device 91 for executing the key frame image acquisition method in any of the above embodiments; and an image acquisition device 92 for acquiring multi-frame depth images of the target object at different shooting positions.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media having computer-usable program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
So far, the acquisition method of a key frame image, the acquisition apparatus of a key frame image, the acquisition system of a key frame image, the three-dimensional reconstruction method of a key frame image, the three-dimensional reconstruction apparatus of a key frame image, and the non-volatile computer-readable storage medium according to the present disclosure have been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
The method and system of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.