Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a three-dimensional modeling fusion method and device that solve the problems of high modeling cost and low modeling efficiency in the prior art.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
In a first aspect, an embodiment of the present invention provides a three-dimensional modeling fusion method, including: acquiring a plurality of sampling images, and establishing a live-action three-dimensional model of a target object in a first format according to the plurality of sampling images;
acquiring multiple pictures of a key area of the sampling images, and establishing a three-dimensional model of the key area in a second format according to the pictures of the key area, wherein the second format is different from the first format;
and fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object.
Further, the fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object includes:
deleting the key area in the live-action three-dimensional model to obtain a remaining live-action three-dimensional model;
and splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fusing the three-dimensional model of the key area and the edges of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key area and the remaining live-action three-dimensional model are triangulation network models;
the splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model and fusing the three-dimensional model of the key area and the edges of the remaining live-action three-dimensional model to obtain the optimized model of the target object includes:
splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model;
and fusing the edge triangle vertices of the three-dimensional model of the key area with the edge triangle vertices of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the acquiring a plurality of sampling images and establishing a live-action three-dimensional model of the target object in the first format according to the plurality of sampling images includes:
acquiring a plurality of sampling images, adding the plurality of sampling images into a coordinate system of control points, and performing an aerial triangulation (space-three densification) operation to obtain exterior orientation elements of the plurality of sampling images, wherein the exterior orientation elements describe the image poses;
generating a white model of the target object according to the exterior orientation elements of the sampling images;
then obtaining corresponding points (homonymous points) of the plurality of sampling images according to an image matching algorithm;
generating the target object corresponding to the plurality of sampling images according to the corresponding points of the plurality of sampling images;
calculating texture information of the target object, and mapping the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
Further, the acquiring of multiple pictures of the key area in the sampling images and the establishing of the three-dimensional model of the key area in the second format according to the pictures of the key area include:
acquiring pictures of key areas in a plurality of sampling images and generating contour lines of the key areas in each sampling image;
generating a white model of the key area according to the contour line;
and mapping the texture information of the key area to a white model of the key area to generate the three-dimensional model of the key area in the second format.
In a second aspect, an embodiment of the present invention further provides a three-dimensional modeling fusion apparatus, including: a first acquisition module, a second acquisition module and a processing module, wherein the first acquisition module is used for acquiring a plurality of sampling images and establishing a live-action three-dimensional model of a target object in a first format according to the plurality of sampling images;
the second acquisition module is used for acquiring multiple pictures of a key area of the sampling images and establishing a three-dimensional model of the key area in a second format according to the pictures of the key area, wherein the second format is different from the first format;
and the processing module is used for fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object.
Further, the processing module is specifically configured to delete the key area in the live-action three-dimensional model to obtain a remaining live-action three-dimensional model; and splice the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fuse the three-dimensional model of the key area and the edges of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key area and the remaining live-action three-dimensional model are triangulation network models;
the processing module is specifically configured to splice the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fuse the three-dimensional model of the key area and the edges of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the first obtaining module is specifically configured to obtain a plurality of sampling images, add the plurality of sampling images into a coordinate system of control points, and obtain exterior orientation elements of the sampling images through an aerial triangulation (space-three densification) operation, where the exterior orientation elements describe the image poses; generate a white model of the target object according to the exterior orientation elements of the sampling images; then obtain corresponding points (homonymous points) of the plurality of sampling images according to an image matching algorithm; generate the target object corresponding to the plurality of sampling images according to the corresponding points; and calculate texture information of the target object and map the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
Further, the second obtaining module is specifically configured to obtain multiple pictures of the key area in the sampling images, and generate a contour line of the key area in each sampling image;
generating a white model of the key area according to the contour line; and mapping the texture information of the key area to the white model of the key area to generate the three-dimensional model of the key area in the second format.
The invention has the beneficial effects that:
the three-dimensional modeling fusion method provided by the invention comprises the following steps: acquiring a plurality of sampling images, and establishing a real-scene three-dimensional model of a target object in a first format according to the plurality of sampling images. Acquiring multiple key area pictures of the sampling image, and establishing a three-dimensional model of the key area in a second format according to the key area pictures, wherein the second format and the first format are different in format. And then fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object. The three-dimensional modeling fusion method provided by the invention only needs to finely model the picture of the key area, can realize the integration of models with two different formats, and has low modeling cost and high rate.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention.
Fig. 1 is a first flowchart of a three-dimensional modeling fusion method provided by the present application. As shown in fig. 1, the present invention provides a three-dimensional modeling fusion method, including:
S110: acquiring a plurality of sampling images, and establishing a live-action three-dimensional model of a target object in a first format according to the plurality of sampling images.
The multiple sampling images in this embodiment can be obtained by oblique photography. Oblique photography is a technology developed in the international surveying and mapping field in recent years, in which multiple cameras are mounted on the same aircraft to acquire images from different angles, such as vertical and oblique views. For example, when the aircraft flies horizontally, one camera is parallel to the ground while the other cameras form certain angles with the ground, so that different collected images are obtained.
When the aircraft collects terrain and landform images, the collected images are stored in a memory of the aircraft; after image collection is completed, the plurality of sampling images are exported from the memory to an application terminal for processing. The application terminal may be a desktop computer, a notebook computer, a tablet computer, or a mobile phone; the specific terminal form is not limited herein.
Further, after the exported collected images are processed by corresponding software, a live-action three-dimensional model of the landform and terrain photographed by the aircraft is obtained, and the live-action three-dimensional model is exported in a first format. The corresponding software may be PhotoScan, PhotoMesh, ContextCapture, etc., and the first format may be xml, kml, osgb, etc. The specific application software and export format are not limited in this embodiment, as long as the plurality of collected images can be processed into a live-action three-dimensional model.
S120: acquiring pictures of the key area in the plurality of sampling images, and establishing a three-dimensional model of the key area in a second format according to the pictures of the key area.
Because the live-action three-dimensional model obtained through the processing of S110 does not support individualization (singling out objects), which is unfavorable for later application expansion and attribute attachment, the information contained in the live-action three-dimensional model needs to be finely processed. This information includes: buildings, terrain, greenery, street lights, other objects, and the like. If all the information contained in the live-action three-dimensional model were processed, problems such as a long construction period, high cost and wasted resources would result. For some projects, only the key areas need fine three-dimensional modeling, which shortens the time and reduces the cost.
Therefore, in this embodiment, only the key area is finely modeled. Before the fine modeling, pictures of the key area are acquired by taking photographs manually in the field, and the pictures of the key area are imported into corresponding software for processing, so that a fine three-dimensional model of the key area is obtained and exported in the second format.
It should be noted that, in the three-dimensional modeling method provided in this embodiment, the first format is different from the second format. The software for performing the fine three-dimensional modeling on the key area may be DP-Modeler, 3ds Max, etc., and the second format may be obj, dwg, iges, etc. The specific application software for fine modeling and the second format are not limited in this embodiment, as long as a fine three-dimensional model of the key area can be established.
S130: fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object.
It should be noted that, in this embodiment, the software used to fuse the live-action three-dimensional model with the three-dimensional model of the key area should be compatible with both the first format and the second format; for example, it may be SuperMap or Skyline software. The specific compatible formats are not limited in this embodiment.
The three-dimensional modeling fusion method provided by this embodiment establishes a live-action three-dimensional model of the target object, performs fine three-dimensional modeling on the pictures of the key area, and fuses the obtained live-action three-dimensional model with the three-dimensional model of the key area. The method has the advantages of high efficiency, high realism, high precision, low cost, a fine and attractive result, support for editing individual objects, and ease of later application expansion and attribute attachment.
Referring to fig. 2, fig. 2 is a second schematic flowchart of the three-dimensional modeling fusion method provided in the present application.
Fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object includes the following steps:
S210: deleting the key area in the live-action three-dimensional model to obtain the remaining live-action three-dimensional model.
The obtained live-action three-dimensional model is divided into modules, and the area in which the fine three-dimensional model needs to be established is deleted from the live-action three-dimensional model to obtain the remaining live-action three-dimensional model.
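The patent does not prescribe how the deletion is implemented. Purely as an illustrative sketch, assuming the live-action model is available as a triangle mesh and the key area is described by a two-dimensional polygon footprint in ground coordinates, the step can be realized by discarding every triangle whose centroid falls inside the footprint; all names below are illustrative:

```python
# Illustrative sketch of S210 (not the patent's implementation): remove the
# key area from a live-action triangle mesh given its 2-D polygon footprint.
import numpy as np
from matplotlib.path import Path

def delete_key_area(vertices, faces, footprint_xy):
    """Return the faces of the remaining live-action mesh.

    vertices:     (N, 3) array of x, y, z coordinates
    faces:        (M, 3) array of vertex indices per triangle
    footprint_xy: (K, 2) outline of the key area in ground coordinates
    """
    footprint = Path(footprint_xy)
    centroids = vertices[faces].mean(axis=1)             # (M, 3) triangle centroids
    inside = footprint.contains_points(centroids[:, :2]) # test in the ground plane
    return faces[~inside]                                # keep triangles outside

# Toy usage: one triangle inside the key area, one far outside.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [9, 9, 0], [10, 9, 0], [9, 10, 0]], float)
tris = np.array([[0, 1, 2], [3, 4, 5]])
print(delete_key_area(verts, tris, [(-1, -1), (2, -1), (2, 2), (-1, 2)]))
```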
S220: splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fusing the three-dimensional model of the key area and the edges of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key area and the remaining live-action three-dimensional model are triangulation network models.
A triangulation network is one form of horizontal control network; it is formed by connecting a number of triangles and is used to represent the undulation of the ground. By collecting discrete ground points, a triangulation network model is generated to simulate the terrain of the area photographed by the aircraft, so that a user can conveniently analyze landform and topographic features from the model.
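For illustration only (the patent does not specify a construction algorithm), a triangulation network model of this kind can be generated from discrete ground points with a two-dimensional Delaunay triangulation; the synthetic terrain below is an assumption:

```python
# Illustrative sketch: build a triangulation network (TIN) from discrete
# ground points via 2-D Delaunay triangulation, one common choice.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(200, 2))          # planimetric point positions
z = np.sin(xy[:, 0] / 15) * 5 + xy[:, 1] * 0.1   # synthetic terrain heights

tin = Delaunay(xy)                   # triangulate in the horizontal plane
faces = tin.simplices                # (M, 3) vertex indices per triangle
vertices = np.column_stack([xy, z])  # lift to 3-D: the TIN surface
print(f"TIN with {len(vertices)} vertices and {len(faces)} triangles")
```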
Splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model and fusing the three-dimensional model of the key area and the edges of the remaining live-action three-dimensional model to obtain the optimized model of the target object includes: splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model.
The specific manner is to fuse the edge triangle vertices of the three-dimensional model of the key area with the edge triangle vertices of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
It should be noted that, when the area in which the fine three-dimensional model is to be established is deleted from the live-action three-dimensional model, the edges of the remaining live-action three-dimensional model may be uneven; for example, a road in the image may become uneven, and some buildings may be cut in half. In this case, the edges need to be connected and the buildings on the edges flattened, so that the live-action three-dimensional model and the key-area three-dimensional model are fused and the optimized target object model is obtained.
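One possible realization of this edge fusion, sketched under the assumption that both models are triangulation networks: identify the boundary vertices of each mesh (vertices on edges used by only one triangle) and snap each boundary vertex of the key-area model to its nearest counterpart on the remaining live-action model. The distance threshold is an illustrative assumption:

```python
# Hedged sketch of the edge fusion step: snap boundary vertices of the
# key-area TIN onto nearby boundary vertices of the remaining live-action
# TIN so that the two triangulations share a common seam.
from collections import Counter
import numpy as np
from scipy.spatial import cKDTree

def boundary_vertices(faces):
    """Indices of vertices on the mesh boundary (edges used by one triangle)."""
    edge_count = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(e))] += 1
    return sorted({v for e, n in edge_count.items() if n == 1 for v in e})

def fuse_edges(key_verts, key_faces, rem_verts, rem_faces, max_gap=2.0):
    key_b = boundary_vertices(key_faces)
    rem_b = boundary_vertices(rem_faces)
    tree = cKDTree(rem_verts[rem_b])           # seam vertices of remaining mesh
    dist, idx = tree.query(key_verts[key_b])
    fused = key_verts.copy()
    for k, d, i in zip(key_b, dist, idx):
        if d <= max_gap:                       # only close small seam gaps
            fused[k] = rem_verts[rem_b[i]]     # snap to the matching seam vertex
    return fused
```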
As shown in fig. 3, fig. 3 is a third schematic flow chart of a three-dimensional modeling fusion method provided by the present application.
Optionally, this embodiment provides a three-dimensional modeling fusion method in which the process of obtaining a plurality of sampling images and establishing a live-action three-dimensional model of a target object in a first format according to the plurality of sampling images may be performed in the ContextCapture modeling system, where the first format is, for example, the osgb (OpenSceneGraph binary) format. The specific process is as follows:
S310: acquiring a plurality of sampling images, adding the plurality of sampling images into a coordinate system of control points, and performing an aerial triangulation (space-three densification) operation to obtain exterior orientation elements of the plurality of sampling images, wherein the exterior orientation elements describe the image poses.
The aerial triangulation (space-three densification) process comprises: loading the plurality of sampling images and the control points in the ContextCapture modeling system, and calling sub-software (such as HANGF software) to perform a bundle block adjustment, in which the bundle of rays formed by one image is taken as the adjustment unit and the collinearity equations of central projection are taken as its basic equations. Through rotation and translation of each bundle in space, the common rays between models achieve optimal intersection and the whole block is optimally fitted into the control point coordinate system, so that the spatial relationships between ground objects are restored and the exterior orientation elements of the plurality of sampling images are obtained.
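The collinearity equations mentioned above relate an image point to its ground point through the exterior orientation of the image. A minimal sketch follows; the omega-phi-kappa rotation convention and unit focal length are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch of the collinearity equations that serve as the basic
# equations of the bundle block adjustment.
import numpy as np

def rotation(omega, phi, kappa):
    """R = Rz(kappa) @ Ry(phi) @ Rx(omega), one common photogrammetric convention."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity(ground_pt, cam_center, angles, f=1.0):
    """Project a ground point into image coordinates (x, y) by central projection."""
    u, v, w = rotation(*angles).T @ (np.asarray(ground_pt) - np.asarray(cam_center))
    return -f * u / w, -f * v / w

# Residuals of the form observed_xy - collinearity(...) over all rays are
# what the bundle block adjustment minimizes to recover exterior orientation.
print(collinearity([10, 5, 0], [0, 0, 100], (0.0, 0.0, 0.0)))
```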
S320: generating a white model of the target object according to the exterior orientation elements of the sampling images.
S330: obtaining corresponding points (homonymous points) of the plurality of sampling images according to an image matching algorithm.
Corresponding points of the multiple sampling images are automatically matched in the ContextCapture modeling system according to a high-precision image matching algorithm; the corresponding points are the same ground features appearing in multiple sampling images. More feature points are then extracted from the images to form a dense point cloud, so that the details of the ground features are expressed more accurately. The more complex the ground features and the denser the buildings, the higher the point density; conversely, the points are relatively sparse.
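The patent does not name the matching algorithm. Purely as an illustrative stand-in, corresponding points between two overlapping sampling images can be found with SIFT feature matching in OpenCV; the file names are hypothetical:

```python
# Illustrative stand-in for the image matching step: find corresponding
# points between two overlapping images with SIFT features and a ratio test.
import cv2

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)
assert img1 is not None and img2 is not None, "sample images are assumed to exist"

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

# Each surviving match is a candidate corresponding (homonymous) point pair.
pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
print(f"{len(pairs)} candidate corresponding points")
```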
S340: generating the target object corresponding to the plurality of sampling images according to the corresponding points of the plurality of sampling images.
After the corresponding points of the multiple sampling images are matched, the multiple sampling images can be integrated into a complete target object model.
S350: calculating texture information of the target object, and mapping the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
Further, the obtained live-action three-dimensional model is exported in the osgb format from the ContextCapture modeling system.
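As a rough sketch of the idea behind S350 (the actual computation is performed inside the modeling software), one simple rule selects, for each white-model triangle, the sampling image that views it most head-on; the triangle's vertices can then be projected into that image with the collinearity equations above to obtain texture coordinates. The camera records and scoring rule are illustrative assumptions:

```python
# Hedged sketch: pick the texture source image for a white-model triangle
# by choosing the camera whose viewing direction is most anti-parallel to
# the triangle's face normal.
import numpy as np

def best_texture_source(triangle, cameras):
    """triangle: (3, 3) vertex array; cameras: dicts with a unit 'view_dir'."""
    normal = np.cross(triangle[1] - triangle[0], triangle[2] - triangle[0])
    normal /= np.linalg.norm(normal)
    scores = [-np.dot(cam["view_dir"], normal) for cam in cameras]
    return int(np.argmax(scores))

tri = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
cams = [{"view_dir": np.array([0.0, 0.0, -1.0])},   # nadir view
        {"view_dir": np.array([1.0, 0.0, 0.0])}]    # oblique side view
print(best_texture_source(tri, cams))  # 0: the nadir view faces this flat triangle
```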
As shown in fig. 4, fig. 4 is a fourth schematic flowchart of the three-dimensional modeling fusion method provided by the present application.
Optionally, this embodiment provides a three-dimensional modeling fusion method in which the process of obtaining pictures of the key area in a plurality of sampling images and establishing a three-dimensional model of the key area in a second format according to the pictures of the key area may be performed in the DP-Modeler software, where the second format takes the exported obj format as an example. The specific process is as follows:
S410: acquiring pictures of the key area in the plurality of sampling images, and generating the contour line of the key area in each sampling image.
It should be noted that the pictures of the key area in the sampling images are obtained in this embodiment by taking photographs manually in the field. The number of pictures acquired in the field is determined subjectively: the part considered to be a key area requiring fine modeling is photographed manually, and the photographs are imported into the DP-Modeler software to generate the contour lines of the key area.
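The contour lines themselves are generated inside DP-Modeler. Purely to illustrate the concept, the sketch below extracts and simplifies an outline from a binary mask of a key-area photograph with OpenCV; the file name, threshold, and tolerance are assumptions:

```python
# Conceptual illustration only (not DP-Modeler's method): extract the
# dominant outline of a key area from a grayscale photograph.
import cv2

img = cv2.imread("key_area_photo.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
assert img is not None, "sample photograph is assumed to exist"

_, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outline = max(contours, key=cv2.contourArea)        # largest outline = key area
simplified = cv2.approxPolyDP(outline, epsilon=2.0, closed=True)
print(f"key-area contour with {len(simplified)} vertices")
```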
S420: generating a white model of the key area according to the contour line.
The DP-Modeler software is operated to automatically generate the white model corresponding to the key area.
S430: mapping the texture information of the key area onto the white model of the key area to generate the three-dimensional model of the key area in the second format.
The three-dimensional model of the key area can be obtained through the above process, and the obtained three-dimensional model of the key area is exported in the obj format from the DP-Modeler software.
Further, the system used in this embodiment to fuse the live-action three-dimensional model and the key-area three-dimensional model is the SuperMap system, which is compatible with both the osgb and obj formats; the two different formats therefore do not need to be converted into a unified format, which improves processing efficiency.
Fig. 5 is a schematic diagram of a three-dimensional modeling fusion apparatus provided in the present application. As shown in fig. 5, the apparatus specifically includes: a first obtaining module 501, a second obtaining module 502 and a processing module 503. Wherein,
the first obtaining module 501 is configured to obtain a plurality of sampling images, and establish a live-action three-dimensional model of a target object in a first format according to the plurality of sampling images.
The second obtaining module 502 is configured to obtain pictures of the key area in the multiple sampling images, and establish a three-dimensional model of the key area in a second format according to the pictures of the key area, where the second format is different from the first format.
The processing module 503 is configured to fuse the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain an optimized model of the target object.
Optionally, the processing module 503 is specifically configured to delete the key area in the live-action three-dimensional model to obtain a remaining live-action three-dimensional model, splice the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fuse the three-dimensional model of the key area and the edges of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Optionally, the three-dimensional model of the key area and the remaining live-action three-dimensional model are triangulation network models. The processing module 503 is specifically configured to splice the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fuse the three-dimensional model of the key area and the edges of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
optionally, the first obtaining module 501 is specifically configured to obtain multiple sampling images, add the multiple sampling images into a coordinate system of a control point, and obtain external orientation elements of the sampling images through space-three encryption operation, where the external orientation elements of the sampling images are target object poses. And generating a white mode of the target object according to the external orientation elements of the sampled image. And then according to an image matching algorithm, obtaining the homonymous points of the multiple sampled images. And generating target objects corresponding to the plurality of sampling images according to the homonymous points of the plurality of sampling images. Calculating texture information of the target object, and mapping the texture information to a white mold of the target object to obtain a real three-dimensional model of the target object in the first format.
Optionally, the second obtaining module 502 is specifically configured to obtain pictures of the key area in a plurality of sampling images, generate a contour line of the key area in each sampling image, generate a white model of the key area according to the contour line, and map the texture information of the key area onto the white model of the key area to generate the three-dimensional model of the key area in the second format.
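The module structure described above maps naturally onto code. A minimal sketch follows; the class and method names are illustrative, and the method bodies are placeholders rather than the patent's implementation:

```python
# Illustrative skeleton of the apparatus of fig. 5: first obtaining module
# 501, second obtaining module 502, and processing module 503.
class FirstAcquisitionModule:          # module 501
    def build_live_action_model(self, sampling_images):
        """Build the live-action 3-D model of the target object (first format)."""
        raise NotImplementedError

class SecondAcquisitionModule:         # module 502
    def build_key_area_model(self, key_area_pictures):
        """Build the fine 3-D model of the key area (second format)."""
        raise NotImplementedError

class ProcessingModule:                # module 503
    def fuse(self, live_action_model, key_area_model):
        """Delete the key area, splice in the fine model, then fuse the edges."""
        raise NotImplementedError

class ModelingFusionApparatus:
    def __init__(self):
        self.first_acquisition = FirstAcquisitionModule()
        self.second_acquisition = SecondAcquisitionModule()
        self.processing = ProcessingModule()
```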
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. As another example, when one of the above modules is implemented in the form of a processing element invoking program code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. As yet another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 6 is a schematic diagram of a three-dimensional modeling fusion apparatus according to an embodiment of the present application. The apparatus may be integrated in a terminal device or a chip of the terminal device, and the terminal may be a computing device with an image processing function.
The device includes: a memory 601 and a processor 602.
The memory 601 is used for storing programs, and the processor 602 calls the programs stored in the memory 601 to execute the above method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
Optionally, the invention further provides a program product, for example a computer-readable storage medium, comprising a program which, when executed by a processor, carries out the above method embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.