Disclosure of Invention
The invention aims to provide a three-dimensional modeling fusion method and device to solve the problems of high modeling cost and low modeling efficiency in the prior art.
In order to achieve the above purpose, the technical solution adopted by the embodiments of the invention is as follows:
In a first aspect, an embodiment of the present invention provides a three-dimensional modeling fusion method, including: acquiring a plurality of sampled images, and establishing a real-scene three-dimensional model of a target object in a first format according to the plurality of sampled images;
acquiring pictures of the key area in the plurality of sampled images, and establishing a three-dimensional model of the key area in a second format according to the key area pictures, wherein the second format and the first format are different formats;
and fusing the real-scene three-dimensional model of the target object and the three-dimensional model of the key area to obtain an optimized model of the target object.
Further, fusing the real-scene three-dimensional model of the target object and the three-dimensional model of the key area to obtain an optimized model of the target object includes:
deleting the key area from the real-scene three-dimensional model to obtain a remaining real-scene three-dimensional model;
splicing the three-dimensional model of the key area onto the remaining real-scene three-dimensional model, and fusing the edges of the three-dimensional model of the key area and the remaining real-scene three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key area and the remaining real-scene three-dimensional model are triangular network models;
splicing the three-dimensional model of the key area onto the remaining real-scene three-dimensional model and fusing the edges of the two models to obtain an optimized model of the target object includes the following steps:
splicing the three-dimensional model of the key area onto the remaining real-scene three-dimensional model;
and fusing the edge triangle points of the three-dimensional model of the key area with the edge triangle points of the remaining real-scene three-dimensional model to obtain the optimized model of the target object.
Further, acquiring a plurality of sampled images and building a real-scene three-dimensional model of the target object in the first format according to the plurality of sampled images includes:
acquiring a plurality of sampled images, adding the sampled images into a control-point coordinate system, and obtaining exterior orientation elements of the plurality of sampled images through aerial triangulation (space-three densification), wherein the exterior orientation elements describe the image poses;
generating a white model of the target object according to the exterior orientation elements of the sampled images;
obtaining homonymous points (tie points) of the plurality of sampled images according to an image matching algorithm;
generating the target object corresponding to the plurality of sampled images according to the homonymous points of the sampled images;
and calculating texture information of the target object, and mapping the texture information onto the white model of the target object to obtain a real-scene three-dimensional model of the target object in the first format.
Further, acquiring the pictures of the key areas in the plurality of sampled images and establishing a three-dimensional model of the key area in the second format according to the pictures of the key areas includes:
acquiring pictures of the key areas in the plurality of sampled images, and generating contour lines of the key areas in each sampled image;
generating a white model of the key area according to the contour lines;
and mapping texture information of the key area onto the white model of the key area to generate a three-dimensional model of the key area in the second format.
In a second aspect, an embodiment of the present invention further provides a three-dimensional modeling fusion apparatus, including: a first acquisition module, configured to acquire a plurality of sampled images and establish a real-scene three-dimensional model of the target object in a first format according to the plurality of sampled images;
a second acquisition module, configured to acquire pictures of the key area in the plurality of sampled images and establish a three-dimensional model of the key area in a second format according to the key area pictures, wherein the second format and the first format are different formats;
and a processing module, configured to fuse the real-scene three-dimensional model of the target object and the three-dimensional model of the key area to obtain an optimized model of the target object.
Further, the processing module is specifically configured to delete the key area from the real-scene three-dimensional model to obtain a remaining real-scene three-dimensional model; splice the three-dimensional model of the key area onto the remaining real-scene three-dimensional model, and fuse the edges of the two models to obtain the optimized model of the target object.
Further, the three-dimensional model of the key area and the remaining real-scene three-dimensional model are triangular network models;
the processing module is specifically configured to splice the three-dimensional model of the key area onto the remaining real-scene three-dimensional model, and fuse the three-dimensional model of the key area with the edges of the remaining real-scene three-dimensional model to obtain an optimized model of the target object.
Further, the first acquisition module is specifically configured to acquire a plurality of sampled images, add the sampled images into a control-point coordinate system, and obtain the exterior orientation elements of the sampled images through aerial triangulation, where the exterior orientation elements describe the image poses; generate a white model of the target object according to the exterior orientation elements of the sampled images; obtain homonymous points of the plurality of sampled images according to an image matching algorithm; generate the target object corresponding to the plurality of sampled images according to the homonymous points; and calculate texture information of the target object and map the texture information onto the white model of the target object to obtain a real-scene three-dimensional model of the target object in the first format.
Further, the second acquisition module is specifically configured to acquire pictures of the key areas in the plurality of sampled images and generate contour lines of the key areas in each sampled image;
generate a white model of the key area according to the contour lines; and map texture information of the key area onto the white model of the key area to generate a three-dimensional model of the key area in the second format.
The beneficial effects of the invention are as follows:
The three-dimensional modeling fusion method provided by the invention includes the following steps: acquiring a plurality of sampled images, and establishing a real-scene three-dimensional model of the target object in the first format according to the plurality of sampled images; acquiring pictures of the key area in the sampled images, and establishing a three-dimensional model of the key area in a second format according to the key area pictures, wherein the second format and the first format are different; and fusing the real-scene three-dimensional model of the target object and the three-dimensional model of the key area to obtain an optimized model of the target object. The method only requires fine modeling of the key area pictures and can fuse two models of different formats, so the modeling cost is low and the modeling efficiency is high.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention.
Fig. 1 is a schematic flow chart of a three-dimensional modeling fusion method provided in the present application. As shown in fig. 1, the present invention provides a three-dimensional modeling fusion method, including:
S110: Acquiring a plurality of sampled images, and establishing a real-scene three-dimensional model of the target object in the first format according to the plurality of sampled images.
The plurality of sampled images acquired in this embodiment may be obtained by oblique photography. Oblique photography is an advanced technology developed in recent years in the international surveying and mapping field: multiple imaging devices are mounted on the same aircraft and images are captured from different angles, such as vertical and oblique, so that different views of the scene are acquired. For example, when the aircraft flies horizontally, one camera is kept parallel to the ground while the other cameras are set at an angle to the ground, yielding images from different angles.
When the aircraft collects landform images, the collected images are stored in a memory on the aircraft; after image collection is completed, the plurality of sampled images are exported from the memory to an application terminal for processing. The application terminal may be a desktop computer, a notebook computer, a tablet computer or a mobile phone; the specific form of the terminal is not limited herein.
Further, the exported images are processed by corresponding software to obtain a real-scene three-dimensional model of the topography and landform photographed by the aircraft, and the real-scene three-dimensional model is exported into the first format. The corresponding software may be PhotoScan, PhotoMesh, ContextCapture Center, etc., and the first format may be the xml, kml or osgb format, etc. The specific application software and export format are not limited in this embodiment, as long as the plurality of acquired images can be processed into a real-scene three-dimensional model.
S120: and acquiring a plurality of key region pictures of the sampled images, and establishing a three-dimensional model of the key region in a second format according to the key region pictures.
Because the real-scene three-dimensional model obtained in S110 is not monomerized (individual objects cannot be selected or edited separately), later application expansion and attribute attachment are inconvenient. The information contained in the real-scene three-dimensional model therefore needs fine processing; this information includes buildings, terrain, green plants, street lamps and other objects. If all the information contained in the real-scene three-dimensional model were processed, problems such as a long construction period and wasted resources would arise. For some projects, fine three-dimensional modeling is needed only for the key areas, which shortens the time and reduces the cost.
Therefore, in this embodiment only the key area is finely modeled. Before fine modeling, pictures of the key area are collected by manual field photography; the key area pictures are then imported into corresponding software for processing to obtain a fine three-dimensional model of the key area, which is exported into the second format.
It should be noted that, in the three-dimensional modeling method provided in this embodiment, the first format is different from the second format. The software for fine three-dimensional modeling of the key area may be DP-Modeler, 3ds Max, etc., and the second format may be the obj, dwg or iges format, etc. The specific application software for fine modeling and the second format are not limited in this embodiment, as long as a fine three-dimensional model of the key area can be built.
S130: and fusing the live three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object.
It should be noted that, when fusing the real-scene three-dimensional model and the three-dimensional model of the key area in this embodiment, the software used to obtain the optimized model must be compatible with both the first format and the second format; for example, it may be SuperMap software or Skyline software. The specific compatible formats are not limited in this embodiment.
According to the three-dimensional modeling fusion method provided by this embodiment, a real-scene three-dimensional model of the target object is first established, fine three-dimensional modeling is then performed on pictures of the key area, and finally the real-scene three-dimensional model and the three-dimensional model of the key area are fused. The method is efficient and low-cost, and the resulting model is realistic, precise, fine and attractive, supports monomerized editing, and is easy to extend and attach attributes to in later applications.
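To make the overall flow concrete, the following Python sketch strings the three steps together. All function names and file paths here are hypothetical placeholders standing in for the modeling software described above; they are not a real API.

```python
# Hypothetical outline of the fusion flow (S110-S130); the helper
# functions are placeholders for the modeling software, not a real API.

def build_reality_model(sampled_images):
    """S110: reconstruct a real-scene 3D model from oblique images and
    export it in the first format (e.g. osgb)."""
    ...

def build_key_area_model(key_area_pictures):
    """S120: finely model the key area from field photographs and export
    it in the second format (e.g. obj)."""
    ...

def fuse_models(reality_model, key_area_model):
    """S130: delete the key area from the reality model, splice in the
    fine model, and fuse the boundary triangles."""
    ...

sampled_images = ["flight/img_0001.jpg", "flight/img_0002.jpg"]  # oblique photographs
key_area_pictures = ["field/key_area_01.jpg"]                    # manual field photographs
optimized_model = fuse_models(build_reality_model(sampled_images),
                              build_key_area_model(key_area_pictures))
```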
Referring to fig. 2, fig. 2 is a second schematic flowchart of the three-dimensional modeling fusion method provided in the present application.
Fusing the real-scene three-dimensional model of the target object and the three-dimensional model of the key area to obtain an optimized model of the target object includes the following steps:
S210: Deleting the key area from the real-scene three-dimensional model to obtain a remaining real-scene three-dimensional model.
The obtained real-scene three-dimensional model is divided into modules, and the region for which a fine three-dimensional model is to be built is deleted from the real-scene three-dimensional model, yielding the remaining real-scene three-dimensional model.
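As an illustration of this deletion step, the sketch below assumes the real-scene model is loaded as numpy arrays (an n-by-3 vertex array and an m-by-3 triangle index array) and that the key area is given as a closed 2D polygon in ground XY coordinates; triangles whose centroids fall inside the footprint are dropped. Production tools operate on tiled osgb data instead, so this is only a minimal sketch.

```python
import numpy as np
from matplotlib.path import Path

def delete_key_area(vertices, faces, key_area_polygon):
    """Drop every triangle whose centroid lies inside the key-area
    footprint, returning the remaining real-scene mesh.
    vertices: (n, 3) float array; faces: (m, 3) int array;
    key_area_polygon: (k, 2) ordered XY vertices of a closed polygon."""
    footprint = Path(key_area_polygon)
    centroids = vertices[faces].mean(axis=1)[:, :2]  # (m, 2) triangle centroids, XY only
    keep = ~footprint.contains_points(centroids)     # True for triangles outside the key area
    return vertices, faces[keep]
```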
S220: splicing the three-dimensional model of the key area to the residual live-action three-dimensional model, and fusing the three-dimensional model of the key area and the edges of the residual live-action three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key area and the remaining real-scene three-dimensional model are triangular network models.
A triangular network is one form of horizontal control network; it is composed of connected triangles and is used to represent the relief of the ground. By collecting discrete ground points, a triangular network model is generated to simulate the terrain of the area photographed by the aircraft, so that users can analyze features such as landforms and topography from the model.
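For example, a triangular network (TIN) can be generated from discrete ground points with a standard Delaunay triangulation; the sketch below uses SciPy on synthetic (x, y, elevation) samples standing in for the collected ground points.

```python
import numpy as np
from scipy.spatial import Delaunay

# Discrete ground points sampled from the survey area: columns are x, y, elevation.
rng = np.random.default_rng(0)
ground_points = rng.uniform(0, 100, size=(500, 3))

# Triangulate in the horizontal plane; each row of `simplices` is one
# triangle of the resulting TIN, indexing into `ground_points`.
tin = Delaunay(ground_points[:, :2])
print(f"{len(tin.simplices)} triangles over {len(ground_points)} points")
```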
Splicing the three-dimensional model of the key area onto the remaining real-scene three-dimensional model and fusing their edges to obtain an optimized model of the target object includes the following steps: splicing the three-dimensional model of the key area onto the remaining real-scene three-dimensional model;
and fusing the edge triangle points of the three-dimensional model of the key area with the edge triangle points of the remaining real-scene three-dimensional model to obtain the optimized model of the target object.
It should be noted that when the region requiring a fine three-dimensional model is deleted from the real-scene three-dimensional model, the edges of the remaining real-scene three-dimensional model may be uneven: for example, a road in the image may become ragged, or only half of a building may remain. The edges therefore need to be trimmed and the buildings on the edges flattened, so that the real-scene three-dimensional model and the three-dimensional model of the key area can be fused into the optimized model of the target object.
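A minimal sketch of this edge fusion follows, assuming both meshes are available as numpy vertex arrays and the index arrays of their boundary ("edge triangle") vertices are already known: each boundary vertex of the key-area model is snapped onto the nearest boundary vertex of the remaining model, so the two triangular networks share a common seam. The `tolerance` parameter is an assumption of this sketch, not something specified by the embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_boundary(key_vertices, key_boundary_idx,
                  rest_vertices, rest_boundary_idx, tolerance=0.5):
    """Snap each boundary vertex of the key-area mesh onto the nearest
    boundary vertex of the remaining real-scene mesh. Vertices farther
    than `tolerance` (in model units) are left untouched.
    *_vertices: (n, 3) float arrays; *_boundary_idx: int index arrays."""
    tree = cKDTree(rest_vertices[rest_boundary_idx])
    dist, nearest = tree.query(key_vertices[key_boundary_idx])
    snap = dist <= tolerance
    key_vertices[key_boundary_idx[snap]] = \
        rest_vertices[rest_boundary_idx[nearest[snap]]]
    return key_vertices
```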
As shown in fig. 3, fig. 3 is a third schematic flowchart of the three-dimensional modeling fusion method provided in the present application.
Optionally, the process of acquiring a plurality of sampled images and establishing a real-scene three-dimensional model of the target object in the first format according to the plurality of sampled images may be performed in the ContextCapture Center modeling system; here the first format is exemplified by the osgb format. The specific process is as follows:
S310: Acquiring a plurality of sampled images, adding the sampled images into a control-point coordinate system, and obtaining the exterior orientation elements of the plurality of sampled images through aerial triangulation, wherein the exterior orientation elements describe the image poses.
In the aerial triangulation process, the plurality of sampled images and the control points are loaded into the ContextCapture Center modeling system, and invoked sub-software (such as HANGF software) performs a bundle block adjustment: the bundle of rays formed by each image is taken as the adjustment unit, with the collinearity equations of central projection as its basic equations. Through rotation and translation of each bundle in space, the common rays between models are made to intersect optimally, and the whole block is fitted into the control-point coordinate system, thereby restoring the spatial relationships between ground objects and yielding the exterior orientation elements of the plurality of sampled images.
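For reference, the collinearity equations of central projection that serve as the basic equations of the bundle adjustment take the standard photogrammetric form:

```latex
x - x_0 = -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}
                   {a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}, \qquad
y - y_0 = -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}
                   {a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}
```

where (x, y) are the image coordinates of a point, (x_0, y_0, f) are the interior orientation elements, (X, Y, Z) are the object-space coordinates of the corresponding ground point, (X_S, Y_S, Z_S) is the projection center, and a_i, b_i, c_i are the entries of the rotation matrix. The projection center together with the rotation angles constitutes exactly the exterior orientation elements solved by the adjustment.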
S320: and generating a white mode of the target object according to the external azimuth element of the sampling image.
S330: and obtaining the homonymy points of the plurality of sampled images according to an image matching algorithm.
And automatically matching homonymous points of the plurality of sampling images according to a high-precision image matching algorithm in a ContextCaptureentity modeling system, wherein the homonymous points are the same parts in the plurality of sampling images, and extracting more characteristic points from the images to form a dense point cloud, so that the details of the ground object are more accurately expressed. The more complex the ground features, the denser the building, the higher the dot density, and conversely, the more sparse.
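The matcher inside the modeling system is proprietary; as a generic illustration of homonymous-point matching, the following sketch uses OpenCV SIFT features with Lowe's ratio test on two overlapping sampled images.

```python
import cv2

def match_homonymous_points(image_path_a, image_path_b, ratio=0.75):
    """Find candidate homonymous (tie) points between two overlapping
    sampled images. A generic OpenCV sketch, not the modeling system's
    own high-precision matcher."""
    img_a = cv2.imread(image_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(image_path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, desc_a = sift.detectAndCompute(img_a, None)
    kp_b, desc_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_a, desc_b, k=2)
    # Keep a match only when it is clearly better than its runner-up.
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]
```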
S340: and generating target objects corresponding to the plurality of sampling images according to the homonymous points of the plurality of sampling images.
After the matching of the homonymous points affected by the multiple samples is completed, the multiple sampled images can be integrated into a complete target object model.
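To show how matched homonymous points become object-space points of the model, the sketch below triangulates them from two oriented images with OpenCV; the 3x4 projection matrices are assumed to have been composed from the interior and exterior orientation elements obtained earlier.

```python
import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    """Intersect the rays through homonymous points of two oriented images
    to recover 3D object points (one step of densifying the point cloud).
    P1, P2: 3x4 projection matrices; pts1, pts2: 2xN float arrays of
    matched pixel coordinates."""
    hom = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous points
    return (hom[:3] / hom[3]).T                      # Nx3 Euclidean points
```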
S350: and calculating texture information of the target object, and mapping the texture information to a white model of the target object to obtain a real-scene three-dimensional model of the target object in the first format.
Further, the obtained live-action three-dimensional model is exported into an osbg format in a ContextCaptureContmenter modeling system.
As shown in fig. 4, fig. 4 is a fourth schematic flowchart of the three-dimensional modeling fusion method provided in the present application.
Optionally, the process of acquiring pictures of key areas in the plurality of sampled images and establishing a three-dimensional model of the key area in the second format according to the key area pictures may be performed in DP-Modeler software; here the second format is exemplified by the obj format. The specific process is as follows:
S410: Acquiring pictures of the key areas in the plurality of sampled images, and generating contour lines of the key areas in each sampled image.
It should be noted that the pictures of the key areas in the sampled images in this embodiment are obtained by manual field photography. The number of pictures collected in the field is decided manually: the operator judges which part is a key area requiring fine modeling, photographs that area, and imports the resulting pictures into the DP-Modeler software to generate the contour lines of the key area.
S420: Generating a white model of the key area according to the contour lines.
The DP-Modeler software is operated to automatically generate the white model corresponding to the key area.
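As a toy illustration of how a closed contour becomes a white model, the sketch below extrudes an ordered XY contour into a prism of triangulated wall faces. Real DP-Modeler output is far richer; roof and floor capping are omitted here for brevity, and `base_z` and `height` are assumptions of the sketch.

```python
import numpy as np

def extrude_white_model(contour_xy, base_z, height):
    """Turn a closed key-area contour (ordered (n, 2) XY vertices) into a
    simple prismatic white model: a ring of vertical wall quads, each
    split into two triangles."""
    n = len(contour_xy)
    bottom = np.column_stack([contour_xy, np.full(n, base_z)])
    top = np.column_stack([contour_xy, np.full(n, base_z + height)])
    vertices = np.vstack([bottom, top])  # first n rows: bottom ring; next n: top ring
    faces = []
    for i in range(n):
        j = (i + 1) % n                  # wrap around the closed contour
        faces.append([i, j, n + j])      # lower triangle of the wall quad
        faces.append([i, n + j, n + i])  # upper triangle of the wall quad
    return vertices, np.array(faces)
```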
S430: and mapping the texture information of the key area to a white mode of the key area to generate a three-dimensional model of the key area in the second format.
The three-dimensional model of the key region can be obtained through the process, and the obtained three-dimensional model of the key region is exported in the DP-Moderler software to be obg format.
Further, the system for integrating the live-action three-dimensional model and the three-dimensional model of the key region used in the embodiment is a hypermap system (SuperMap), and the hypermap system can be compatible with the obtained osbg format and obg format at the same time, so that the two different types of formats do not need to be converted into a unified format, and the processing efficiency is improved.
Fig. 5 is a schematic diagram of a three-dimensional modeling fusion device provided in the present application. As shown in fig. 5, the apparatus specifically includes: a first acquisition module 501, a second acquisition module 502, and a processing module 503.
The first acquisition module 501 is configured to acquire a plurality of sampled images and establish a real-scene three-dimensional model of the target object in the first format according to the plurality of sampled images.
The second acquisition module 502 is configured to acquire pictures of the key area in the plurality of sampled images and establish a three-dimensional model of the key area in the second format according to the key area pictures, where the second format is different from the first format.
The processing module 503 is configured to fuse the real-scene three-dimensional model of the target object and the three-dimensional model of the key area to obtain an optimized model of the target object.
Optionally, the processing module 503 is specifically configured to delete the key area from the real-scene three-dimensional model to obtain a remaining real-scene three-dimensional model, splice the three-dimensional model of the key area onto the remaining real-scene three-dimensional model, and fuse the edges of the two models to obtain the optimized model of the target object.
Optionally, the three-dimensional model of the key area and the remaining real-scene three-dimensional model are triangular network models. The processing module 503 is further specifically configured to splice the three-dimensional model of the key area onto the remaining real-scene three-dimensional model and fuse the three-dimensional model of the key area with the edges of the remaining real-scene three-dimensional model to obtain an optimized model of the target object.
Optionally, the first acquisition module 501 is specifically configured to acquire a plurality of sampled images, add the sampled images into a control-point coordinate system, and obtain the exterior orientation elements of the sampled images through aerial triangulation, where the exterior orientation elements describe the image poses; generate a white model of the target object according to the exterior orientation elements of the sampled images; obtain homonymous points of the plurality of sampled images according to an image matching algorithm; generate the target object corresponding to the plurality of sampled images according to the homonymous points; and calculate texture information of the target object and map the texture information onto the white model of the target object to obtain a real-scene three-dimensional model of the target object in the first format.
Optionally, the second acquisition module 502 is specifically configured to acquire pictures of the key areas in the plurality of sampled images, generate contour lines of the key areas in each sampled image, generate a white model of the key area according to the contour lines, and map texture information of the key area onto the white model of the key area to generate a three-dimensional model of the key area in the second format.
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), one or more digital signal processors (Digital Signal Processor, abbreviated as DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), etc. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke the program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 6 is a schematic diagram of a three-dimensional modeling fusion device according to an embodiment of the present application. The apparatus may be integrated in a terminal device or a chip of the terminal device, and the terminal may be a computing device having an image processing function.
The device comprises: a memory 601 and a processor 602.
Thememory 601 is used for storing a program, and theprocessor 602 calls the program stored in thememory 601 to execute the above-described method embodiment. The specific implementation manner and the technical effect are similar, and are not repeated here.
Optionally, the present invention also provides a program product, such as a computer readable storage medium, comprising a program for performing the above-described method embodiments when being executed by a processor.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units as described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform some of the steps of the methods according to the embodiments of the invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media that can store program code.