
Three-dimensional modeling fusion method and device

Info

Publication number
CN109754463A
Authority
CN
China
Prior art keywords
dimensional model
key area
target object
dimensional
format
Prior art date
2019-01-11
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910025548.4A
Other languages
Chinese (zh)
Other versions
CN109754463B (en)
Inventor
潘雅静
车登科
马文斌
郑睿博
郭瑞隆
杜文志
何文元
王文敏
雷军龙
程文瑶
周颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Coal (Xi'an) Underground Space Technology Development Co Ltd
China Coal Survey & Remote Sensing Group Co Ltd
Original Assignee
China Coal (Xi'an) Underground Space Technology Development Co Ltd
China Coal Survey & Remote Sensing Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2019-01-11
Filing date
2019-01-11
Publication date
2019-05-14
Application filed by China Coal (Xi'an) Underground Space Technology Development Co Ltd and China Coal Survey & Remote Sensing Group Co Ltd
Priority to CN201910025548.4A
Publication of CN109754463A
Application granted
Publication of CN109754463B
Legal status: Active
Anticipated expiration

Abstract

The present invention provides a three-dimensional modeling fusion method and device, relating to the technical field of three-dimensional geographic information. The three-dimensional modeling fusion method comprises: acquiring a plurality of sampling images, and establishing a live-action three-dimensional model of a target object in a first format according to the plurality of sampling images; acquiring pictures of a key area from the plurality of sampling images, and establishing a three-dimensional model of the key area in a second format according to the key area pictures, wherein the second format and the first format are different formats; and fusing the live-action three-dimensional model of the target object with the three-dimensional model of the key area to obtain an optimized model of the target object. Only the pictures of the key area need fine modeling, and models in two different formats can be integrated, so the modeling cost is low and the efficiency is high.

Description

Three-dimensional modeling fusion method and device
Technical Field
The invention relates to the technical field of three-dimensional geographic information, in particular to a three-dimensional modeling fusion method and device.
Background
Oblique photography is a high-tech technique developed in the international surveying and mapping field in recent years. It breaks through the limitation of conventional photography, which can only shoot from a vertical angle: by carrying a plurality of sensors on the same flight platform and simultaneously acquiring images from five different angles (one vertical and four oblique), it rapidly obtains abundant top and side textures of buildings, truly reflects the surrounding conditions of ground objects, and presents the user with a realistic, intuitive world that accords with human vision.
Oblique photography in the prior art has the advantages of high efficiency, high precision, high realism and low cost, and when oblique photography modeling is carried out, the entire scene is modeled from all the oblique photography data, making the scene attractive and delicate.
However, some projects only require fine three-dimensional modeling of a central key area, and performing fine three-dimensional modeling on all scenes based on oblique photography modeling causes problems such as high modeling cost and low modeling efficiency.
Disclosure of Invention
The invention aims to provide a three-dimensional modeling fusion method and device that address the above defects, so as to solve the problems of high modeling cost and low modeling efficiency in the prior art.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a three-dimensional modeling fusion method, including: acquiring a plurality of sampling images, and establishing a live-action three-dimensional model of a target object in a first format according to the plurality of sampling images;
acquiring pictures of a key area from the plurality of sampling images, and establishing a three-dimensional model of the key area in a second format according to the key area pictures, wherein the second format and the first format are different formats;
and fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object.
Further, the fusing of the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object includes:
deleting the key area in the live-action three-dimensional model to obtain a remaining live-action three-dimensional model;
and splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fusing the edges of the three-dimensional model of the key area and the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key area and the remaining live-action three-dimensional model are triangulation network models;
the splicing of the three-dimensional model of the key area to the remaining live-action three-dimensional model and the fusing of the edges of the two models to obtain the optimized model of the target object includes:
splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model;
and fusing the edge triangular points of the three-dimensional model of the key area with the edge triangular points of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the acquiring a plurality of sampling images and establishing a live-action three-dimensional model of the target object in the first format according to the plurality of sampling images includes:
acquiring a plurality of sampling images, adding the plurality of sampling images into a coordinate system of control points, and performing aerial triangulation ("space-three") densification to obtain the exterior orientation elements of the plurality of sampling images, wherein the exterior orientation elements of a sampling image describe the image pose;
generating a white model (untextured mesh) of the target object according to the exterior orientation elements of the sampling images;
obtaining, according to an image matching algorithm, the homonymous points (tie points) of the plurality of sampling images;
generating the target object corresponding to the plurality of sampling images according to the homonymous points of the plurality of sampling images;
and calculating texture information of the target object, and mapping the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
Further, the acquiring of pictures of the key area in the plurality of sampling images and the establishing of the three-dimensional model of the key area in the second format according to the key area pictures include:
acquiring pictures of the key area in the plurality of sampling images, and generating the contour line of the key area in each sampling image;
generating a white model of the key area according to the contour lines;
and mapping the texture information of the key area onto the white model of the key area to generate the three-dimensional model of the key area in the second format.
In a second aspect, an embodiment of the present invention further provides a three-dimensional modeling fusion apparatus, including: a first acquisition module, configured to acquire a plurality of sampling images and establish a live-action three-dimensional model of a target object in a first format according to the plurality of sampling images;
a second acquisition module, configured to acquire pictures of a key area from the plurality of sampling images and establish a three-dimensional model of the key area in a second format according to the key area pictures, wherein the second format and the first format are different formats;
and a processing module, configured to fuse the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain an optimized model of the target object.
Further, the processing module is specifically configured to delete the key area in the live-action three-dimensional model to obtain a remaining live-action three-dimensional model, splice the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fuse the edges of the three-dimensional model of the key area and the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key area and the remaining live-action three-dimensional model are triangulation network models;
the processing module is specifically configured to splice the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fuse the edges of the three-dimensional model of the key area and the remaining live-action three-dimensional model to obtain an optimized model of the target object.
Further, the first obtaining module is specifically configured to acquire a plurality of sampling images, add the plurality of sampling images into a coordinate system of control points, and obtain the exterior orientation elements of the sampling images through aerial triangulation densification, where the exterior orientation elements of a sampling image describe the image pose; generate a white model of the target object according to the exterior orientation elements; obtain the homonymous points of the plurality of sampling images according to an image matching algorithm; generate the target object corresponding to the plurality of sampling images according to the homonymous points; and calculate texture information of the target object and map the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
Further, the second obtaining module is specifically configured to acquire pictures of the key area in the plurality of sampling images and generate the contour line of the key area in each sampling image;
generate a white model of the key area according to the contour lines; and map the texture information of the key area onto the white model of the key area to generate the three-dimensional model of the key area in the second format.
The invention has the beneficial effects that:
the three-dimensional modeling fusion method provided by the invention comprises the following steps: acquiring a plurality of sampling images, and establishing a real-scene three-dimensional model of a target object in a first format according to the plurality of sampling images. Acquiring multiple key area pictures of the sampling image, and establishing a three-dimensional model of the key area in a second format according to the key area pictures, wherein the second format and the first format are different in format. And then fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object. The three-dimensional modeling fusion method provided by the invention only needs to finely model the picture of the key area, can realize the integration of models with two different formats, and has low modeling cost and high rate.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present invention and should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a first schematic flow chart of a three-dimensional modeling fusion method provided by the present application;
Fig. 2 is a second schematic flow chart of the three-dimensional modeling fusion method provided by the present application;
Fig. 3 is a third schematic flow chart of the three-dimensional modeling fusion method provided by the present application;
Fig. 4 is a fourth schematic flow chart of the three-dimensional modeling fusion method provided by the present application;
Fig. 5 is a first schematic diagram of a three-dimensional modeling fusion apparatus provided by the present application;
Fig. 6 is a second schematic diagram of a three-dimensional modeling fusion apparatus provided by the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention.
Fig. 1 is a first schematic flow chart of the three-dimensional modeling fusion method provided by the present application. As shown in Fig. 1, the present invention provides a three-dimensional modeling fusion method, including:
S110: acquiring a plurality of sampling images, and establishing a live-action three-dimensional model of a target object in a first format according to the plurality of sampling images.
The plurality of sampling images acquired in this embodiment can be obtained by oblique photography, which mounts multiple cameras on the same aircraft and acquires images from different angles, such as vertical and oblique, to obtain different collected images. For example, when the aircraft flies horizontally, one camera is parallel to the ground while the other cameras form certain angles with the ground, so that different collected images are obtained.
When the aircraft collects images of the terrain and landform, the collected images are stored in a memory of the aircraft; after image collection is completed, the plurality of sampling images are exported from the memory to an application terminal for processing. The application terminal may be a desktop computer, a notebook computer, a tablet computer or a mobile phone, and the specific terminal form is not limited herein.
Further, after the exported collected images are processed by corresponding software, a live-action three-dimensional model of the terrain and landform photographed by the aircraft is obtained, and the live-action three-dimensional model is exported into the first format. The corresponding software may be PhotoScan, PhotoMesh, ContextCapture, etc., and the first format may be the xml, kml or osgb format, etc. The specific application software and export format are not limited in this embodiment, as long as the plurality of collected images can be processed into a live-action three-dimensional model.
S120: acquiring pictures of the key area from the plurality of sampling images, and establishing a three-dimensional model of the key area in a second format according to the key area pictures.
Because the live-action three-dimensional model obtained through S110 is not monomerized (individual objects cannot be selected or edited separately), it is unfavorable for later application expansion and attribute attachment, so the information contained in the live-action three-dimensional model needs to be finely processed. The information includes: buildings, terrain, greenery, street lights and other objects. If all the information contained in the live-action three-dimensional model were processed, problems such as a long construction period, high cost and wasted resources would result. For some projects only the key area needs fine three-dimensional modeling, which shortens the time and reduces the cost.
Therefore, in this embodiment only the key area is finely modeled. Before the fine modeling, pictures of the key area are acquired by taking photographs manually in the field, and the pictures of the key area are imported into corresponding software for processing, so that a fine three-dimensional model of the key area is obtained and exported in the second format.
It should be noted that, in the three-dimensional modeling method provided in this embodiment, the first format is different from the second format. The software for performing the fine three-dimensional modeling of the key area may be DP-Modeler, 3ds Max, etc., and the second format may be obj, dwg, iges, etc. The specific application software for fine modeling and the specific second format are not limited in this embodiment, as long as a fine three-dimensional model of the key area can be established.
S130: fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object.
It should be noted that, in this embodiment, the software used to fuse the live-action three-dimensional model and the three-dimensional model of the key area into an optimized model is compatible with both the first format and the second format; for example, it may be SuperMap or Skyline software. The specific compatible formats are not limited in this embodiment.
The three-dimensional modeling fusion method provided by this embodiment establishes a live-action three-dimensional model of the target object, performs fine three-dimensional modeling on the pictures of the key area, and fuses the obtained live-action three-dimensional model with the three-dimensional model of the key area. The method therefore combines high efficiency, high realism, high precision and low cost with a fine, attractive result that can be edited object by object, which eases later application expansion and attribute attachment.
Referring to fig. 2, fig. 2 is a schematic flow chart of a three-dimensional modeling fusion method provided in the present application.
Fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain the optimized model of the target object comprises the following steps:
S210: deleting the key area in the live-action three-dimensional model to obtain the remaining live-action three-dimensional model.
The obtained live-action three-dimensional model is divided into modules, and the area for which a fine three-dimensional model needs to be established is deleted from the live-action three-dimensional model to obtain the remaining live-action three-dimensional model.
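As an illustration of this step (not part of the patent; the mesh loader and all names here are hypothetical), the following minimal sketch removes from a triangulation model every face whose horizontal centroid falls inside the key-area footprint, using numpy and shapely:

```python
import numpy as np
from shapely.geometry import Point, Polygon

def delete_key_area(vertices, faces, footprint):
    """Drop every face whose horizontal centroid lies inside the key-area
    footprint, returning the faces of the remaining live-action model.

    vertices: (N, 3) float array; faces: (M, 3) int array of vertex indices;
    footprint: shapely Polygon outlining the key area in ground coordinates.
    """
    centroids = vertices[faces].mean(axis=1)          # (M, 3) face centroids
    keep = [i for i, c in enumerate(centroids)
            if not footprint.contains(Point(c[0], c[1]))]
    return faces[np.array(keep, dtype=int)]

# Hypothetical usage: cut a 100 m x 100 m key area out of a larger mesh.
# vertices, faces = load_mesh("scene.osgb")  # loader not specified by the patent
footprint = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])
```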
S220: splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fusing the edges of the three-dimensional model of the key area and the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key area and the remaining live-action three-dimensional model are triangulation network models.
A triangulation network is a form of horizontal control network composed of connected triangles and is used to represent the undulation of the ground. Discrete point data of the ground are collected and a triangulation network model is generated to simulate the terrain of the area photographed by the aircraft, so that the user can conveniently analyze features such as landform and topography from the model.
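For reference, a triangulation network of this kind can be generated from discrete ground points with a standard Delaunay triangulation; the sketch below is an illustration only, with synthetic points standing in for real survey data:

```python
import numpy as np
from scipy.spatial import Delaunay

# Synthetic discrete ground points: x, y in ground coordinates, z as elevation.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(500, 2))
z = 5 * np.sin(xy[:, 0] / 20) + 3 * np.cos(xy[:, 1] / 30)

tin = Delaunay(xy)           # triangulate the points in the horizontal plane
triangles = tin.simplices    # (M, 3) vertex indices, one row per triangle
# Each triangle, together with the z values at its three vertices,
# models one facet of the undulating terrain surface.
```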
Splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model and fusing the edges of the two models to obtain the optimized model of the target object includes: splicing the three-dimensional model of the key area to the remaining live-action three-dimensional model.
The specific mode is to fuse the edge triangular points of the three-dimensional model of the key area with the edge triangular points of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
It should be noted that when the area for which a fine model is to be built is deleted from the live-action three-dimensional model, the edges of the remaining live-action three-dimensional model may be uneven; for example, a road in the image may become uneven, and some buildings may be cut in half. The edges therefore need to be connected and the buildings on the edges flattened, so that the live-action three-dimensional model and the three-dimensional model of the key area are fused and the optimized model of the target object is obtained.
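A minimal sketch of such an edge fusion, assuming both models are triangle meshes given as vertex and face arrays (the function names and the tolerance are illustrative, not the patent's implementation): it finds the border vertices of each mesh and welds nearby border vertices of the key-area model onto the remaining model.

```python
import numpy as np
from collections import Counter
from scipy.spatial import cKDTree

def boundary_vertices(faces):
    """Indices of vertices on the mesh border (edges used by only one face)."""
    edge_count = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(e))] += 1
    border = {v for e, n in edge_count.items() if n == 1 for v in e}
    return np.array(sorted(border))

def fuse_edges(key_verts, key_faces, rest_verts, rest_faces, tol=2.0):
    """Snap border vertices of the key-area model onto the nearest border
    vertices of the remaining live-action model (within tol, in metres)."""
    kb = boundary_vertices(key_faces)
    rb = boundary_vertices(rest_faces)
    dist, idx = cKDTree(rest_verts[rb]).query(key_verts[kb])
    close = dist < tol
    key_verts[kb[close]] = rest_verts[rb[idx[close]]]  # weld coincident edges
    return key_verts
```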
As shown in Fig. 3, Fig. 3 is a third schematic flow chart of the three-dimensional modeling fusion method provided by the present application.
Optionally, this embodiment provides a three-dimensional modeling fusion method in which the process of acquiring a plurality of sampling images and establishing a live-action three-dimensional model of the target object in the first format according to the plurality of sampling images may be performed in the ContextCapture modeling system, where the first format is, for example, the osgb (OpenSceneGraph binary) format. The specific process is as follows:
S310: acquiring a plurality of sampling images, adding the plurality of sampling images into a coordinate system of control points, and performing aerial triangulation ("space-three") densification to obtain the exterior orientation elements of the plurality of sampling images, wherein the exterior orientation elements of a sampling image describe the image pose.
The aerial triangulation densification process comprises: loading the plurality of sampling images and the control points into the ContextCapture modeling system, and calling sub-software (such as HANGF software) to perform bundle block adjustment, in which a bundle of rays formed by one image is taken as the adjustment unit and the collinearity equation of central projection as its basic equation. Through rotation and translation of each light bundle in space, the common rays between models achieve optimal intersection, and the whole block is optimally fitted into the control point coordinate system. The spatial position relationship between ground objects is thus restored, and the exterior orientation elements of the plurality of sampling images are obtained.
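For completeness, the collinearity equations of central projection referred to above take the standard photogrammetric form, where $(x, y)$ are the image coordinates of a ground point $(X, Y, Z)$, $(x_0, y_0)$ is the principal point, $f$ the focal length, $(X_S, Y_S, Z_S)$ the projection centre, and $a_i, b_i, c_i$ the elements of the image rotation matrix:

```latex
x - x_0 = -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}
                   {a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)},
\qquad
y - y_0 = -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}
                   {a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}
```

Bundle block adjustment linearizes these equations for every ray and solves for the exterior orientation elements of all images simultaneously, which is what fits the whole block into the control point coordinate system.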
S320: generating a white model of the target object according to the exterior orientation elements of the sampling images.
S330: obtaining the homonymous points of the plurality of sampling images according to an image matching algorithm.
In the ContextCapture modeling system, the homonymous points of the plurality of sampling images are matched automatically according to a high-precision image matching algorithm. Homonymous points are the same parts appearing in multiple sampling images. More feature points are extracted from the images to form a dense point cloud, so that the details of the ground features are expressed more accurately: the more complex the ground features and the denser the buildings, the higher the density of points; conversely, the point cloud is relatively sparse.
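The patent does not disclose the matcher itself (ContextCapture's algorithm is proprietary), but the idea of extracting homonymous points can be illustrated with an ordinary SIFT matcher and Lowe's ratio test, as in this sketch:

```python
import cv2

def match_tie_points(path_a, path_b, ratio=0.75):
    """Find homonymous (tie) points between two sampling images with SIFT
    and Lowe's ratio test; returns matched pixel coordinate pairs."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]  # keep unambiguous matches only
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]
```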
S340: generating the target object corresponding to the plurality of sampling images according to the homonymous points of the plurality of sampling images.
After the homonymous points of the plurality of sampling images have been matched, the plurality of sampling images can be integrated into a complete target object model.
S350: calculating texture information of the target object, and mapping the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
Further, the obtained live-action three-dimensional model is exported into the osgb format in the ContextCapture modeling system.
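Texture mapping in S350 reuses the recovered orientations: each white-model vertex is projected into a source image with the collinearity model, and the image is sampled at the resulting coordinates. A toy sketch under simplified assumptions (level camera, no visibility handling or resampling; all names are illustrative):

```python
import numpy as np

def project_to_image(X, R, S, f, x0=0.0, y0=0.0):
    """Project ground point X (3-vector) into image coordinates using the
    exterior orientation (rotation matrix R, projection centre S), focal
    length f and principal point (x0, y0): the collinearity model again."""
    u = R @ (np.asarray(X, dtype=float) - S)   # point in the camera frame
    return np.array([x0 - f * u[0] / u[2],
                     y0 - f * u[1] / u[2]])    # where to sample the texture

# Hypothetical usage: texture coordinate of one white-model vertex,
# seen by a level camera at 500 m flying height.
R = np.eye(3)
S = np.array([0.0, 0.0, 500.0])
print(project_to_image([10.0, 20.0, 0.0], R, S, f=100.0))   # [2. 4.]
```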
As shown in Fig. 4, Fig. 4 is a fourth schematic flow chart of the three-dimensional modeling fusion method provided by the present application.
Optionally, this embodiment provides a three-dimensional modeling fusion method in which the process of acquiring pictures of the key area in the plurality of sampling images and establishing the three-dimensional model of the key area in the second format according to the key area pictures may be performed in DP-Modeler software, where the second format takes the exported obj format as an example. The specific process is as follows:
S410: acquiring pictures of the key area in the plurality of sampling images, and generating the contour line of the key area in each sampling image.
It should be noted that the pictures of the key area in the sampling images obtained in this embodiment are taken manually in the field. The number of pictures acquired in the field is determined subjectively: the part regarded as a key area requiring fine modeling is photographed manually, and the photographs are imported into DP-Modeler software to generate the contour lines of the key area.
S420: generating a white model of the key area according to the contour lines.
The DP-Modeler software is operated to automatically generate the white model corresponding to the key area.
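The white-model generation can be pictured as extruding each closed contour into an untextured prism. The sketch below only illustrates that idea (DP-Modeler performs this step internally; the function and its parameters are hypothetical):

```python
import numpy as np

def extrude_contour(contour_xy, base_z, height):
    """Extrude a closed building contour upwards into an untextured prism:
    the walls of a simple white model (roof and floor caps are omitted)."""
    n = len(contour_xy)
    bottom = np.column_stack([contour_xy, np.full(n, base_z)])
    top = np.column_stack([contour_xy, np.full(n, base_z + height)])
    verts = np.vstack([bottom, top])
    faces = []
    for i in range(n):               # two triangles per wall segment
        j = (i + 1) % n
        faces.append((i, j, n + j))
        faces.append((i, n + j, n + i))
    return verts, np.array(faces)

# Hypothetical usage: a 10 m x 6 m footprint extruded to 15 m.
verts, faces = extrude_contour(np.array([[0, 0], [10, 0], [10, 6], [0, 6]]),
                               base_z=0.0, height=15.0)
```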
S430: mapping the texture information of the key area onto the white model of the key area to generate the three-dimensional model of the key area in the second format.
The three-dimensional model of the key area is obtained through the above process, and the obtained model is exported into the obj format in the DP-Modeler software.
Further, the system used in this embodiment to integrate the live-action three-dimensional model and the three-dimensional model of the key area is the SuperMap system, which is compatible with both the osgb and obj formats, so the two different formats do not need to be converted into a uniform format, which improves processing efficiency.
Fig. 5 is a schematic diagram of a three-dimensional modeling fusion apparatus provided in the present application. As shown in fig. 5, the apparatus specifically includes: a first obtaining module 501, a second obtaining module 502 and a processing module 503. Wherein,
the first obtaining module 501 is configured to obtain a plurality of sampling images, and establish a live-action three-dimensional model of a target object in a first format according to the plurality of sampling images.
The second obtaining module 502 is configured to acquire pictures of the key area in the plurality of sampling images, and establish a three-dimensional model of the key area in a second format according to the key area pictures, where the second format is different from the first format.
The processing module 503 is configured to fuse the live-action three-dimensional model of the target object and the three-dimensional model of the key area to obtain an optimized model of the target object.
Optionally, the processing module 503 is specifically configured to delete the key area in the live-action three-dimensional model to obtain the remaining live-action three-dimensional model, splice the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fuse the edges of the three-dimensional model of the key area and the remaining live-action three-dimensional model to obtain an optimized model of the target object.
Optionally, the three-dimensional model of the key area and the remaining live-action three-dimensional model are triangulation network models. The processing module 503 is specifically configured to splice the three-dimensional model of the key area to the remaining live-action three-dimensional model, and fuse the edge triangular points of the two models to obtain an optimized model of the target object.
optionally, the first obtaining module 501 is specifically configured to obtain multiple sampling images, add the multiple sampling images into a coordinate system of a control point, and obtain external orientation elements of the sampling images through space-three encryption operation, where the external orientation elements of the sampling images are target object poses. And generating a white mode of the target object according to the external orientation elements of the sampled image. And then according to an image matching algorithm, obtaining the homonymous points of the multiple sampled images. And generating target objects corresponding to the plurality of sampling images according to the homonymous points of the plurality of sampling images. Calculating texture information of the target object, and mapping the texture information to a white mold of the target object to obtain a real three-dimensional model of the target object in the first format.
Optionally, the second obtaining module 502 is specifically configured to acquire pictures of the key area in the plurality of sampling images and generate the contour line of the key area in each sampling image; generate a white model of the key area according to the contour lines; and map the texture information of the key area onto the white model of the key area to generate the three-dimensional model of the key area in the second format.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU), or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 6 is a schematic diagram of a three-dimensional modeling fusion apparatus according to an embodiment of the present application. The apparatus may be integrated in a terminal device or a chip of the terminal device, and the terminal may be a computing device with an image processing function.
The device includes: memory 601, processor 602.
The memory 601 is used for storing programs, and the processor 602 calls the programs stored in the memory 601 to execute the above method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
Optionally, the invention also provides a program product, for example a computer-readable storage medium, comprising a program which, when executed by a processor, is adapted to carry out the above method embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.

Claims (10)

9. The three-dimensional modeling fusion device according to any one of claims 6 to 8, wherein the first obtaining module is specifically configured to acquire a plurality of sampling images, add the plurality of sampling images into a coordinate system of control points, and obtain the exterior orientation elements of the sampling images through aerial triangulation densification, where the exterior orientation elements of a sampling image describe the image pose; generate a white model of the target object according to the exterior orientation elements; obtain the homonymous points of the plurality of sampling images according to an image matching algorithm; generate the target object corresponding to the plurality of sampling images according to the homonymous points; and calculate texture information of the target object and map the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
CN201910025548.4A | filed 2019-01-11 | Three-dimensional modeling fusion method and device | Active | granted as CN109754463B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910025548.4A | 2019-01-11 | 2019-01-11 | Three-dimensional modeling fusion method and device (CN109754463B, en)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910025548.4A | 2019-01-11 | 2019-01-11 | Three-dimensional modeling fusion method and device (CN109754463B, en)

Publications (2)

Publication Number | Publication Date
CN109754463A (en) | 2019-05-14
CN109754463B (en) | 2023-05-23

Family

ID=66405459

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910025548.4A | Three-dimensional modeling fusion method and device | 2019-01-11 | 2019-01-11 (Active; granted as CN109754463B, en)

Country Status (1)

Country | Link
CN (1) | CN109754463B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111681322A (en)* | 2020-06-12 | 2020-09-18 | Chinese Academy of Surveying and Mapping | A fusion method of oblique photography models
CN111915739A (en)* | 2020-08-13 | 2020-11-10 | Guangdong Shenyi Industrial Investment Co., Ltd. | Real-time three-dimensional panoramic information interactive information system
WO2021184933A1 (en)* | 2020-03-20 | 2021-09-23 | Huawei Technologies Co., Ltd. | Three-dimensional human body model reconstruction method
CN114170273A (en)* | 2021-12-08 | 2022-03-11 | China Southern Power Grid Power Technology Co., Ltd. | A target tracking method based on binocular camera and related device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9767566B1 (en)* | 2014-09-03 | 2017-09-19 | Sprint Communications Company L.P. | Mobile three-dimensional model creation platform and methods
CN106611441A (en)* | 2015-10-27 | 2017-05-03 | Tencent Technology (Shenzhen) Co., Ltd. | Processing method and device for three-dimensional map
CN108665536A (en)* | 2018-05-14 | 2018-10-16 | Guangzhou Urban Planning Survey and Design Institute | Three-dimensional and live-action data visualization method and device, and computer-readable storage medium
CN108919944A (en)* | 2018-06-06 | 2018-11-30 | Chengdu Zhongsheng Technology Co., Ltd. | A virtual roaming method based on a digital city model for lossless data interaction at the display end
CN109118581A (en)* | 2018-08-22 | 2019-01-01 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image processing method and device, electronic equipment, computer-readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Jianzhu: "Development and Implementation of Web 3D GIS Based on Skyline", China Master's Theses Full-text Database, Basic Sciences*
Lian Rong et al.: "Mountain cities combining oblique photography and close-range photography", Bulletin of Surveying and Mapping*

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2021184933A1 (en)* | 2020-03-20 | 2021-09-23 | Huawei Technologies Co., Ltd. | Three-dimensional human body model reconstruction method
CN113496507A (en)* | 2020-03-20 | 2021-10-12 | Huawei Technologies Co., Ltd. | Human body three-dimensional model reconstruction method
CN111681322A (en)* | 2020-06-12 | 2020-09-18 | Chinese Academy of Surveying and Mapping | A fusion method of oblique photography models
CN111681322B (en)* | 2020-06-12 | 2021-02-02 | Chinese Academy of Surveying and Mapping | Fusion method of oblique photography model
CN111915739A (en)* | 2020-08-13 | 2020-11-10 | Guangdong Shenyi Industrial Investment Co., Ltd. | Real-time three-dimensional panoramic information interactive information system
CN114170273A (en)* | 2021-12-08 | 2022-03-11 | China Southern Power Grid Power Technology Co., Ltd. | A target tracking method based on binocular camera and related device

Also Published As

Publication number | Publication date
CN109754463B (en) | 2023-05-23

Similar Documents

Publication | Title
CN112927370B | Three-dimensional building model construction method and device, electronic equipment and storage medium
CN107862744B | Three-dimensional modeling method for aerial image and related product
CN109754463B | Three-dimensional modeling fusion method and device
CN114998536A | Model generation method and device based on novel basic mapping and storage medium
CN109685893B | Space integrated modeling method and device
WO2023280038A1 | Method for constructing three-dimensional real-scene model, and related apparatus
Mousavi et al. | The performance evaluation of multi-image 3D reconstruction software with different sensors
JP2016537901A | Light field processing method
CN113409473B | Method, device, electronic equipment and storage medium for realizing virtual-real fusion
CN114782642B | Method and device for placing virtual models
CN113177975A | Depth calculation method and three-dimensional modeling method based on dome camera and laser radar
CN114494582A | A dynamic update method of 3D model based on visual perception
CN119338991A | Digital protection methods, systems, equipment and media for 3D reconstruction and restoration of cultural relics
CN116503538A | Monomer modeling method, system, terminal and storage medium based on oblique photography
CN116438581A | Three-dimensional point group high-density device, three-dimensional point group high-density method and program
CN116051980B | Building identification method, system, electronic equipment and medium based on oblique photography
WO2025077567A1 | Three-dimensional model output method, apparatus and device, and computer-readable storage medium
JP2022518402A | 3D reconstruction method and equipment
CN117557466B | Optical remote sensing image target image enhancement method and device based on imaging conditions
CN118365818B | Lightweight construction method of 3D model based on oblique photography
CN117422848A | Method and device for segmenting three-dimensional model
Skuratovskyi et al. | Outdoor mapping framework: from images to 3D model
CN116958407A | Method and device for acquiring virtual building model, storage medium and electronic equipment
CN113160419B | A method and device for establishing a building facade model
CN116912817A | Three-dimensional scene model splitting method and device, electronic equipment and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
