CN119228643B - Automatic mosaic method and device for unmanned aerial vehicle remote sensing images - Google Patents

Automatic mosaic method and device for unmanned aerial vehicle remote sensing images

Info

Publication number
CN119228643B
CN119228643B (application CN202411759537.6A)
Authority
CN
China
Prior art keywords
unmanned aerial vehicle
remote sensing
graphic representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411759537.6A
Other languages
Chinese (zh)
Other versions
CN119228643A (en)
Inventor
慕号伟
朱旭东
任杰华
陈志伦
傅曙明
吴小勇
徐志杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinhua Zhennong Information Technology Co ltd
Original Assignee
Jinhua Zhennong Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinhua Zhennong Information Technology Co ltd
Priority to CN202411759537.6A
Publication of CN119228643A
Application granted
Publication of CN119228643B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese


The present invention discloses a method and device for automatically stitching unmanned aerial vehicle remote sensing images. The method comprises obtaining unmanned aerial vehicle remote sensing images, wherein an original graphic representation is set at a preset position in a measurement area, and the unmanned aerial vehicle acquires images of the measurement area to obtain remote sensing images containing graphic representations corresponding to the original graphic representation; extracting the graphic representation from any unmanned aerial vehicle remote sensing image, and determining a coordinate conversion matrix based on the graphic representation; converting the pixel coordinates of that remote sensing image into geographic coordinates based on the coordinate conversion matrix; and matching that remote sensing image with a base map based on the geographic coordinates. No manual control-point picking is required: by directly identifying the graphic representation in the image and resolving the camera's pose data at shooting time, multiple unmanned aerial vehicle images are stitched automatically, overcoming the poor accuracy and efficiency of existing automatic stitching.

Description

Automatic mosaic method and device for unmanned aerial vehicle remote sensing images
Technical Field
The invention relates to the technical field of image processing, and in particular to an automatic mosaic method and device for unmanned aerial vehicle remote sensing images.
Background
At present, unmanned aerial vehicle remote sensing images are widely applied in land resource investigation, security inspection, disaster assessment, and the like. The growing range of application scenes places higher requirements on image precision; in particular, in Geographic Information Systems (GIS) and photogrammetry, unmanned aerial vehicle images must be accurately geo-registered. However, conventional geographic registration and image stitching rely on manually labeled control points, which is inefficient and error-prone in large-scale image data processing. As unmanned aerial vehicle applications expand, the deficiencies of manual labeling become increasingly pronounced, and more automated solutions are needed. Conventional image stitching generally relies on Ground Control Points (GCPs) for geometric correction and registration; searching for the control-point positions across many unmanned aerial vehicle images is time- and labor-consuming, and the problem is particularly pronounced when the surveyed area is large.
Disclosure of Invention
The invention mainly aims to provide an automatic mosaic method and device for unmanned aerial vehicle remote sensing images, so as to overcome the above defects in the related art.
In order to achieve the above object, according to a first aspect of the present invention, there is provided an automatic mosaic method for unmanned aerial vehicle remote sensing images, comprising: obtaining unmanned aerial vehicle remote sensing images, wherein an original graphic representation is set at a preset position in a measurement area and the unmanned aerial vehicle performs image acquisition on the measurement area to obtain remote sensing images including graphic representations corresponding to the original graphic representation; extracting the graphic representation from any one of the unmanned aerial vehicle remote sensing images and determining a coordinate conversion matrix based on the graphic representation; converting pixel coordinates of that unmanned aerial vehicle remote sensing image into geographic coordinates based on the coordinate conversion matrix; and matching that unmanned aerial vehicle remote sensing image with a base map based on the geographic coordinates.
Optionally, before converting the pixel coordinates of any one of the unmanned aerial vehicle remote sensing images into geographic coordinates, the method further comprises determining a camera pose based on the corner coordinates of the graphic representation, and performing direction correction on any one of the unmanned aerial vehicle remote sensing images based on the camera pose.
Optionally, the graphic representation is a two-dimensional code, and the determining the coordinate transformation matrix based on the graphic representation comprises identifying the content of the graphic representation to obtain a corresponding mark number, determining the corresponding target geographic coordinate based on the mark number, and determining the coordinate transformation matrix based on the corrected pixel coordinate of the graphic representation and the target geographic coordinate.
Optionally, before determining the coordinate transformation matrix based on the graphic representation, the method further comprises screening the extracted graphics against the original graphic representation, wherein the evaluation is performed by a preset quality evaluation rule:

R(x, y) = \frac{\sum_{x', y'} T(x', y') \, I(x + x', y + y')}{\sqrt{\sum_{x', y'} T(x', y')^2 \cdot \sum_{x', y'} I(x + x', y + y')^2}},

where x, y denote the pixel row and column numbers in any unmanned aerial vehicle remote sensing image, x', y' denote the pixel row and column numbers in the graphic representation, I denotes the gray values of the unmanned aerial vehicle remote sensing image, T denotes the gray values at the template position, and R(x, y) is the normalized cross-correlation at position x, y; values near 0 indicate a poor-quality detection result and values near 1 a good-quality one.
Optionally, the matching of the remote sensing images of any unmanned aerial vehicle with the base map based on the geographic coordinates comprises matching an initial base map from the base map based on the target geographic coordinates, and matching a target base map matched with the remote sensing images of any unmanned aerial vehicle from the initial base map based on geographic coordinates corresponding to feature points in the remote sensing images of any unmanned aerial vehicle.
Optionally, extracting the graphic representation from any one of the unmanned aerial vehicle remote sensing images comprises performing threshold detection on the image to generate two-dimensional code candidate frames, determining the graphic representation frames from the candidate frames, and de-duplicating the graphic representation frames to obtain the graphic representation.
According to a second aspect of the invention, an automatic mosaic device for unmanned aerial vehicle remote sensing images is provided, comprising an image acquisition unit, an image preprocessing unit, and a mosaic unit. The image acquisition unit is used for acquiring unmanned aerial vehicle remote sensing images, which are obtained by carrying out image acquisition on a designated area, a graphic representation being arranged at a preset position in the designated area. The image preprocessing unit is used for extracting the graphic representation from any unmanned aerial vehicle remote sensing image and determining a coordinate conversion matrix based on the graphic representation. The mosaic unit is used for converting pixel coordinates of that unmanned aerial vehicle remote sensing image into geographic coordinates based on the coordinate conversion matrix, and matching that remote sensing image with a base map based on the geographic coordinates.
According to a third aspect of the present invention there is provided a computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect.
According to a fourth aspect of the present invention there is provided an electronic device comprising at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores a computer program executable by the at least one processor to cause the at least one processor to perform the method of any of the implementations of the first aspect.
According to a fifth aspect of the present invention, there is provided a computer program which, when executed by a processor, implements the method of the first aspect.
The automatic mosaic method and device for unmanned aerial vehicle remote sensing images comprise: obtaining the unmanned aerial vehicle remote sensing image, wherein original graphic representations are set at preset positions in a measurement area and the unmanned aerial vehicle performs image acquisition on the measurement area to obtain remote sensing images containing graphic representations corresponding to the original graphic representations; extracting the graphic representations from any one of the unmanned aerial vehicle remote sensing images; determining a coordinate transformation matrix based on the graphic representations; transforming pixel coordinates of that remote sensing image into geographic coordinates based on the coordinate transformation matrix; and matching that remote sensing image with a base map based on the geographic coordinates. The method requires no manual control-point picking: by identifying the graphic representations in the images, it directly resolves the camera's pose data at shooting time, realizing automatic stitching of multiple unmanned aerial vehicle images and overcoming the poor accuracy and efficiency of existing automatic stitching.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an automatic mosaic method for unmanned aerial vehicle remote sensing images according to an embodiment of the present invention;
fig. 2 is a schematic diagram of capturing remote sensing images of an unmanned aerial vehicle according to an embodiment of the present invention;
FIG. 3 is a graphical representation extraction schematic of an embodiment of the present invention;
fig. 4 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description of the present invention and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the invention herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
According to an embodiment of the present invention, an automatic mosaic method for unmanned aerial vehicle remote sensing images is provided, as shown in fig. 1, including the following steps 101 to 103:
Step 101, acquiring an unmanned aerial vehicle remote sensing image, wherein the unmanned aerial vehicle remote sensing image is obtained by the unmanned aerial vehicle acquiring images of a specified area, and a graphic representation is set at a preset position in the specified area.
In this step, the graphical representation is arranged within the measurement area. By designing the distance and position between the graphic representations, clear resolution in the image at the unmanned aerial vehicle flight level is ensured. The flight path and altitude of the drone are planned to ensure that the images taken by the drone cover all of the graphical representations and the resolution is sufficient to identify the graphical representations. And performing radiation correction, color balance and other treatments on the image after the unmanned aerial vehicle remote sensing image is obtained.
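The radiometric and color processing mentioned above is not specified further in the text. As a hedged illustration only, color balance could be sketched with a simple gray-world correction (the function name and the gray-world method are assumptions, not the patent's prescribed algorithm):

```python
import numpy as np

def gray_world_balance(img):
    """Scale each color channel so its mean matches the overall mean
    (the gray-world assumption) -- a crude stand-in for color balancing."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    gain = channel_means.mean() / channel_means
    return np.clip(img * gain, 0, 255).astype(np.uint8)
```

A real pipeline would also apply radiometric (sensor) correction, which requires calibration data not described here.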
And 102, extracting a graphic representation from any unmanned aerial vehicle remote sensing image, and determining a coordinate transformation matrix based on the graphic representation.
In this step, a graphical representation is extracted from the unmanned aerial vehicle remote sensing image, and a coordinate transformation matrix is determined based on content associated with the graphical representation, the coordinate transformation matrix being used to transform pixel coordinates into geographic coordinates.
And 103, converting pixel coordinates of the unmanned aerial vehicle remote sensing image into geographic coordinates based on the coordinate conversion matrix, and matching the unmanned aerial vehicle remote sensing image with a base map based on the geographic coordinates.
In this step, pixel coordinates in the unmanned aerial vehicle remote sensing image are converted into geographic coordinates, and the image is matched with the base map using both the geographic coordinates of the graphic representation and those of the feature points in the image. After each unmanned aerial vehicle remote sensing image has been matched with the base map, a stitched image consistent with the base map is obtained.
As an optional implementation manner of the embodiment, before converting the pixel coordinates of any one of the unmanned aerial vehicle remote sensing images into the geographic coordinates, the method further comprises determining a camera pose based on the corner coordinates of the graphic representation, and performing direction correction on any one of the unmanned aerial vehicle remote sensing images based on the camera pose.
In this alternative implementation, after four corner coordinates of the graphic representation are detected, the rotation matrix and pose can be calculated by a geometric method and camera calibration parameters. After the camera pose is obtained, the unmanned aerial vehicle remote sensing image can be subjected to direction correction.
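A minimal sketch of this pose recovery, assuming known camera intrinsics K and a marker lying in the plane z = 0 (the DLT homography plus decomposition shown here is one standard geometric method; the function names are illustrative, not the patent's prescribed implementation):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: 3x3 homography H mapping src -> dst
    (each an Nx2 array of points, N >= 4), normalized so H[2,2] = 1."""
    A = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # assumes H[2,2] != 0

def pose_from_planar_marker(K, marker_xy, image_uv):
    """Recover camera rotation R and translation t from a planar marker's
    corner coordinates (marker_xy, in the marker plane z = 0) and their
    detected pixel positions (image_uv). Assumes known intrinsics K and
    the marker in front of the camera (positive depth)."""
    H = homography_dlt(marker_xy, image_uv)
    B = np.linalg.inv(K) @ H                 # proportional to [r1 r2 t]
    lam = (np.linalg.norm(B[:, 0]) + np.linalg.norm(B[:, 1])) / 2.0
    r1, r2 = B[:, 0] / lam, B[:, 1] / lam
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)              # re-orthonormalize the rotation
    return U @ Vt, B[:, 2] / lam
```

With the pose in hand, the direction correction of the image amounts to resampling under the inverse of the recovered rotation.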
As an optional implementation manner of the embodiment, the graphic representation is a two-dimensional code, and the determining the coordinate transformation matrix based on the graphic representation comprises identifying the content of the graphic representation to obtain a corresponding mark number, determining the corresponding target geographic coordinate based on the mark number, and determining the coordinate transformation matrix based on the corrected pixel coordinate of the graphic representation and the target geographic coordinate.
In this alternative implementation, the graphic representation may be a two-dimensional code: a binary square mark consisting of a wide black border and an internal binary matrix, the matrix determining the mark number of the code (represented by the mark ID). The two-dimensional code may also be formed from colors with high contrast. The codes can be set in the measurement area in advance; referring to fig. 2, the unmanned aerial vehicle performs the acquisition, the positions of the codes set in the measurement area are measured and calibrated, and each code can be represented by the geographic coordinates of its center point. After the image is obtained, the pixel coordinates of the two-dimensional code can be determined, and the conversion matrix can be determined from the pixel coordinates and the geographic coordinates.
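As a hedged sketch of determining the conversion matrix, assuming it is modeled as a 2x3 affine map from pixel to geographic coordinates fitted over the marker correspondences (the patent does not fix the transform's parameterization; the names here are illustrative):

```python
import numpy as np

def fit_affine(pix, geo):
    """Least-squares 2x3 affine transform mapping marker pixel coordinates
    to their calibrated geographic coordinates (>= 3 correspondences)."""
    pix = np.asarray(pix, float)
    geo = np.asarray(geo, float)
    A = np.hstack([pix, np.ones((len(pix), 1))])   # N x 3 design matrix
    M, *_ = np.linalg.lstsq(A, geo, rcond=None)    # 3 x 2 solution
    return M.T                                     # 2 x 3 affine matrix

def pix_to_geo(M, pts):
    """Apply the fitted affine matrix to an array of pixel coordinates."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

With rotation already corrected by the pose step, an affine (or even similarity) model over several markers is often sufficient; a full homography would be needed for uncorrected oblique views.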
As an optional implementation manner of this embodiment, before determining the coordinate transformation matrix based on the graphic representation, the method further includes screening the extracted graphics against the original graphic representation, where the evaluation uses a preset quality evaluation rule:

R(x, y) = \frac{\sum_{x', y'} T(x', y') \, I(x + x', y + y')}{\sqrt{\sum_{x', y'} T(x', y')^2 \cdot \sum_{x', y'} I(x + x', y + y')^2}},

where x, y denote the pixel row and column numbers in any unmanned aerial vehicle remote sensing image, x', y' denote the pixel row and column numbers in the graphic representation, I denotes the gray values of the unmanned aerial vehicle remote sensing image, T denotes the gray values of the original graphic representation, and R(x, y) is the normalized cross-correlation at position x, y; values near 0 indicate a poor-quality detection result and values near 1 a good-quality one.
In this optional implementation, the two-dimensional code in the unmanned aerial vehicle image is matched against the original two-dimensional code to evaluate the recognition result. The quality of a two-dimensional code mark detection result is evaluated as:

R(x, y) = \frac{\sum_{x', y'} T(x', y') \, I(x + x', y + y')}{\sqrt{\sum_{x', y'} T(x', y')^2 \cdot \sum_{x', y'} I(x + x', y + y')^2}},

where x, y denote the pixel row and column numbers in the unmanned aerial vehicle image, x', y' denote the pixel row and column numbers in the mark, I denotes the gray values of the unmanned aerial vehicle image, T denotes the gray values of the original mark, and R(x, y) is the normalized cross-correlation at position x, y; values near 0 indicate a poor-quality detection result and values near 1 a good-quality one.
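The normalized cross-correlation above can be sketched directly (a minimal implementation; the name `ncc` and its argument convention are illustrative):

```python
import numpy as np

def ncc(image, template, x, y):
    """Normalized cross-correlation R(x, y) between template T and the image
    patch whose top-left corner is at column x, row y; 1.0 means identical
    up to a positive scale, values near 0 mean a poor match."""
    h, w = template.shape
    patch = image[y:y + h, x:x + w].astype(float)
    T = template.astype(float)
    denom = np.sqrt((patch ** 2).sum() * (T ** 2).sum())
    return float((patch * T).sum() / denom) if denom else 0.0
```

A detection would then be accepted or rejected by thresholding R(x, y) at the detected marker position.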
As an optional implementation manner of the embodiment, the matching of the remote sensing image of any unmanned aerial vehicle with the base map based on the geographic coordinates includes matching the initial base map from the base map based on the target geographic coordinates, and matching the target base map matched with the remote sensing image of any unmanned aerial vehicle from the initial base map based on the geographic coordinates corresponding to the feature points in the remote sensing image of any unmanned aerial vehicle.
In this alternative implementation, a rough match is first made from the base map based on the geographic coordinates of the graphic representation. After the rough match, matching is refined based on the geographic coordinates of the feature points in the image, finding the positions in the base map that correspond to those feature points. In this way, the image and the base map can be matched accurately.
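The coarse-then-fine matching can be sketched with hypothetical base-map tile records (the tile dictionary fields and the center-distance scoring are assumptions for illustration, not the patent's data model):

```python
def coarse_match(tiles, marker_geo):
    """Coarse stage: keep the base-map tiles whose bounding box contains
    the marker's geographic coordinate (lon, lat)."""
    lon, lat = marker_geo
    return [t for t in tiles
            if t["west"] <= lon <= t["east"] and t["south"] <= lat <= t["north"]]

def fine_match(candidates, feature_geo_pts):
    """Fine stage: among candidate tiles, pick the one whose center is
    closest on average to the feature points' geographic coordinates."""
    def mean_dist(t):
        cx = (t["west"] + t["east"]) / 2.0
        cy = (t["south"] + t["north"]) / 2.0
        return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                   for x, y in feature_geo_pts) / len(feature_geo_pts)
    return min(candidates, key=mean_dist)
```

A production system would refine with per-feature correspondence rather than tile centers, but the two-stage structure is the same.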
As an optional implementation manner of the embodiment, extracting the graphic representation from any one of the unmanned aerial vehicle remote sensing images comprises the steps of carrying out threshold detection on any one of the unmanned aerial vehicle remote sensing images to generate two-dimensional code candidate frames, determining the graphic representation frame from the candidate frames, and carrying out de-duplication on the graphic representation frame to obtain the graphic representation.
In this alternative implementation, the process of extracting the two-dimensional code before extracting the camera pose is illustrated with reference to fig. 3. By way of example, two-dimensional code candidate frames are generated by threshold detection, the perimeter and vertex distances of each contour are calculated to determine the mark frames that calibrate the two-dimensional codes, and the camera's pose data can then be determined from the codes.
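The threshold-detection step can be illustrated with a minimal pure-Python stand-in that thresholds the image and returns bounding boxes of dark connected regions as candidate frames (a simplification: a real implementation would extract contours and apply quadrilateral and perimeter tests; all names here are assumptions):

```python
import numpy as np

def candidate_boxes(gray, thresh=128, min_area=9):
    """Threshold a grayscale image and return bounding boxes
    (top, left, bottom, right) of dark connected regions -- rough
    stand-ins for the contour-based candidate frames in the text."""
    dark = gray < thresh
    seen = np.zeros_like(dark, bool)
    h, w = dark.shape
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if dark[sy, sx] and not seen[sy, sx]:
                stack, pix = [(sy, sx)], []     # flood fill one component
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    pix.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and dark[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(pix) >= min_area:        # drop tiny noise blobs
                    ys = [p[0] for p in pix]
                    xs = [p[1] for p in pix]
                    boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes
```

De-duplication of overlapping frames (also mentioned above) would then merge boxes whose overlap exceeds a threshold before decoding.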
According to this embodiment, control points do not need to be marked manually: the two-dimensional code marks in the images are identified directly, the camera's pose data at shooting time is resolved, and the geographic coordinates of the control points are assigned to the mark points, realizing automatic stitching of multiple unmanned aerial vehicle images.
This embodiment adopts automatic control-point marking. In conventional unmanned aerial vehicle image processing, the marking of image control points is usually completed manually, which is time- and labor-consuming and error-prone; with automatic marking, control points are identified automatically through computer vision, reducing manual intervention and improving efficiency and precision. Accurate geographic registration is realized: to generate high-precision geo-registered images or 3D models, an accurate correspondence between the unmanned aerial vehicle images and ground coordinates must be ensured, and the automatic marks match control points in the image to ground-measured coordinates automatically, guaranteeing registration accuracy. Image processing efficiency is improved: in conventional methods, manual marking of control points is time-consuming when processing large numbers of images, while automatic identification and matching greatly improves efficiency, with the advantage most obvious for large-area or high-resolution imagery. Control-point consistency across multi-view images is ensured: in multi-view shooting, identifying and matching the same control point in images from different view angles is a challenge, and the unique two-dimensional code mark ID allows the same control point to be automatically identified and matched across images, ensuring point consistency among multi-view images.
This embodiment overcomes the time-consuming and error-prone nature of manual marking: conventional methods generally require an operator to manually mark control points in images, a process that is slow and, for large-scale image data sets, enormous in workload. Manual marking is easily influenced by the operator's experience and attention, and marking errors reduce the matching precision of control points, which in turn affects the geographic registration accuracy of the images. Insufficient consistency of control-point identification is also overcome: in scenes shot by the unmanned aerial vehicle from multiple angles, manually marked control points may be placed inconsistently across images, making point consistency among multi-angle images hard to guarantee and affecting the accuracy of subsequent stitching and 3D modeling. Finally, the poor flexibility of mark-point layout is addressed: the positions and number of conventional control points are often limited by terrain conditions and measurement methods and are difficult to adjust flexibly according to specific requirements, which can affect the accuracy and effect of image processing.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The embodiment of the invention also provides an automatic mosaic device based on unmanned aerial vehicle remote sensing images, comprising an image acquisition unit, an image preprocessing unit, and a mosaic unit. The image acquisition unit is used for acquiring the unmanned aerial vehicle remote sensing image, which is obtained by the unmanned aerial vehicle carrying out image acquisition on a designated area, a graphic representation being arranged at a preset position in the designated area. The image preprocessing unit is used for extracting the graphic representation from any unmanned aerial vehicle remote sensing image and determining a coordinate conversion matrix based on the graphic representation. The mosaic unit is used for converting pixel coordinates of that unmanned aerial vehicle remote sensing image into geographic coordinates based on the coordinate conversion matrix, and matching that remote sensing image with the base map based on the geographic coordinates.
As an optional implementation manner of the embodiment, before converting the pixel coordinates of any one of the unmanned aerial vehicle remote sensing images into the geographic coordinates, the method further comprises determining a camera pose based on the corner coordinates of the graphic representation, and performing direction correction on any one of the unmanned aerial vehicle remote sensing images based on the camera pose.
As an optional implementation manner of the embodiment, the graphic representation is a two-dimensional code, and the determining the coordinate transformation matrix based on the graphic representation comprises the steps of identifying the content of the graphic representation to obtain a corresponding mark number, determining the corresponding target geographic coordinate based on the mark number, and determining the coordinate transformation matrix based on the corrected pixel coordinate of the graphic representation and the target geographic coordinate.
As an optional implementation manner of this embodiment, the apparatus further screens the extracted graphics against the original graphic representation, where the evaluation uses a preset quality evaluation rule:

R(x, y) = \frac{\sum_{x', y'} T(x', y') \, I(x + x', y + y')}{\sqrt{\sum_{x', y'} T(x', y')^2 \cdot \sum_{x', y'} I(x + x', y + y')^2}},

where x, y denote the pixel row and column numbers in any unmanned aerial vehicle remote sensing image, x', y' denote the pixel row and column numbers in the graphic representation, I denotes the gray values of the unmanned aerial vehicle remote sensing image, T denotes the gray values at the template position, and R(x, y) is the normalized cross-correlation at position x, y; values near 0 indicate a poor-quality detection result and values near 1 a good-quality one.
As an optional implementation manner of the embodiment, the matching of the remote sensing image of any unmanned aerial vehicle with the base map based on the geographic coordinates includes matching the initial base map from the base map based on the target geographic coordinates, and matching the target base map matched with the remote sensing image of any unmanned aerial vehicle from the initial base map based on the geographic coordinates corresponding to the feature points in the remote sensing image of any unmanned aerial vehicle.
As an optional implementation manner of the embodiment, extracting the graphic representation from any one of the unmanned aerial vehicle remote sensing images comprises the steps of carrying out threshold detection on any one of the unmanned aerial vehicle remote sensing images to generate two-dimensional code candidate frames, determining the graphic representation frame from the candidate frames, and carrying out de-duplication on the graphic representation frame to obtain the graphic representation.
According to an embodiment of the present invention, there is further provided an electronic device including at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described in any of the embodiments above.
According to an embodiment of the present invention, there is also provided a readable storage medium storing computer instructions for enabling a computer to implement the method described in any of the above embodiments when executed.
According to an embodiment of the invention, the invention also provides a computer program product which, when executed by a processor, is capable of implementing the method described in any of the above embodiments.
FIG. 4 shows a schematic block diagram of an example electronic device 300 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices.
As shown in fig. 4, the electronic device 300 includes a computing unit 301 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 302 or a computer program loaded from a storage unit 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic device 300 may also be stored. The computing unit 301, the ROM 302, and the RAM 303 are connected to each other by a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Various components in the electronic device 300 are connected to the I/O interface 305, including an input unit 306, such as a keyboard, mouse, etc., an output unit 307, such as various types of displays, speakers, etc., a storage unit 308, such as a magnetic disk, optical disk, etc., and a communication unit 309, such as a network card, modem, wireless communication transceiver, etc. The communication unit 309 allows the electronic device 300 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 301 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 301 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 301 performs the respective methods and processes described above, such as an object matching method. For example, in some embodiments, the object matching method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 300 via the ROM 302 and/or the communication unit 309. When the computer program is loaded into RAM 303 and executed by computing unit 301, one or more steps of the method described above may be performed.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special or general purpose programmable processor, operable to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present invention may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of the present invention, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Claims (10)

CN202411759537.6A — priority date 2024-12-03 — filing date 2024-12-03 — Automatic mosaic method and device for unmanned aerial vehicle remote sensing images — Active — granted as CN119228643B (en)

Priority Applications (1)

Application Number — Priority Date — Filing Date — Title
CN202411759537.6A — 2024-12-03 — 2024-12-03 — Automatic mosaic method and device for unmanned aerial vehicle remote sensing images


Publications (2)

Publication Number — Publication Date
CN119228643A (en) — 2024-12-31
CN119228643B (en) — 2025-03-21

Family

ID=93947442

Family Applications (1)

Application Number — Title — Priority Date — Filing Date
CN202411759537.6A (Active; granted as CN119228643B (en)) — Automatic mosaic method and device for unmanned aerial vehicle remote sensing images — 2024-12-03 — 2024-12-03

Country Status (1)

Country — Link
CN — CN119228643B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number — Priority date — Publication date — Assignee — Title
CN110310248A * — 2019-08-27 — 2019-10-08 — Chengdu Shuzhilian Technology Co., Ltd. — A real-time stitching method and system for unmanned aerial vehicle remote sensing images
CN116167937A * — 2023-02-15 — 2023-05-26 — The 54th Research Institute of China Electronics Technology Group Corporation — Remote sensing image correction method based on two-dimensional code recognition

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number — Priority date — Publication date — Assignee — Title
US20220366605A1 * — 2021-05-13 — 2022-11-17 — Seetree Systems Ltd. — Accurate geolocation in remote-sensing imaging
CN114936971B * — 2022-06-08 — 2024-11-12 — Zhejiang Sci-Tech University — A method and system for stitching multispectral remote sensing images of unmanned aerial vehicles over water areas
CN116228539A — 2023-03-10 — 2023-06-06 — Guizhou Normal University — A method for stitching remote sensing images of unmanned aerial vehicles
CN117522681A — 2023-10-27 — 2024-02-06 — Ji Hua Laboratory — Real-time splicing method and system for UAV thermal infrared remote sensing images


Also Published As

Publication number — Publication date
CN119228643A (en) — 2024-12-31


Legal Events

Code — Title
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant
