CN114554030B - Device detection system and device detection method - Google Patents

Info

Publication number
CN114554030B
Authority
CN
China
Prior art keywords
detection
positioning
image
pose
detected
Prior art date
Legal status
Active
Application number
CN202011309798.XA
Other languages
Chinese (zh)
Other versions
CN114554030A (en)
Inventor
黄金明
Current Assignee
Airbus Beijing Engineering Technology Center Co Ltd
Original Assignee
Airbus Beijing Engineering Technology Center Co Ltd
Priority date
Filing date
Publication date
Application filed by Airbus Beijing Engineering Technology Center Co Ltd
Priority to CN202011309798.XA
Publication of CN114554030A
Application granted
Publication of CN114554030B
Legal status: Active
Anticipated expiration

Abstract

The invention relates to a device detection system and a device detection method. The device detection system includes: a carrying platform; a positioning sensing device comprising a positioning camera and a positioning measurement device; a detection camera; and a processor configured to process the positioning images captured by the positioning camera, the measurements of the positioning measurement device, and the detection images captured by the detection camera in order to detect the object to be detected. The processor determines the pose of the carrying platform in real time from the positioning images, the measurements, and the design parameters of the object. By optimizing the depth calculation of the positioning images, constructing a dense depth map of the surrounding environment, combining that map with the design parameters of the object, and correcting the pose estimate of the carrying platform, the system minimizes pose drift, improves positioning precision during detection, improves the quality of the detection images, and thereby improves detection accuracy.

Description

Device detection system and device detection method
Technical Field
The present invention relates to a device detection system and a device detection method, and more particularly, to a device detection system and a device detection method for detecting a device using a mobile apparatus (e.g., a drone or a robot).
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Many industrial devices require various kinds of inspection and maintenance during production, manufacturing, and use. With the development of technology, equipment detection based on unmanned aerial vehicles (UAVs) or robots has been widely applied across industrial fields. In the aircraft field, for example, UAV- or robot-based detection techniques are widely used for digital manufacturing, pre-flight inspection, maintenance, and repair. Such techniques significantly reduce labor costs and improve detection efficiency, especially in facilities whose detection environments are harsh or inconvenient for manual inspection.
In UAV- or robot-based device detection, the accuracy of the navigation and positioning of the UAV or robot is crucial to the accuracy of the detection performed: the more precise the positioning, the more accurate the detection. To further improve detection accuracy, the positioning precision of the UAV or robot while performing detection must be further improved.
Disclosure of Invention
An object of the present invention is to provide a device detection system that improves the positioning precision of its carrying platform during detection and the quality of the captured detection images, and thereby the accuracy of device detection. Another object of the present invention is to provide a device detection method with the same improvements in positioning precision and detection accuracy.
One aspect of the present invention provides a device detection system including: a carrying platform adapted to move along a detection motion path of an object to be detected; a positioning sensing device arranged on the carrying platform and comprising a positioning camera configured to capture positioning images while the carrying platform moves and a positioning measurement device configured to measure the motion of the carrying platform in real time; a detection camera arranged on the carrying platform, movable relative to it, and configured to capture detection images of the object to be detected; and a processor configured to process the positioning images, the measurements of the positioning measurement device, and the detection images in order to detect the object. The processor determines the pose of the carrying platform in real time from the positioning images, the measurements, and the design parameters of the object to be detected.
The processor includes a pose module including a current pose determination unit configured to: construct a dense depth map from the positioning images and the measurements, correct the dense depth map according to the design parameters of the object to be detected, and determine the pose of the carrying platform from the corrected dense depth map. The processor also includes a detection module including a detection image acquisition unit configured to acquire the detection images.
The current pose determination unit is configured to: perform depth estimation of the positioning images based on the positioning images and the measurements; calculate the confidence of pixels across consecutive frames of positioning images; construct a spatial cost function of the positioning images; correct the depth information of the positioning images according to the spatial cost function; and construct the dense depth map based on the corrected depth information.
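The patent describes this five-step pipeline only at the level of its stages. Purely as an illustration, the following Python sketch chains the stages together with stand-in heuristics (multi-frame variance for the pixel confidence, a local-mean disagreement term for the spatial cost); the actual algorithm is not given in the text, and every function and variable name below is invented.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def build_dense_depth_map(frames, coarse_depth):
    """Chain the claimed stages with toy heuristics.

    frames: list of H x W grayscale positioning images (consecutive frames)
    coarse_depth: H x W coarse depth estimate for the reference frame (step 1)
    """
    stack = np.stack([f.astype(np.float32) for f in frames])  # (N, H, W)
    # Step 2, confidence: pixels that stay consistent across frames score high.
    confidence = 1.0 / (1.0 + stack.std(axis=0))
    # Step 3, spatial cost: disagreement between a pixel's depth and its
    # neighbourhood, weighted by how reliable the pixel itself looks.
    local_mean = uniform_filter(coarse_depth, size=5)
    cost = np.abs(coarse_depth - local_mean) * confidence
    # Step 4, correction: replace high-cost depths with the smoothed estimate.
    refined = np.where(cost > cost.mean(), local_mean, coarse_depth)
    # Step 5: the refined map serves as the dense depth map of the surroundings.
    return refined, confidence

# Usage with synthetic data (the coarse depth estimation is assumed done upstream).
frames = [np.random.rand(120, 160) * 255 for _ in range(8)]
coarse = 5.0 + 0.2 * np.random.randn(120, 160)
dense, conf = build_dense_depth_map(frames, coarse)
```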
In one embodiment, the pose module further includes a first motion control unit that controls the motion of the carrying platform in real time according to its pose, so that the carrying platform enters the detection motion path and moves along it.
The pose module further includes a target pose calculation unit configured to determine the target detection shooting pose of the detection camera according to the pose of the carrying platform and to send the calculated target detection shooting pose to the detection module.
The detection module further comprises a second motion control unit. The processor is further configured to control the motion of the carrying platform via the first motion control unit, and/or the motion of the detection camera relative to the carrying platform via the second motion control unit, so that the pose of the detection camera is adjusted to the target detection shooting pose. The detection image acquisition unit is configured to acquire the detection images captured by the detection camera at the target detection shooting pose.
The current pose determination unit is configured to perform pixel-level segmentation and recognition on the positioning images, in combination with the design parameters of the object to be detected, so as to segment a target image from each positioning image, the target image being an image of the object to be detected or of a detection part of it. The target pose calculation unit is configured to: determine the depth information of the target image from the corrected dense depth map, determine the detection position of the carrying platform from the position and depth information of the target image, and determine the current detection item and the corresponding target detection shooting pose.
The current pose determination unit is configured to: convert the target image pixel by pixel into a three-dimensional spatial coordinate system to form a three-dimensional point cloud; divide the point cloud into a plurality of sub-point-clouds according to the design parameters of the object to be detected; match the points in each sub-point-cloud against the design parameters of the corresponding points of the object and, according to the matching result, correct the points in each sub-point-cloud using those design parameters; and correct the dense depth map using the corrected sub-point-clouds.
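For the "match and correct" step, one plausible reading is a rigid best-fit alignment of each measured sub-point-cloud to its design (DMU) counterpart, followed by a blend toward the model. The sketch below implements that reading with the standard Kabsch algorithm; the one-to-one point matching, the blend factor, and all names are assumptions, not the patent's prescription.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q (N x 3)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

def correct_sub_cloud(measured, design, alpha=0.5):
    """Align a measured sub-point-cloud to its matched DMU points, then blend
    toward the model; alpha is an assumed trust factor, not from the patent."""
    R, t = kabsch(measured, design)
    aligned = measured @ R.T + t
    return (1.0 - alpha) * aligned + alpha * design

# Synthetic check: a noisy, rotated, shifted copy of the design is pulled back.
rng = np.random.default_rng(0)
design = rng.normal(size=(200, 3))
theta = np.deg2rad(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
measured = design @ Rz.T + 0.5 + rng.normal(scale=0.01, size=(200, 3))
corrected = correct_sub_cloud(measured, design)
```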
The device detection system further comprises a memory storing the design parameter information, detection item information, detection motion path information, and defect judgment criteria of the object to be detected; the processor communicates with the memory in a wired or wireless manner.
The device detection system further includes a ground controller configured to set a detection task and to start and stop detection.
The carrying platform is an unmanned aerial vehicle or a detection robot.
The object to be detected may be located indoors or outdoors, and the device detection system may detect its outer contour and/or its interior.
Another aspect of the present invention provides a device detection method including the steps of: specifying an object to be detected and setting a detection task; moving a carrying platform to a detection motion path of the object and along that path, wherein the carrying platform carries a positioning sensing device comprising a positioning camera configured to capture positioning images while the carrying platform moves and a positioning measurement device configured to measure the motion of the carrying platform in real time; capturing detection images of the object with a detection camera arranged on the carrying platform and movable relative to it; and processing the positioning images, the measurements of the positioning measurement device, and the detection images in order to detect the object. The method further comprises: while the carrying platform moves to and along the detection motion path, determining the pose of the carrying platform in real time from the positioning images, the measurements, and the design parameters of the object.
Determining the pose of the carrying platform comprises: constructing a dense depth map from the positioning images and the measurements, correcting the dense depth map according to the design parameters of the object to be detected, and determining the pose of the carrying platform from the corrected dense depth map.
Constructing the dense depth map includes: performing depth estimation of the positioning images based on the positioning images and the measurements; calculating the confidence of pixels across consecutive frames of positioning images; constructing a spatial cost function of the positioning images; correcting the depth information of the positioning images according to the spatial cost function; and constructing the dense depth map based on the corrected depth information.
The method further comprises: while the carrying platform moves to and along the detection motion path, controlling its motion in real time according to its pose so that it follows the detection motion path.
The method further comprises: determining a target detection shooting pose of the detection camera and adjusting the pose of the detection camera to that target pose.
Determining the target detection shooting pose of the detection camera includes: performing pixel-level segmentation and recognition on the positioning images, in combination with the design parameters of the object to be detected, so as to segment a target image (an image of the object or of a detection part of it) from each positioning image; determining the depth information of the target image from the corrected dense depth map; determining the detection position of the carrying platform from the position and depth information of the target image; and determining the current detection item and the corresponding target detection shooting pose.
Correcting the dense depth map includes: converting the target image pixel by pixel into a three-dimensional spatial coordinate system to form a three-dimensional point cloud; dividing the point cloud into a plurality of sub-point-clouds according to the design parameters of the object to be detected; matching the points in each sub-point-cloud against the design parameters of the corresponding points of the object and, according to the matching result, correcting the points in each sub-point-cloud using those design parameters; and correcting the dense depth map using the corrected sub-point-clouds.
Adjusting the pose of the detection camera to the target detection shooting pose comprises controlling the motion of the carrying platform and/or the motion of the detection camera relative to the carrying platform.
Detecting the object to be detected comprises analyzing its state based on the detection images captured by the detection camera at the target detection shooting pose.
The carrying platform is an unmanned aerial vehicle or a detection robot.
The present invention provides an improved device detection system and method. Consecutive frames of positioning images are processed to construct a dense depth map of the surrounding environment, optimizing the depth calculation of the positioning-image pixels and improving the precision of their depth information. The dense depth map is then combined with the digital design parameters of the detection object, which are used to optimize the map and correct the pose estimate of the carrying platform. This enables more accurate vision-based simultaneous mapping and localization, avoids deviations caused by pixel inconsistency between frames, and minimizes the pose drift of the positioning sensing device. Positioning accuracy during detection therefore no longer depends entirely on the quality of the captured environment images: even when the images are noisy due to environmental influences (such as illumination, shadow, or occlusion), the pose estimate can still be corrected. The improved positioning accuracy provides a more reliable basis for subsequent motion control and shooting pose control, which in turn improves the quality of the detection images and the accuracy of the detection analysis.
Drawings
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings. In the drawings, like features or components are designated with like reference numerals, and the drawings are not necessarily drawn to scale, and wherein:
FIG. 1 shows a schematic block diagram of a device detection system according to an embodiment of the present invention;
FIG. 2 shows a flow chart of a device detection method of the device detection system according to the invention; and
FIG. 3 shows a flow chart of the device detection system determining the current pose of the mounting platform according to the present invention.
Detailed Description
The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, like reference numerals indicate like or similar parts and features. The drawings are schematic representations of embodiments of the invention, not necessarily drawn to scale, and serve to illustrate their concepts and principles. Certain features may be exaggerated to show relevant details or structures of the embodiments.
Fig. 1 shows a schematic block diagram of a device detection system 1 according to the present invention. As shown in Fig. 1, the device detection system 1 includes a ground control device 10, a mounting platform 20, a positioning sensing device 30, a detection camera 40, a processor 50, and a memory 60.
The ground control device 10 is the control device of the device detection system 1 and may be a hand-held controller operated by an operator. The ground control device 10 is configured to start or stop the movement of the mounting platform 20 and thereby start or stop the detection of the detection object. The detection object can be any of various industrial devices, such as aircraft, vehicles, or electricity pylons, or a specific space or facility, such as a workshop or factory. The ground control device 10 provides a user input interface that may include a detection task setting interface 11, a start detection interface 13, and a stop detection interface 15. An operator can enter the type, position, and parts of the object to be detected through the detection task setting interface 11 to set a detection task, and then start the movement of the mounting platform 20 by operating the start detection interface 13 to begin the detection. While the mounting platform 20 is moving and performing detection, the operator can terminate the detection at any time by operating the stop detection interface 15.
The mounting platform 20 is a mobile device that moves around the outer contour of the detection object, or inside it, to perform detection; it may be, for example, an unmanned aerial vehicle or an inspection robot. The mounting platform 20 carries several devices of the device detection system 1; for example, the positioning sensing device 30, the detection camera 40, the processor 50, and the memory 60 may all be arranged on it. The mounting platform 20 moves according to the commands of the ground control device 10 to perform the detection.
The positioning sensing device 30 is arranged on the mounting platform 20 and is configured to photograph the motion environment while the mounting platform 20 moves and to measure its motion, enabling vision-based simultaneous mapping and localization of the mounting platform 20 in its environment. The positioning sensing device 30 includes a positioning camera 31 and a positioning measurement device 33. The positioning camera 31 is mounted on the mounting platform 20. In the present example it is installed with a fixed attitude relative to the mounting platform 20, but the invention is not limited to this: in other examples according to the invention, the relative attitude between the positioning camera 31 and the mounting platform 20 may be adjustable. For instance, the positioning camera 31 may be mounted on the mounting platform 20 via a motion-controllable gimbal and be able to move relative to the platform, e.g. in horizontal rotation and vertical pitch, to adjust its shooting azimuth and pitch angle. The positioning camera 31 is configured to capture images of the motion environment in real time during the movement of the mounting platform 20, in order to determine and correct the platform's motion path and pose. It may comprise one or more first cameras; each first camera may be a high-speed camera capable of capturing RGB images at, for example, 60 to 200 frames per second. The positioning measurement device 33 is arranged on the mounting platform 20 and senses the platform's spatial six-degree-of-freedom motion in real time, so as to determine the pose of the mounting platform 20 and of each device it carries. In the present embodiment the positioning measurement device 33 includes an inertial measurement unit (IMU) providing six-degree-of-freedom measurements; it may include three single-axis accelerometers and three single-axis gyroscopes and detects the acceleration and angular velocity of the mounting platform 20 in three-dimensional space. From these, the spatial six-degree-of-freedom motion of the mounting platform 20 (translation along, and rotation about, the three orthogonal axes of a spatial coordinate system) is solved to determine the platform's pose and correct its motion path. In other examples the inertial measurement unit may provide nine-degree-of-freedom measurements, for example by adding three single-axis magnetic sensors to the accelerometers and gyroscopes to provide heading information.
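As background for how accelerometer and gyroscope readings become a six-degree-of-freedom pose estimate, here is a minimal strapdown dead-reckoning step (Rodrigues rotation update, gravity-compensated velocity and position integration). It ignores sensor bias, noise, and the vision fusion the patent relies on; the names and the gravity convention are illustrative, not from the patent.

```python
import numpy as np

def integrate_imu_step(R, p, v, accel_body, gyro_body, dt, g=(0.0, 0.0, -9.81)):
    """One dead-reckoning step: R (3x3 world-from-body rotation), p and v
    (world position and velocity), accel_body (specific force, m/s^2),
    gyro_body (angular rate, rad/s), dt (sample interval, s)."""
    # Rotation update from the gyro via Rodrigues' formula.
    theta = np.asarray(gyro_body, float) * dt
    angle = np.linalg.norm(theta)
    if angle > 1e-9:
        k = theta / angle
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = R @ (np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K))
    # Rotate the measured specific force into the world frame and add gravity.
    a_world = R @ np.asarray(accel_body, float) + np.asarray(g)
    p = p + v * dt + 0.5 * a_world * dt * dt
    v = v + a_world * dt
    return R, p, v

# A hovering platform: the accelerometer reads +9.81 m/s^2 of upward specific force.
R, p, v = np.eye(3), np.zeros(3), np.zeros(3)
R, p, v = integrate_imu_step(R, p, v, [0, 0, 9.81], [0, 0, 0.01], dt=0.005)
```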
The detection camera 40 may likewise be mounted on the mounting platform 20 via a motion-controllable gimbal and be movable relative to the platform, e.g. in horizontal rotation and vertical pitch, to adjust its shooting azimuth and pitch angle. The detection camera 40 is configured to capture images of the whole detection object, or of a part of it, while the mounting platform 20 moves along the object's detection motion path; these images are used for the detection and analysis of the object. The detection motion path may run outside the detection object, around its outer contour, or inside it. The detection camera 40 may comprise one or more second cameras; each second camera may be a low-speed camera capturing RGB images at, for example, 3 to 10 frames per second. Herein, for convenience of description, the RGB images captured by the positioning camera 31 and the detection camera 40 are collectively referred to as environment images; an RGB image captured by the positioning camera 31 is called a positioning image and one captured by the detection camera 40 a detection image, and the shooting pose of the positioning camera 31 is called the positioning shooting pose while that of the detection camera 40 is called the detection shooting pose.
The processor 50 is configured to communicate with the ground control device 10, the positioning sensing device 30, the detection camera 40, and the memory 60 in a wired or wireless manner and to perform various processing and calculations. For example, the processor 50 may receive a detection task from the ground control device 10, retrieve the corresponding detection motion path from the memory 60, process and analyze the images captured by the positioning camera 31 and the detection camera 40, receive the motion information sensed by the positioning measurement device 33, calculate the poses of the mounting platform 20 and of the positioning camera 31 and detection camera 40 it carries, and perform the detection and analysis of the detection object. In the present embodiment the processor 50 is arranged on the mounting platform 20; in other embodiments according to the invention it may instead be arranged on the ground control device 10.
The processor 50 mainly includes a pose module 51 and a detection module 55. The pose module 51 includes a current pose determination unit 511, a first motion control unit 513, and a target pose calculation unit 515. The current pose determination unit 511 is configured to receive the positioning images captured by the positioning camera 31 and the sensing signals of the positioning measurement device 33 and to process them to obtain the current pose of the mounting platform 20, which is used to calculate the target detection shooting pose (the optimal shooting pose) of the detection camera 40. The pose of the mounting platform 20 means its position and attitude in the spatial three-axis coordinate system. The current pose determination unit 511 includes several sub-units, such as, but not limited to, an image segmentation unit 5111, a depth calculation unit 5113, and a current pose correction unit 5115.
The image segmentation unit 5111 is configured to segment and recognize the positioning images captured by the positioning camera 31 in order to extract a target image, which may be, for example, an image of the detection object or an image of a detection part of it. While the mounting platform 20 moves toward the detection object and along its detection motion path, the image segmentation unit 5111 performs pixel-level segmentation and recognition on the positioning images (images of the motion environment) containing the detection object, determines the pixel position of the object within each positioning image, and extracts the object's image from it; this is used to calculate the relative position between the mounting platform 20 and the detection object, so that the platform's motion can be corrected to follow the detection motion path. In addition, while the mounting platform 20 moves along the detection motion path, the image segmentation unit 5111 determines the pixel position of the detection part within the positioning image and extracts the image of that part (for example, the nose or a wing of an aircraft), allowing the target pose calculation unit 515 to calculate the target detection shooting pose the detection camera 40 should adopt when capturing the detection image of that part.
The depth calculation unit 5113 is configured to analyze the two-dimensional positioning images captured by the positioning camera 31 and obtain depth information for each pixel, i.e. the distance between the shooting point (the positioning camera 31) and the spatial position corresponding to that pixel. The depth calculation unit 5113 is further configured to calculate the confidence of the pixels of the positioning images by comparing consecutive frames with the corresponding key frames, to construct a spatial cost function of the positioning images, to correct their depth information based on that cost function, and to construct a dense depth map of the surrounding environment, thereby obtaining three-dimensional data from the positioning images.
The current pose correction unit 5115 is configured to correct the dense depth map constructed by the depth calculation unit 5113 according to the design parameters of the detection object, and to correct the coarse pose estimates of the mounting platform 20 and the positioning camera 31 (for example, estimates based on the measurements of the positioning measurement device 33) according to the corrected dense depth map, thereby obtaining their current poses. The pose of the positioning camera 31 is determined from the pose of the mounting platform 20 and the camera's position relative to the platform (including its installation position, azimuth, pitch angle, and so on).
The first motion control unit 513 is configured to correct the motion of the mounting platform 20, based on the position of the detection object and the current pose of the platform calculated by the current pose correction unit 5115, so that the mounting platform 20 enters the detection motion path of the detection object and moves along it.
The target pose calculation unit 515 is configured to determine the current pose of the detection camera 40 from the camera's position relative to the mounting platform 20 (including its installation position, azimuth, pitch angle, and so on) and the current poses of the mounting platform 20 and positioning camera 31 calculated by the current pose correction unit 5115, to determine the detection position at which the mounting platform 20 is currently located, to calculate the target detection shooting pose the detection camera 40 should adopt for capturing the detection image of the detection object, and to output the calculated target detection shooting pose to the detection module 55 in real time.
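Chaining a platform pose with a fixed or gimbal-controlled mounting offset is a plain SE(3) composition. The sketch below shows that composition with made-up numbers (a platform at (10, 2, 5) m, a camera 0.3 m below it and pitched down 30 degrees); it is background math, not code from the patent.

```python
import numpy as np

def camera_pose_in_world(T_world_platform, T_platform_camera):
    """Pose chaining, world <- platform <- camera, as 4x4 homogeneous transforms."""
    return T_world_platform @ T_platform_camera

# Made-up numbers: platform at (10, 2, 5) m, camera mounted 0.3 m lower and
# pitched 30 degrees downward on its gimbal.
T_wp = np.eye(4)
T_wp[:3, 3] = [10.0, 2.0, 5.0]
pitch = np.deg2rad(-30.0)
T_pc = np.eye(4)
T_pc[:3, :3] = np.array([[np.cos(pitch), 0.0, np.sin(pitch)],
                         [0.0,           1.0, 0.0],
                         [-np.sin(pitch), 0.0, np.cos(pitch)]])
T_pc[:3, 3] = [0.0, 0.0, -0.3]
T_wc = camera_pose_in_world(T_wp, T_pc)   # detection camera pose in the world
```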
The detection module 55 is configured to acquire the detection images of the detection object captured by the detection camera 40 and to analyze the state of the object from them, determining whether a defect exists and, if so, its location and type. The detection module 55 includes a second motion control unit 551, a detection image acquisition unit 553, and an analysis unit 555. The second motion control unit 551 is configured to receive the current pose of the mounting platform 20 from the current pose determination unit 511 of the pose module 51 and the target detection shooting pose from the target pose calculation unit 515, and to control the motion of the mounting platform 20 and/or the motion of the detection camera 40 relative to the platform, based on the current platform pose, the calculated target pose, and the relative position of the detection camera 40 and the mounting platform 20, so as to place the detection camera 40 in the target detection shooting pose; this improves the quality of the captured detection images. The detection image acquisition unit 553 is configured to acquire the detection images captured by the detection camera 40 at the target detection shooting pose. The analysis unit 555 is configured to process and analyze those images to determine whether the detection object has a defect and, if so, its location and type. Preferably, the analysis unit 555 performs this processing after the mounting platform 20 has finished photographing the detection object and returned from its route around the object, rather than in real time during the platform's movement; this reduces the real-time computational load on the processor 50 and its configuration requirements.
The memory 60 is configured to store various information and algorithms, including: the types of detection objects, the design parameters of each type, the detection motion path information corresponding to each type, the detection item information corresponding to each detection part, and the defect judgment criteria corresponding to each detection item. The design parameters of a detection object include the design parameters of each of its parts, the assembly parameters between parts, and the overall assembly parameters, and can be based on the object's digital mock-up (DMU) data. The detection motion path information is preset for each detection object and comprises a detection start point, the detection motion path to be followed, and a safety distance, i.e. the minimum distance the mounting platform 20 must keep from the detection object while moving; during detection, the platform's motion may be corrected against this safety distance so that it follows the path. The detection item information of each detection part corresponds to the service manual of that part; for example, when the detection part is an aircraft wing, its detection items include wing-tip detection, wing-edge detection, and airfoil-surface detection. The defect judgment criteria are those given for each detection item in the repair manual of the detection object. While the device detection system 1 detects an object, the processor 50 accesses and invokes the information and algorithms stored in the memory 60 in a wired or wireless manner to carry out the corresponding processing.
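The patent lists what the memory 60 stores without fixing a schema. A hypothetical record shape, with every field name invented for illustration, might look like:

```python
from dataclasses import dataclass, field

@dataclass
class InspectionTask:
    """Hypothetical shape of one stored record; all field names are invented."""
    object_type: str                  # e.g. "aircraft"
    dmu_model_path: str               # digital mock-up (DMU) design data
    start_point: tuple                # detection start point (x, y, z) in metres
    waypoints: list = field(default_factory=list)        # detection motion path
    safety_distance_m: float = 2.0    # minimum platform-to-object distance
    detection_items: dict = field(default_factory=dict)  # part -> list of items
    defect_criteria: dict = field(default_factory=dict)  # item -> manual criterion

task = InspectionTask("aircraft", "models/a320.dmu", (0.0, -30.0, 5.0),
                      detection_items={"wing": ["tip", "edge", "airfoil"]})
```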
The schematic structure of the device detection system 1 has been introduced above. The system processes the environment images captured by the positioning camera 31 during the movement of the mounting platform 20, together with the measurements of the positioning measurement device 33, to achieve simultaneous mapping and localization of the platform. Based on the design parameters of the detection object (e.g., its digital mock-up data), it corrects the pose estimate of the mounting platform 20 and thereby its motion, corrects the detection shooting pose of the detection camera 40 when capturing detection images, and reduces pose drift, so that the captured detection images carry more accurate pose information and the detection analysis becomes more accurate.
A detection method using the device detection system 1 will now be described with reference to the drawings, taking an aircraft as the detection object. Fig. 2 shows a flowchart of the detection method, and Fig. 3 shows a detailed flowchart of the step in Fig. 2 of determining the current pose of the mounting platform.
As shown in Fig. 2, when the aircraft is detected using the device detection system 1, a detection task is first set and detection is started in step S101. A user sets the detection task through the user input interface of the ground control device 10: in the detection task setting interface 11 the user specifies the detection object as an aircraft, enters the position of the aircraft to be detected, and designates the parts to be detected, for example the nose, fuselage, or wings. The user then operates the start item on the start detection interface 13. Once the task is set and the start item operated, in step S102 the device detection system 1 retrieves from the memory 60, according to the task, the design data of the aircraft (the detection object) and the detection motion path information for it, including the detection start point, the detection motion path to be followed by the mounting platform 20, and the safety distance to be maintained between the platform and the aircraft. It then activates the drive of the mounting platform 20 to move it toward the aircraft, activates the positioning camera 31 of the positioning sensing device 30 to capture positioning images, and activates the positioning measurement device 33 to measure the platform's motion.
While the mounting platform 20 moves toward the aircraft, the current pose determination unit 511 of the processor 50 determines the platform's current pose in real time in step S103. This pose is used in subsequent steps to calculate the current poses of the devices mounted on the platform (e.g., the positioning camera 31 and the detection camera 40); the current pose of a mounted device can be determined from its position relative to the platform (including its installation position and its pitch angle, azimuth, and so on relative to the platform) and the platform's current pose.
In step S104, the motion of the mounting platform 20 is corrected according to the current pose determined in step S103, so that the platform enters the detection motion path of the aircraft and moves along it. The detection motion path is the path along which the mounting platform 20 moves to perform the detection of the aircraft; along it, the platform and the aircraft approach each other while maintaining a suitable separation, for example the safety distance. The first motion control unit 513 of the pose module 51 obtains the relative position between the platform and the aircraft from the platform's current pose and the aircraft's position, and controls the platform's motion according to the detection motion path information so that the distance to the detection object is kept at the safety distance and the platform enters the detection motion path and moves to the detection start point.
Once the mounting platform 20 has entered the detection motion path of the aircraft and moved to the detection start point, in step S105 the processor 50 processes the positioning images captured by the positioning camera 31 during the platform's movement along the path, performs pixel-level segmentation and recognition on them, and determines the detection position at which the platform is currently located. First, the image segmentation unit 5111 determines the pixel position of the aircraft (the detection object) in the positioning image, performs pixel-level segmentation and recognition on the image containing the aircraft, and separates the aircraft image from the environment image. Next, based on the design data of the aircraft (e.g., its digital mock-up data), the image segmentation unit 5111 continues the pixel-level segmentation of the aircraft image, dividing it into parts such as the nose, wings, fuselage, and tail. Then, based on this segmentation and the depth of each pixel, the target pose calculation unit 515 determines at which detection position of the aircraft the platform currently is, and determines the current detection item.
In step S106, the target pose calculation unit 515 of the processor 50 calculates and outputs the target detection shooting pose in real time. It retrieves from the memory 60 the detection items of the currently detected part and the corresponding design parameter information, calculates in real time, from the relative position between the mounting platform 20 and that part, the target detection shooting pose the detection camera 40 must adopt for each detection item, and outputs it to the detection module 55. For example, when the platform is at the detection position corresponding to a wing, the target pose calculation unit 515 retrieves the wing's detection items, calculates in real time the target detection shooting pose needed for each of them from the relative position between the platform and the wing, and outputs it to the detection module 55 in real time.
In step S107, the pose of the detection camera 40 is adjusted based on the calculated target detection shooting pose. The second motion control unit 551 of the detection module 55 receives the target pose calculated by the target pose calculation unit 515 of the pose module 51 and, based on that pose, the current platform pose obtained in step S103, and the relative position of the detection camera 40 and the mounting platform 20, controls in real time the motion of the platform and the motion of the detection camera 40 relative to it (for example, via the camera's gimbal), so that the camera adopts the target detection shooting pose for the current detection item of the detection part. For example, in wing detection the target shooting pose for a wing-tip image differs from that for a wing-edge image: after the detection camera 40 has captured the wing-tip image and before it captures the wing-edge image, its shooting pose is adjusted from the wing-tip target pose to the wing-edge target pose.
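A minimal version of the gimbal command implied here is to convert the line of sight from the camera to the target part into pan and tilt angles. The sketch below does exactly that, ignoring platform roll and the feedback control loop; the function and variable names are invented.

```python
import numpy as np

def pan_tilt_to_target(camera_pos, target_pos):
    """Pan (azimuth) and tilt (elevation) in degrees that point the camera's
    optical axis at the target; platform roll is ignored."""
    d = np.asarray(target_pos, float) - np.asarray(camera_pos, float)
    pan = np.degrees(np.arctan2(d[1], d[0]))             # rotation about vertical
    tilt = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))
    return pan, tilt

# Aim from 5 m altitude at a wing tip 8 m ahead and 3 m lower: tilt is about -20.6 deg.
pan, tilt = pan_tilt_to_target([0.0, 0.0, 5.0], [8.0, 0.0, 2.0])
```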
In step S108, the detection image acquisition unit 553 of the detection module 55 acquires the detection images captured by the detection camera 40 at the target detection shooting pose; for wing detection, for example, it acquires the wing-edge and wing-tip images captured at their respective target poses. Next, in step S109, it is determined whether the detection task is complete. If it is, and no further detection part of the aircraft (the detection object) remains, the mounting platform 20 returns in step S110 and the detection ends. If the task is not complete and a next part must be detected, the process returns to step S103 and the subsequent steps repeat until all detections are finished. After the mounting platform 20 has returned, the analysis unit 555 of the detection module 55 performs defect analysis on the detection images acquired by the detection image acquisition unit 553: it processes and analyzes the images captured at the target detection shooting poses, compares their information with the digital design data of the detection object stored in the memory 60 (e.g., its digital mock-up data), and determines, based on the object's service manual, whether a defect exists at the inspected part and, if so, its position and type.
Fig. 3 shows the detailed flow of step S103, illustrating a method for determining the current pose of the mounting platform according to an embodiment of the invention.
As shown in Fig. 3, in step S1031 the processor 50 first acquires the positioning images captured by the positioning camera 31 and the motion information of the mounting platform 20 measured by the positioning measurement device 33 (e.g., an inertial measurement unit). Then, in step S1032, depth estimation of the positioning images is performed based on the received images and motion information. The depth calculation unit 5113 calculates the distance between the positioning camera 31 and the spatial position corresponding to each pixel of the positioning image and takes this as the coarse depth value of that pixel. The depth of the pixels may be calculated on the principle of stereo vision, for example monocular or binocular stereo vision.
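For the binocular case, the coarse per-pixel depth follows the classic stereo relation Z = f * B / d. A small sketch, with example focal length and baseline values that are not from the patent:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Z = f * B / d for every pixel with positive disparity; pixels with zero
    disparity are reported as infinitely far away."""
    disparity_px = np.asarray(disparity_px, float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Example values: a 4 px disparity with f = 700 px and a 0.12 m baseline gives 21 m.
print(depth_from_disparity(np.array([4.0, 0.0]), 700.0, 0.12))
```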
In step S1033, the processor 50 calculates a matching confidence for each pixel across consecutive frames of positioning images. The depth calculation unit 5113 constructs, from the coarse depth value of each pixel, a depth range containing that value; builds a three-dimensional space with the depth direction as one coordinate axis; divides that space into N parts along the depth direction using depth labels; and extracts the information contained in each neighborhood space, such as shooting angle, position, and pixel shift. Consecutive frames from the shooting sequence of the positioning camera 31 are projected onto the related key-frame positioning images, and the matching confidence is calculated from the consistency of the same pixel across the consecutive frames: the higher the consistency, the higher the confidence. (When the positioning camera 31 captures an image, environmental influences such as light, shadow, or occlusion introduce noise, making the same pixel inconsistent across consecutive frames and causing deviations.) Within the shooting sequence, a key-frame positioning image must satisfy at least one of the following conditions: 1) the pose angle change between consecutive frames exceeds a predetermined angle change threshold, which in one example is 5 degrees; 2) the spatial displacement between consecutive frames exceeds a predetermined displacement threshold, which in one example is 0.1 m; or 3) the ratio of valid pixels under the affine transformation between the images of consecutive frames falls below an effective pixel ratio threshold, which in one example is 0.5.
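The three key-frame conditions translate directly into a predicate. The thresholds below (5 degrees, 0.1 m, 0.5) are the example values given in the text, while the function name and argument order are invented:

```python
def is_new_keyframe(pose_angle_change_deg, displacement_m, valid_pixel_ratio,
                    angle_thresh=5.0, disp_thresh=0.1, ratio_thresh=0.5):
    """A frame becomes a key frame when at least one condition holds:
    1) pose angle change > 5 degrees, 2) displacement > 0.1 m,
    3) valid-pixel ratio under the affine warp < 0.5."""
    return (pose_angle_change_deg > angle_thresh
            or displacement_m > disp_thresh
            or valid_pixel_ratio < ratio_thresh)

assert is_new_keyframe(6.0, 0.02, 0.9)       # large rotation alone qualifies
assert not is_new_keyframe(1.0, 0.05, 0.8)   # no condition met
```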
In step S1034, a spatial cost function of the positioning image is constructed by image interpolation, based on the per-pixel matching confidences calculated in step S1033.
Then, in step S1035, the depth calculation unit 5113 corrects the coarse depth values of the positioning image according to the spatial cost function constructed in step S1034, calculating depth correction values and thereby correcting the image's depth information. In step S1036, the depth calculation unit 5113 constructs a dense depth map of the surrounding environment from the corrected depth information, realizing the three-dimensional reconstruction of the two-dimensional positioning image.
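One common concrete form of steps S1035 and S1036, consistent with the N depth labels introduced above, is to pick per pixel the label that minimizes the cost volume. The sketch below assumes the cost volume has already been built in step S1034 and is only an interpretation of the patent's wording, not its stated method:

```python
import numpy as np

def refine_depth_from_cost_volume(cost_volume, depth_labels):
    """Pick, per pixel, the depth label that minimizes the spatial cost.

    cost_volume: (N, H, W) cost of each of the N depth labels at each pixel
    depth_labels: (N,) metric depths of the hypotheses
    """
    best = np.argmin(cost_volume, axis=0)        # (H, W) winning label index
    return np.asarray(depth_labels)[best]        # (H, W) corrected metric depth

# Toy example: 8 depth hypotheses between 2 m and 9 m for a 64 x 80 image.
labels = np.linspace(2.0, 9.0, 8)
cost = np.random.rand(8, 64, 80)
dense_depth = refine_depth_from_cost_volume(cost, labels)
```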
Next, in step S1037, based on the dense depth map of the surrounding environment and the design data of the aircraft retrieved from the memory 60 (e.g., its digital mock-up data), the current pose correction unit 5115 corrects the pose estimate of the mounting platform 20 and obtains its current pose, achieving vision-based simultaneous mapping and localization of the platform. Specifically, the target image identified by pixel-level segmentation of the positioning image (the image of the aircraft or of some of its detection parts) is converted pixel by pixel into a three-dimensional spatial coordinate system to form a three-dimensional point cloud. The point cloud is then divided into groups according to the design parameters of the aircraft (e.g., its DMU model data), forming several sub-point-clouds; for example, the cloud may be split by a clustering algorithm according to the geometric parameter relationships among points belonging to the same aircraft part. The points in each sub-point-cloud are then matched against the design parameters of the corresponding points of the aircraft and corrected using those parameters according to the matching result. Finally, the dense depth map is optimized using the corrected sub-point-clouds, yielding the corrected dense depth map. Through this processing, the design data of the aircraft is used to correct the dense depth map, realizing more accurate vision-based simultaneous mapping and localization of the mounting platform 20 and a more accurate current pose, which improves the positioning precision of the platform and the detection camera it carries, the quality of the detection images, and the accuracy of the detection analysis.
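The pixel-by-pixel conversion into a three-dimensional coordinate system described above is a standard pinhole back-projection. A sketch with an illustrative intrinsic matrix follows; the patent specifies no camera parameters, so every number here is an assumption.

```python
import numpy as np

def backproject_target(depth, mask, K, T_world_cam):
    """Lift the masked target-image pixels into world-frame 3D points.

    depth: (H, W) corrected depth map; mask: (H, W) bool target segmentation;
    K: 3x3 pinhole intrinsics; T_world_cam: 4x4 camera-to-world transform.
    """
    v, u = np.nonzero(mask)                         # pixel rows and columns
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)])  # (4, M) homogeneous
    return (T_world_cam @ pts_cam)[:3].T            # (M, 3) world points

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 6.0)
mask = np.zeros((480, 640), bool)
mask[200:280, 300:340] = True                       # hypothetical target region
cloud = backproject_target(depth, mask, K, np.eye(4))
```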
The device detection system 1 according to the present invention processes consecutive frames of positioning images, constructs a spatial cost function based on the confidence of their pixels, and corrects their depth information according to that cost function to build a dense depth map of the surrounding environment, thereby optimizing the depth calculation of the positioning-image sequence of the positioning camera 31 and improving the accuracy of each pixel's depth. Moreover, the system combines the dense depth map with the digital design data of the detection object and further corrects the map against that data, thereby correcting the pose estimate of the mounting platform 20, achieving more accurate vision-based simultaneous mapping and localization, avoiding deviations caused by pixel inconsistency between frames, and minimizing the pose drift of the camera. Positioning accuracy during detection therefore does not depend entirely on the quality of the captured environment images: even when they are noisy due to environmental influences (such as illumination, shadow, or occlusion), the pose estimate can be corrected, improving positioning accuracy and providing a more reliable basis for subsequent motion control, shooting pose control, and defect analysis. The system and method are thus applicable whether the detection object is indoors or outdoors, and whether its outer contour or its interior is detected.
The device detection system 1 according to the present invention has been described above with reference to the drawings, and its device detection method has been described taking an aircraft as the detection object. These examples should not, however, be taken as limiting the device detection system according to the present invention, which can also be applied to the detection of other devices.
Herein, exemplary embodiments of the present invention have been described in detail, but it should be understood that the present invention is not limited to the specific embodiments described and illustrated in detail above. Various modifications and alterations of this invention will become apparent to those skilled in the art without departing from the spirit and scope of this invention. All such variations and modifications are intended to be within the scope of the present invention. Moreover, all the components described herein may be replaced by other technically equivalent components.

Claims (16)

CN202011309798.XA (priority date 2020-11-20, filing date 2020-11-20): Device detection system and device detection method. Granted as CN114554030B (en); status: Active.

Priority Applications (1)

Application Number: CN202011309798.XA
Priority Date / Filing Date: 2020-11-20 / 2020-11-20
Title: Device detection system and device detection method (granted as CN114554030B)

Applications Claiming Priority (1)

Application Number: CN202011309798.XA
Priority Date / Filing Date: 2020-11-20 / 2020-11-20
Title: Device detection system and device detection method

Publications (2)

Publication Number: Publication Date
CN114554030A (en): 2022-05-27
CN114554030B (en): 2023-04-07

Family

ID=81660470

Family Applications (1)

Application Number: CN202011309798.XA (Active)
Priority Date / Filing Date: 2020-11-20 / 2020-11-20
Title: Device detection system and device detection method (CN114554030B)

Country Status (1)

Country: Link
CN: CN114554030B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number, priority date, publication date, assignee, title:
CN115147411B (en)*, 2022-08-30, 2022-11-22, 启东赢维数据信息科技有限公司: Labeler intelligent positioning method based on artificial intelligence
CN115620337B (en)*, 2022-10-11, 2024-08-30, 深圳市谷奇创新科技有限公司: Optical fiber sensor monitoring method and system for vital signs

Citations (7)

* Cited by examiner, † Cited by third party
Publication number, priority date, publication date, assignee, title:
CN105388905A (en)*, 2015-10-30, 2016-03-09, 深圳一电航空技术有限公司: Unmanned aerial vehicle flight control method and device
CN108416840A (en)*, 2018-03-14, 2018-08-17, 大连理工大学: A dense reconstruction method for 3D scenes based on a monocular camera
CN109341694A (en)*, 2018-11-12, 2019-02-15, 哈尔滨理工大学: An autonomous positioning and navigation method for a mobile detection robot
CN109596118A (en)*, 2018-11-22, 2019-04-09, 亮风台(上海)信息科技有限公司: A method and apparatus for obtaining spatial position information of a target object
CN110806411A (en)*, 2019-11-07, 2020-02-18, 武汉理工大学: Unmanned aerial vehicle track detection system based on line-structured light
CN111077907A (en)*, 2019-12-30, 2020-04-28, 哈尔滨理工大学: Autonomous positioning method for an outdoor unmanned aerial vehicle
CN111354043A (en)*, 2020-02-21, 2020-06-30, 集美大学: A three-dimensional attitude estimation method and device based on multi-sensor fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number, priority date, publication date, assignee, title:
GB2506338A (en)*, 2012-07-30, 2014-04-02, Sony Comp Entertainment Europe: A method of localisation and mapping


Also Published As

Publication number: Publication date
CN114554030A (en): 2022-05-27

Similar Documents

Publication: Title
CN108171733B (en): Method of registering two or more three-dimensional 3D point clouds
CN112567201B (en): Distance measuring method and device
CN108399642B (en): General target following method and system fusing rotor unmanned aerial vehicle IMU data
US9013576B2 (en): Aerial photograph image pickup method and aerial photograph image pickup apparatus
JP3833786B2 (en): 3D self-position recognition device for moving objects
CN103020952B (en): Information processing device and information processing method
JPWO2018143263A1 (en): Imaging control apparatus, imaging control method, and program
WO2018027339A1 (en): Copyright notice
CN114554030B (en): Device detection system and device detection method
CN110720113A (en): Parameter processing method and device, camera equipment and aircraft
JP2018009918A (en): Self-position detection device, mobile device, and self-position detection method
CN115950435A (en): Real-time positioning method for unmanned aerial vehicle inspection images
CN120339396A (en): A positioning method and system for a tunnel boring machine
CN118115582A (en): Method and system for identifying the pose of a tower-crane hook with a monocular camera based on YOLO
CN111103608A (en): Positioning device and method used in forestry surveying work
JP5267100B2 (en): Motion estimation apparatus and program
CN208314856U (en): A system for monocular airborne target detection
JP4132068B2 (en): Image processing apparatus, three-dimensional measuring apparatus, and program for image processing apparatus
CN114003041A (en): A multi-unmanned-vehicle collaborative detection system
JP7437930B2 (en): Mobile objects and imaging systems
CN114648577B (en): Equipment detection method and equipment detection system
JP7317684B2 (en): Mobile object, information processing device, and imaging system
Hsia et al.: Height estimation via stereo vision system for unmanned helicopter autonomous landing
CN113566778A (en): Multipoint perspective imaging unmanned aerial vehicle ground flight pose measurement method
CN113011212A (en): Image recognition method and device and vehicle

Legal Events

Code: Title
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
