
Cloud deck target conversion control method, device, equipment and storage medium

Info

Publication number
CN112714287B
CN112714287B (application CN202011543459.8A)
Authority
CN
China
Prior art keywords
target object
target
distance
head equipment
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011543459.8A
Other languages
Chinese (zh)
Other versions
CN112714287A (en)
Inventor
李培俊
李方
陈曦
付守海
周伟亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Keystar Intelligence Robot Co ltd
Original Assignee
Guangdong Keystar Intelligence Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Keystar Intelligence Robot Co ltd
Priority to CN202011543459.8A
Publication of CN112714287A
Priority to PCT/CN2021/099420 (WO2022134490A1)
Application granted
Publication of CN112714287B
Legal status: Active
Anticipated expiration

Abstract

The invention relates to a method, a device, equipment and a storage medium for controlling the conversion of a cradle head target. The method comprises: obtaining a first image of a target object shot by cradle head equipment under a first multiple; determining the position of the target object in the first image as the initial position of the target object in the shooting area of the cradle head equipment; selecting a triangle model matched with the actual distance between the cradle head equipment and the target object, and calculating the rotation parameters required by the cradle head equipment to move the target object from the initial position to the target position of the shooting area; and controlling the cradle head equipment to rotate according to the rotation parameters, so that the cradle head equipment rotates until the target object is at the target position of the shooting area, and a second image of the target object is shot at a second multiple. In this way, the rotation parameters of the cradle head equipment are determined quickly and accurately, the cradle head target conversion control is more stable and smooth, and control time is saved.

Description

Cloud deck target conversion control method, device, equipment and storage medium
Technical Field
The present invention relates to the field of detection robots, and in particular, to a method, an apparatus, a device, and a storage medium for controlling target conversion of a pan/tilt.
Background
Generally, a high-voltage transmission line is installed in the open air and requires regular inspection and maintenance. An obstacle surmounting robot can cross obstacles such as a high-voltage tower or line hardware fittings, travel on the high-voltage transmission line, and detect hidden defects of the high-voltage transmission line and of the high-voltage towers along it.
In the prior art, an upper computer in the background can send control instructions to the obstacle surmounting robot to control it to travel on the high-voltage transmission line and to control a cradle head on the robot to record field pictures. Analysis personnel in the background then analyze hidden-danger and defect conditions of the high-voltage transmission line based on the field pictures.
Further, when the cradle head on the obstacle surmounting robot is controlled to record the field picture, a control instruction can be sent to the cradle head so as to control the cradle head to execute operations such as adjusting the rotation angle, shooting multiple, shooting focal length and the like.
It should be noted that when the pan-tilt is manually controlled to move in order to obtain an image of a target area, even a slight adjustment of the camera position under a high camera multiple easily causes large shake of the image capturing area and makes the control unstable; as a result, the target area cannot be quickly positioned and captured manually.
Disclosure of Invention
The invention aims to provide a method, a device, equipment and a storage medium for controlling the conversion of a tripod head target, which realize the rapidity and the accuracy of determining the rotation parameters of tripod head equipment, so that the conversion control of the tripod head target is more stable and smoother, and the technical effect of saving the control time is achieved.
In order to achieve the above object, a first aspect of the present application provides a method for controlling target conversion of a pan/tilt head, including:
acquiring a first image shot by the cradle head equipment under a first multiple;
determining the position of a target object in the first image as the initial position of the target object in a shooting area of the holder equipment;
Selecting a triangle model matched with the actual distance between the tripod head equipment and the target object, and calculating rotation parameters required by the tripod head equipment when the target object is moved from the initial position to the target position of the shooting area, wherein the vertexes of the triangle model are used for respectively representing the positions of the tripod head equipment, the target object before relative movement and the target object after relative movement in space;
and controlling the cradle head equipment by using the rotation parameters so as to enable the cradle head equipment to rotate to the target position of the target object in the shooting area, and shooting the target position with a second multiple to obtain a second image.
Further, the determining the position of the target object in the first image as the initial position of the target object in the image capturing area of the pan-tilt device includes:
Displaying the first image;
Receiving a first user operation for the first image;
Responding to the first user operation to draw a target area in the area where the target object is located in the first image;
and determining the central position of the target area as the initial position of the target object in the shooting area of the cradle head equipment.
Further, the determining the position of the target object in the first image as the initial position of the target object in the image capturing area of the pan-tilt device includes:
preprocessing the first image, and extracting image features from the first image;
inputting the image characteristics into a target recognition model to recognize a target area where a target object in the first image is located;
and determining the central position of the target area as the initial position of the target object in the shooting area of the cradle head equipment.
Further, the selecting a triangle model matched with the actual distance between the cradle head device and the target object, and calculating rotation parameters required by the cradle head device when the target object moves from the initial position to the target position of the image capturing area, including:
taking the difference value between the initial position and the target position as a pixel moving distance of the target object moving in the image pickup area;
Converting the pixel moving distance into the actual moving distance of the target object relative to the movement of the cradle head equipment according to the linear corresponding relation between the actual distance and the pixel distance, which is measured in advance;
Selecting a triangle model based on the actual distance between the tripod head equipment and the target object, wherein the triangle model is used for representing a space triangle relation formed among the tripod head equipment, the target object before relative movement and the target object after relative movement, and calculating the rotation angle of the tripod head equipment by using the actual movement distance;
and taking the rotation angle as a rotation parameter of the cradle head equipment.
Further, when the actual distance between the tripod head equipment and the target object exceeds a preset distance threshold, the tripod head equipment, the target object before relative movement and the target object after relative movement form an isosceles triangle in space;
The selecting a triangle model based on the actual distance between the tripod head equipment and the target object, for representing a spatial triangle relationship formed among the tripod head equipment, the target object before relative movement and the target object after relative movement, and calculating to obtain the rotation angle of the tripod head equipment by using the actual movement distance, includes:
determining an actual distance between the cradle head device and the target object;
taking the actual distance as the length of the waist of the isosceles triangle and the actual moving distance as the length of the bottom edge of the isosceles triangle, and obtaining the angle of the diagonal angle of the bottom edge based on the triangle cosine theorem;
and determining the angle of the opposite angle of the bottom edge as the rotation angle of the cradle head equipment.
Further, the determining the actual distance between the pan-tilt device and the target object includes:
acquiring the actual size of the target object;
determining a pixel size of the target object in the first image;
determining the ratio between the pixel size and the actual size as an actual ratio;
When the calibration object is in a unit distance from the cradle head equipment, the ratio between the actual distance corresponding to the calibration object and the pixel distance, which is measured in advance, is used as a reference ratio;
Determining the ratio between the actual proportion and the reference proportion as a distance multiple;
and taking the product of the distance multiple and the unit distance as the actual distance between the cradle head equipment and the target object.
Further, the determining the actual distance between the pan-tilt device and the target object includes:
moving the cradle head equipment to a preset point around the target object;
obtaining a target distance between the preset point and the target object;
and taking the target distance as an actual distance between the cradle head equipment and the target object.
Further, when the actual distance between the cradle head device and the target object is lower than or equal to a preset distance threshold, the cradle head device, the target object before relative movement and the target object after relative movement form a right triangle in space; one right-angle side corresponds to the connecting line between the target object before the relative movement and the target object after the relative movement; the other right-angle side corresponds to the connecting line between the target object after the relative movement and the cradle head equipment; and the hypotenuse corresponds to the connecting line between the cradle head equipment and the target object before the relative movement;
The selecting a triangle model based on the actual distance between the tripod head equipment and the target object, for representing a spatial triangle relationship formed among the tripod head equipment, the target object before relative movement and the target object after relative movement, and calculating to obtain the rotation angle of the tripod head equipment by using the actual movement distance, includes:
taking the actual distance between the target object after the relative movement and the cradle head equipment as a first distance;
calculating the ratio of the actual moving distance to the first distance;
Calculating an arctangent value corresponding to the ratio;
and determining the arctangent value as the rotation angle of the cradle head equipment.
Further, before converting the pixel moving distance into the actual moving distance of the target object moving relative to the pan-tilt device according to the predetermined linear correspondence between the actual distance and the pixel distance, the method further includes:
Calibrating an image pickup device in the cradle head equipment to obtain a linear corresponding relation between an actual distance and a pixel distance, which corresponds to the image pickup device.
Further, before the second image is obtained by shooting at the second multiple, the method further comprises:
determining the target proportion of the target object in the image capturing area when the target object is moved to the target position of the image capturing area;
determining that the target object occupies the original proportion of the first image in the first image;
taking the ratio between the target proportion and the original proportion as the magnification;
and taking the product of the first multiple and the magnification as a second multiple.
In order to achieve the above object, a second aspect of the present application provides a pan/tilt target conversion control device, including:
the first image acquisition module is used for acquiring a first image of a target object shot by the cradle head equipment under a first multiple;
The initial position determining module is used for determining the position of the target object in the first image as the initial position of the target object in the shooting area of the cradle head equipment;
The rotation parameter calculation module is used for selecting a triangle model matched with the actual distance between the tripod head equipment and the target object, and calculating rotation parameters required by the tripod head equipment when the target object is moved from the initial position to the target position of the shooting area, wherein the vertexes of the triangle model are used for respectively representing the positions of the tripod head equipment, the target object before relative movement and the target object after relative movement in space;
And the second image acquisition module is used for controlling the cradle head equipment to rotate according to the rotation parameters so as to enable the cradle head equipment to rotate to the target position of the target object in the shooting area, and shooting the second image of the target object in a second multiple.
To achieve the above object, a third aspect of the present application provides a pan/tilt target conversion control apparatus, including: a memory and one or more processors;
the memory is used for storing one or more programs;
When the one or more programs are executed by the one or more processors, the one or more processors implement the pan-tilt target conversion control method according to any one of the first aspects.
To achieve the above object, a fourth aspect of the present application provides a storage medium containing computer-executable instructions for performing the pan-tilt target conversion control method according to any one of the first aspects when executed by a computer processor.
From the above, according to the technical scheme provided by the application, a first image of the target object shot by the cradle head equipment under a first multiple is obtained; the position of the target object in the first image is determined as the initial position of the target object in the shooting area of the cradle head equipment; a triangle model matched with the actual distance between the cradle head equipment and the target object is selected, and the rotation parameters required by the cradle head equipment to move the target object from the initial position to the target position of the shooting area are calculated, wherein the vertexes of the triangle model respectively represent the positions in space of the cradle head equipment, the target object before relative movement and the target object after relative movement; and the cradle head equipment is controlled to rotate according to the rotation parameters, so that the cradle head equipment rotates until the target object is at the target position of the shooting area, and a second image of the target object is shot at a second multiple. In this way, the rotation parameters of the cradle head equipment are determined quickly and accurately, the cradle head target conversion control is more stable and smooth, and control time is saved.
Drawings
Fig. 1A is a flowchart of a method for controlling conversion of a pan/tilt target according to embodiment 1 of the present invention;
Fig. 1B is a schematic diagram of a target object in an initial position according to embodiment 1 of the present invention;
Fig. 1C is a schematic diagram of a target object moving in an image capturing area according to embodiment 1 of the present invention;
Fig. 1D is a schematic diagram of spatial positions of a pan-tilt device, a target object before relative movement, and a target object after relative movement according to embodiment 1 of the present invention;
Fig. 1E is a schematic diagram of spatial positions of another pan-tilt device, a target object before relative movement, and a target object after relative movement according to embodiment 1 of the present invention;
Fig. 2 is a schematic structural diagram of a pan-tilt target conversion control device according to embodiment 2 of the present invention;
fig. 3 is a schematic structural diagram of a pan-tilt target conversion control device according to embodiment 3 of the present invention.
Detailed Description
The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
Example 1
The application provides a method for controlling the conversion of a cradle head target, which can be executed by a cradle head target conversion control device, wherein the cradle head target conversion control device can be realized in a software and/or hardware mode and is integrated in cradle head target conversion control equipment. Optionally, the pan-tilt target conversion control device may be an upper computer of the pan-tilt device, where the upper computer includes, but is not limited to, a terminal such as a computer, a server, and the like. In this embodiment, the case where the pan-tilt target conversion control device is a server is taken as an example for detailed description.
Fig. 1A is a flowchart of a method for controlling target conversion of a pan/tilt head according to embodiment 1 of the present invention. Referring to fig. 1A, the method may include the steps of:
S110, acquiring a first image of a target object shot by the cradle head device under a first multiple.
The embodiment can be applied to remote control of the cradle head equipment in the obstacle surmounting robot. Specifically, the obstacle surmounting robot can travel on a high-voltage transmission line and record images or pictures in the high-voltage transmission line through a cradle head device. Furthermore, the self or surrounding environment of the high-voltage transmission line can be analyzed based on the image or the image, so that fault detection and maintenance can be timely carried out.
Generally, on one hand, the upper computer can send a control instruction to the obstacle surmounting robot so as to control the obstacle surmounting robot to travel along the high-voltage transmission line; on the other hand, the control instruction can be sent to the cradle head equipment on the obstacle surmounting robot so as to control the operations of rotation, shooting and the like of the cradle head equipment.
In this embodiment, the target object is an object that needs focus detection and is focused to be photographed. For example, for an obstacle surmounting robot for detecting a high voltage transmission line, the target object is a key component in the high voltage transmission line, such as an insulator string with a vertically suspended tower head, and also such as a damper.
In this embodiment, the upper computer may control the cradle head device to capture the first image on the high-voltage transmission line during the process of controlling the obstacle surmounting robot to travel.
It should be noted that in this embodiment, the first multiple may be configured to be a low multiple, for example, 1 multiple, so that the pan-tilt device may acquire as many fields of view as possible on the travel path of the obstacle surmounting robot.
In addition, in this embodiment, a target object in an image, such as the target object in the first image described in this embodiment, refers to a pixel point corresponding to an actual target object in the first image.
S120, determining the position of the target object in the first image as the initial position of the target object in the shooting area of the cradle head equipment.
In this embodiment, the relative position of the target object in the image capturing area refers to a position where the actual target object is projected to the image capturing area through an imaging element such as a convex lens in the image capturing apparatus. Further, for a photosensitive type image pickup apparatus, the image pickup area may correspond to a photosensitive area of the image pickup apparatus. In short, the imaging area corresponds to an area range that can be imaged by the imaging device, that is, an area range that can be seen by an image captured by the imaging device.
In general, the target objects are distributed at different positions in the travel path of the obstacle surmounting robot, and when the cradle head device uses the same rotation angle for recording, the target objects easily appear at the edge position of the image pickup area, which is not beneficial to clearly acquiring the image corresponding to the target objects, and is also more unfavorable for performing fault analysis on the target objects based on the image later.
In this embodiment, in order to timely acquire a clear image of the target object, the pan-tilt device may be controlled to rotate, i.e. the imaging angle of the pan-tilt device may be adjusted, so that the target object falls at a target position in the imaging area. The target position is closer to the center of the imaging region than the edge position. Of course, the target position can be set directly as the geometric center of the imaging region.
Fig. 1B is a schematic diagram of a target object in an initial position according to embodiment 1 of the present invention. Referring to fig. 1B, in this embodiment, a target area where a target object is located may be identified in a first image, and a geometric center of the target area is taken as an initial position of the target object in an image capturing area. As in fig. 1B, the target area of 3 target objects (insulator strings) is identified using 3 rectangular boxes, the geometric center of which can be used as the position of the target object in the first image, that is, the initial position of the target object in the image capturing area.
Specifically, in this embodiment, two ways may be adopted to determine the initial position of the target object in the image capturing area of the pan-tilt device, including: manual and automatic.
1. By hand
In this embodiment, the upper computer may receive the first image from the pan-tilt device and display the first image; then, receiving a first user operation for the first image, such as a user operation of a picture frame; responding to a first user operation, and drawing a target area in an area where a target object is located in a first image; and determining the central position of the target area as the initial position of the target object in the shooting area of the cradle head equipment.
Wherein the center position is the geometric center of the target area. If the target area is a rectangular frame, the center position is the center of the rectangular frame; for another example, when the target area is a circular frame, the center position is the center of the circular frame.
2. Automatic mode
In this embodiment, the preprocessing operation may be performed on the first image, and the image features may be extracted from the first image; inputting the image characteristics into a target recognition model to recognize a target area where a target object in the first image is located; and determining the central position of the target area as the initial position of the target object in the shooting area of the cradle head equipment.
The preprocessing operation can use an image noise reduction algorithm to remove noise interference generated by an imaging device or an external environment so as to obtain a first image with less noise and high definition; an image enhancement algorithm may also be used to increase the contrast of the first image, increase or decrease the edge information of the target object, etc. Specifically, the contrast ratio can be stretched to enhance the dynamic range of gray scale; histogram equalization may also be used to equalize the global image, etc.
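For illustration only, a preprocessing pipeline of this kind could be assembled with OpenCV; the specific filters, kernel sizes and color conversion below are assumptions rather than settings prescribed by this description.

import cv2

# Sketch of a possible preprocessing step: denoise the first image and
# equalize its histogram before feature extraction. Filter choices and
# parameters are illustrative assumptions.
def preprocess(first_image_bgr):
    denoised = cv2.GaussianBlur(first_image_bgr, (5, 5), 0)   # suppress sensor/environment noise
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)         # single-channel image for equalization
    return cv2.equalizeHist(gray)                             # histogram equalization to boost contrast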
Further, the image features may be edge features in the first image, or the like. Wherein, the target recognition model can adopt target detection algorithms such as MobileNet-SSD, SSD, YOLO, faster-RCNN, FPN and the like. Preferably, the object recognition model may be set as a model using MobileNet-SSD object detection algorithm.
Further, after the image features are input into the target recognition model, the type of the target object can be recognized through the recognition processing of the target recognition model, and the position and the size of the target area of the target object in the image can also be recognized. Accordingly, a rectangular box can be drawn in the first image according to the position and the size, and the rectangular box is used for identifying the target area where the target object is located.
Further, the center position of the target area may be used as the initial position of the target object in the imaging area of the pan/tilt device.
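The recognition-based flow above might look like the following sketch; detect_targets is a hypothetical stand-in for whatever detector (for example, a MobileNet-SSD model) is actually deployed, and its bounding-box format is assumed.

# Minimal sketch: derive the initial position of each target object from the
# bounding boxes returned by a detector. `detect_targets` is a hypothetical
# helper standing in for the target recognition model.

def box_center(box):
    """Return the geometric center (x, y) of a box given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def initial_positions(first_image, detect_targets):
    """Run the (assumed) detector on the first image and return one initial
    position per detected target area."""
    boxes = detect_targets(first_image)   # assumed format: [(x_min, y_min, x_max, y_max), ...]
    return [box_center(b) for b in boxes]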
S130, calculating rotation parameters required by the cradle head equipment when the target object moves from the initial position to the target position of the shooting area.
In this embodiment, the rotation parameter may correspond to a rotation angle. In addition, the cradle head device can rotate in several dimensions, and can correspond to several groups of rotation angles. For example, the pan-tilt device may rotate in a vertical direction and a horizontal direction, and the rotation parameter may correspond to a rotation angle of the pan-tilt device in the vertical direction and a rotation angle of the pan-tilt device in the horizontal direction.
In this embodiment, when the cradle head device rotates, the corresponding image in the image capturing area of the image capturing apparatus on the cradle head device also changes. That is, as the cradle head device rotates, the relative position of the target object in the imaging area also rotates, and the two are in linear correspondence.
In a specific embodiment, fig. 1C is a schematic diagram of a target object moving in an image capturing area according to embodiment 1 of the present invention. Referring to fig. 1C, in the present embodiment, when the cradle head apparatus photographs a high-voltage power transmission line at a first photographing angle, a target object (damper) is located at the upper right of the photographing region. When the pan/tilt apparatus is rotated from the first imaging angle to the second imaging angle by the rotation angle α, the target object (damper) is located at the geometric center of the imaging area.
In this embodiment, based on the linear correspondence of the rotation, the rotation parameter required by the pan/tilt device, such as the rotation angle α in fig. 1C, can be calculated when the target object moves to the target position. It should be noted that, when the pan-tilt device can rotate in the vertical direction and the horizontal direction, the rotation angle α may be denoted as (αx, αy), where αx denotes the rotation angle in the horizontal direction and αy denotes the rotation angle in the vertical direction.
Further, in this embodiment, the step S130 may be refined into steps S131 to S134:
S131, selecting a triangle model matched with the actual distance between the cradle head equipment and the target object, and calculating rotation parameters required by the cradle head equipment when the target object is moved from the initial position to the target position of the shooting area;
the vertex of the triangle model is used for respectively representing the positions of the tripod head equipment, the target object before relative movement and the target object after relative movement in space.
In this embodiment, when the pan-tilt device can rotate in the vertical direction and the horizontal direction, the rotation angle in the horizontal direction and the rotation angle in the vertical direction can be solved respectively. And then, controlling the cradle head equipment to rotate in the vertical direction and the horizontal direction respectively.
In this embodiment, a description will be given of an example of solving a rotation parameter (rotation angle) in the horizontal direction.
Specifically, assuming that the horizontal coordinate of the initial position of the target object in the image pickup area is x0 and the horizontal coordinate of the target position is x1, the pixel shift distance dpixx by which the target object is shifted in the image pickup area can be expressed as (x1 - x0).
S132, converting the pixel moving distance into the actual moving distance of the target object relative to the holder equipment according to the linear corresponding relation between the actual distance and the pixel distance, which is measured in advance.
In this embodiment, the imaging device in the pan-tilt device may be calibrated to obtain a linear correspondence between the actual distance and the pixel distance corresponding to the imaging device.
By way of example, the calibration method may be as follows: a calibration object is provided, which may be a piece of rectangular cardboard of 0.285 x 0.289 m. A picture is taken when the distance D between the image pickup device and the calibration object is 1, 2, 3, 4 and 5 m respectively.
Further, the pixel distance of the calibration object in the photographed image can be obtained, and the proportional relation between the actual distance and the pixel distance can be calculated.
For example, the data obtained are:
1. The distance between the camera device and the calibration object is D=1 m; the pixel distance of the calibration object in the image is x=200 pixels and y=300 pixels, where x denotes width and y denotes height;
2. the distance between the camera device and the calibration object is D=2 m; the pixel distance of the calibration object in the image is x=100 pixels and y=150 pixels;
3. the distance between the camera device and the calibration object is D=3 m; the pixel distance of the calibration object in the image is x=66 pixels and y=100 pixels;
4. the distance between the camera device and the calibration object is D=4 m; the pixel distance of the calibration object in the image is x=50 pixels and y=75 pixels;
5. the distance between the camera device and the calibration object is D=5 m; the pixel distance of the calibration object in the image is x=40 pixels and y=60 pixels.
From the above data, it can be deduced that, at a vertical distance of n meters from the imaging device to the target object, the actual distance represented by 1 pixel is: x = 0.285 ÷ (200/n), y = 0.289 ÷ (300/n).
That is, 0.285/200 may be taken as a linear multiple in the horizontal direction between the actual distance and the pixel distance at the unit distance, and 0.289/300 may be taken as a linear multiple in the vertical direction between the actual distance and the pixel distance at the unit distance.
Further, assuming that a vertical distance between the imaging device in the pan-tilt apparatus and the target object is R1 (in meters), and a pixel movement distance dpixx of the target object in a horizontal direction, an actual movement distance of the target object with respect to the pan-tilt apparatus may be expressed as:
dx=Px×dpixx×R1
Where Px is a linear multiple in the horizontal direction between the actual distance and the pixel distance at a unit distance (e.g., 1 m), it may be Px =0.285/200= 0.001425.
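The conversion above can be expressed as a small helper; the sketch below assumes the calibration constants derived from the example data (Px = 0.285/200 and Py = 0.289/300 at a unit distance of 1 m) and a known vertical distance R1 in meters.

# Sketch of the pixel-to-actual-distance conversion described above.
# PX and PY are per-pixel actual distances at a unit distance of 1 m,
# taken from the example calibration data; they are illustrative values.
PX = 0.285 / 200   # meters per pixel (horizontal) at 1 m
PY = 0.289 / 300   # meters per pixel (vertical) at 1 m

def actual_move_distance(x0, x1, r1, px=PX):
    """Convert the horizontal pixel shift (x1 - x0) of the target object into
    the actual moving distance dx, given the vertical distance r1 in meters."""
    dpixx = abs(x1 - x0)        # pixel moving distance within the image capturing area
    return px * dpixx * r1      # dx = Px * dpixx * R1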
In the calibration process, shooting can be performed by adopting a preset multiple, such as 1 time. Further, in an embodiment, to increase the accuracy of the calibration, the calibration may be performed under a plurality of preset multiples, and then, an average processing manner is used for the results of the calibration of the plurality of preset multiples, to obtain a final calibration result under 1-fold magnification.
For example, the calibration is performed when the amplification factors are 2 times, 4 times, 8 times, 10 times, 15 times and 20 times, further, the calibration results with relatively large errors of 2 times and 20 times are removed, and the average value of the calibration results corresponding to the middle 4 times, 8 times, 10 times and 15 times is taken as the final calibration result.
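The description leaves the combination step open; one plausible reading, dropping the extreme magnifications and averaging the rest, is sketched below with illustrative numbers.

# Sketch: average per-pixel calibration results measured at several preset
# magnifications, discarding the smallest and largest magnifications, which
# the description treats as having relatively large errors. The mapping and
# its values are illustrative assumptions.
def combined_calibration(results):
    """`results` maps magnification -> per-pixel distance normalized to 1x,
    e.g. {2: 0.001430, 4: 0.001424, 8: 0.001426, 10: 0.001425, 15: 0.001423, 20: 0.001410}."""
    mags = sorted(results)
    kept = mags[1:-1]                      # drop the extreme magnifications (e.g. 2x and 20x)
    return sum(results[m] for m in kept) / len(kept)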
Furthermore, after calibration, error compensation values for 8 coordinate values (x, y) can be given in one or more quadrants (for example, quadrants 1-4), so that the detection accuracy is improved by about 5%.
S133, selecting a triangle model based on the actual distance between the tripod head equipment and the target object, wherein the triangle model is used for representing a space triangle relation formed among the tripod head equipment, the target object before relative movement and the target object after relative movement, and calculating the rotation angle of the tripod head equipment by using the actual movement distance.
In this embodiment, the positions of the pan-tilt device, the target object before the relative movement, and the target object after the relative movement in space may be set as a triangle model in space, and further, in the triangle model, the triangle relationship may be used to determine the lengths of each angle or side of the triangle model.
In this embodiment, the triangle model may include: isosceles triangle or right triangle.
Further, the type of the triangle model may be selected according to whether the actual distance between the pan-tilt device and the target object exceeds a preset distance threshold. If the actual distance between the tripod head equipment and the target object exceeds a preset distance threshold, forming an isosceles triangle in space among the tripod head equipment, the target object before relative movement and the target object after relative movement; for another example, when the actual distance between the pan-tilt device and the target object is less than or equal to a preset distance threshold, the pan-tilt device, the target object before relative movement and the target object after relative movement form a right triangle in space.
1. Isosceles triangle
In this embodiment, fig. 1D is a schematic diagram of spatial positions of a pan-tilt device, a target object before relative movement, and a target object after relative movement according to embodiment 1 of the present invention. When the pan-tilt device is far enough from the target object, the vertical distance between the pan-tilt device and the target object before and after rotation can be considered as equal, i.e. the pan-tilt device, the target object before relative movement and the target object after relative movement form an isosceles triangle in space.
In an embodiment, an actual distance between the pan-tilt device and the target object may be determined; taking the actual distance as the length of the waist of the isosceles triangle and the actual moving distance as the length of the base of the isosceles triangle, and obtaining the angle of the opposite angle of the base based on the triangle cosine theorem; and determining the angle of the opposite angle of the bottom edge as the rotation angle of the cradle head device.
Referring to fig. 1D, the actual distance between the pan-tilt device and the target object before the relative movement is Dx, and the actual distance between the pan-tilt device and the target object after the relative movement is also Dx. The distance between the target objects before and after the relative movement is lx. By the cosine theorem, the cosine value of the rotation angle radx can be expressed as:
cos(radx)=(Dx²+Dx²-lx²)/(2×Dx×Dx)=1-lx²/(2×Dx²)
Further, by solving the inverse cosine value, the rotation angle radx of the pan-tilt device can be determined.
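A minimal sketch of this isosceles-triangle case, assuming the distance Dx and the actual moving distance lx are already known in meters; the function name is illustrative.

import math

# Sketch of the isosceles-triangle case: the pan-tilt device is far from the
# target, so both equal sides of the triangle have length Dx and the base is lx.
def rotation_angle_isosceles(dx_dist, lx):
    """Return the rotation angle (radians) opposite the base, via the cosine theorem."""
    cos_rad = (2 * dx_dist ** 2 - lx ** 2) / (2 * dx_dist ** 2)
    return math.acos(cos_rad)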
Further, the actual distance between the pan-tilt device and the target object may be determined using either of the following two methods: an image processing acquisition method or a direct acquisition method.
1.1 Image processing acquisition method
Specifically, the actual size of the target object may be obtained; determining a pixel size of the target object in the first image; determining the ratio of the pixel size to the actual size as the actual ratio; when the unit distance between the calibration object and the holder equipment is measured, the ratio between the actual distance corresponding to the calibration object and the pixel distance is measured in advance and is used as a reference ratio; determining the ratio of the actual proportion to the reference proportion as a distance multiple; and taking the product of the distance multiple and the unit distance as the actual distance between the cradle head equipment and the target object.
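Read literally, the ratio definitions above can be taken in either orientation; the sketch below arranges them so that the units cancel (meters per pixel at the current distance divided by meters per pixel at the unit distance), which is an interpretation of the procedure rather than a verbatim transcription, and the reference value reuses the example calibration figure.

# Sketch of the image-processing distance estimate. REFERENCE_M_PER_PIXEL is the
# pre-measured actual-distance-to-pixel-distance ratio of the calibration object
# at the unit distance (1 m); both constants are illustrative assumptions.
UNIT_DISTANCE_M = 1.0
REFERENCE_M_PER_PIXEL = 0.285 / 200

def estimate_distance(actual_size_m, pixel_size_px,
                      reference=REFERENCE_M_PER_PIXEL, unit=UNIT_DISTANCE_M):
    """Estimate the actual distance between the pan-tilt device and the target object."""
    m_per_pixel_now = actual_size_m / pixel_size_px   # meters per pixel at the current distance
    distance_multiple = m_per_pixel_now / reference   # how many unit distances away the target is
    return distance_multiple * unit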
1.2 Direct acquisition method
Specifically, the holder device may be moved to a preset point around the target object, and coordinates of the preset point may be stored in a database, and a target distance from the preset point to the target object may be stored in the database; further, the target distance between the preset point and the target object can be obtained from the database; and taking the target distance as the actual distance between the cradle head equipment and the target object.
2. Right triangle
In this embodiment, fig. 1E is a schematic diagram of spatial positions of another pan-tilt device, a target object before relative movement, and a target object after relative movement according to embodiment 1 of the present invention. When the holder device is relatively close to the target object, the holder device, the target object before relative movement and the target object after relative movement form a right triangle in space.
Referring to fig. 1E, one right-angle side dx of the right triangle corresponds to the line connecting the target object before the relative movement and the target object after the relative movement; the other right-angle side R1 corresponds to the connecting line between the target object after relative movement and the cradle head equipment; the hypotenuse R0 corresponds to the line between the target object before relative movement and the holder device. The length of each edge corresponds to the respective distance. In addition, h in fig. 1E is the height drawn to the hypotenuse R0.
Referring to the trigonometric relationship in fig. 1E, the following formulas can be obtained:
radx=arctan(dx/R1) (1)
dpixx=abs(x1-x0) (2)
dx=Px×dpixx×R1 (3)
Wherein arctan is the arctangent function; abs is the absolute value function; x0 is the horizontal coordinate of the initial position of the target object in the image pickup area, x1 is the horizontal coordinate of the target position, and dpixx is the pixel moving distance of the target object moving in the image pickup area; Px is the linear multiple in the horizontal direction between the actual distance and the pixel distance at a unit distance, which may be Px=0.285/200=0.001425; radx is the rotation angle of the pan/tilt device.
In connection with formulas (1) - (3), the rotation angle radx of the pan-tilt apparatus can be expressed as:
radx=arctan(Px×dpixx)。
That is, the actual distance between the target object after the relative movement and the pan/tilt device may be used as the first distance, i.e., R1; calculating the ratio of the actual moving distance dx to the first distance R1, i.e., Px×dpixx; calculating an arctangent value corresponding to the ratio (Px×dpixx); and determining the arctangent value as the rotation angle of the cradle head equipment.
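For the near-distance case, the relationship above reduces to a one-line computation; the sketch below assumes the value of Px from the example calibration data.

import math

# Sketch of the right-triangle case: radx = arctan(dx / R1) = arctan(Px * dpixx),
# so R1 cancels out and only the pixel shift and the calibration constant are needed.
def rotation_angle_right_triangle(x0, x1, px=0.285 / 200):
    """Return the horizontal rotation angle (radians) for the near-distance case."""
    dpixx = abs(x1 - x0)            # pixel moving distance within the image capturing area
    return math.atan(px * dpixx)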
In this embodiment, the more closely matched spatial triangle relationship can be selected based on the actual distance between the pan-tilt device and the target object, which further increases the accuracy of detection. In addition, calculating the rotation angle with the spatial triangle relationship expressed by the right triangle involves fewer parameters and less computation, so when the actual distance between the holder equipment and the target object becomes small, switching from the isosceles triangle to the right triangle makes the calculation of the rotation angle more flexible and takes less total time.
S134, taking the rotation angle as a rotation parameter of the cradle head equipment.
In this embodiment, when the rotation angle is used as the rotation parameter of the pan-tilt device, the pan-tilt device may directly rotate according to the rotation angle when receiving the rotation parameter.
And S140, controlling the cradle head equipment to rotate according to the rotation parameters, so that the cradle head equipment rotates to the target position where the target object is located in the image capturing area, and capturing a second image of the target object by a second multiple.
In general, the second magnification may be set to be larger than the first magnification so that the target object may occupy a larger proportion of the second image after rotation, that is, the target object may be more clearly observed from the second image.
In one embodiment, it may be determined that the target object occupies a target proportion of the imaging area when the target object is moved to a target position of the imaging area; determining that a target object occupies the original proportion of the first image in the first image; taking the ratio of the target proportion to the original proportion as the magnification; the product of the first multiple and the magnification is taken as a second multiple.
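A minimal sketch of the second-multiple computation above, assuming the proportions are measured consistently (for example, as the fraction of the image width occupied by the target object); the numeric example in the docstring is illustrative.

# Sketch: derive the second shooting multiple from the desired target proportion.
def second_multiple(first_multiple, target_proportion, original_proportion):
    """E.g. first_multiple=1, original_proportion=0.05, target_proportion=0.4 -> 8x."""
    magnification = target_proportion / original_proportion
    return first_multiple * magnification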
In general, under the condition of higher multiple, the cradle head device is rotated to adjust the relative position of the target object in the image capturing area, so that stronger shake is easily generated. The reason why this shake occurs is that as the magnification of the image pickup apparatus increases, there are more displacement pixels of the target object in the image pickup area per unit rotation angle.
Based on the above factors, if the rotation parameters of the pan-tilt device are adjusted manually, using the relative position of the target object in the image capturing area as feedback and adjusting the rotation parameters again and again until the target object reaches the target position of the image capturing area, multiple jerky attempts are required, and the target object still cannot be guaranteed to end up at the target position in the image capturing area.
In this embodiment, the rotation parameter of the pan-tilt device is calculated directly by image processing, and the rotation of the pan-tilt device is controlled according to the rotation parameter, so as to adjust the target object to the target position in the image capturing area; the cradle head equipment is then set to shoot the second image at a second multiple higher than the first multiple. In this way, the whole cradle head target conversion control process and the shooting process are smoother and more stable, the rotation parameters of the cradle head equipment are determined quickly and accurately, and control time is saved.
In particular, when a plurality of target objects exist in the first image, the rotation parameters of the cradle head device corresponding to each target object can be calculated respectively, which avoids the time consumption and low working efficiency caused by manually adjusting the rotation angle of the cradle head device when there are many target objects that need to be fine-tuned one by one.
Furthermore, the shooting sequence of a plurality of target objects can be optimized, the total rotation angle of the cradle head equipment is reduced, and the time cost and the power cost are greatly saved.
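The description does not state how the shooting sequence is optimized; one plausible approach is a greedy ordering that always visits the target requiring the smallest additional rotation, sketched below with assumed per-target angle pairs.

# Sketch: greedily order targets so that each step rotates the pan-tilt device
# as little as possible from its current orientation. The (ax, ay) angle pairs
# are the per-target rotation parameters; the greedy strategy itself is an
# assumption, not something specified by this description.
def plan_shooting_order(current, targets):
    """`current` and each element of `targets` are (ax, ay) angles in degrees."""
    order, remaining = [], list(targets)
    while remaining:
        nxt = min(remaining,
                  key=lambda t: abs(t[0] - current[0]) + abs(t[1] - current[1]))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order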
In addition, the outdoor network environment is generally poor and the network connection is often interrupted; if the cradle head equipment has to be remotely controlled to rotate many times, the interrupted connection makes the control effect of the cradle head equipment poor and reduces the working efficiency.
Example 2
Fig. 2 is a schematic structural diagram of a pan-tilt target conversion control device according to embodiment 2 of the present application. The application provides a holder target conversion control device which can be realized in a software and/or hardware mode and is integrated in holder target conversion control equipment. Optionally, the pan-tilt target conversion control device may be an upper computer of the pan-tilt device, where the upper computer includes, but is not limited to, a terminal such as a computer, a server, and the like. In this embodiment, the case where the pan-tilt target conversion control device is a server is taken as an example for detailed description.
Referring to fig. 2, the pan-tilt target conversion control device may specifically include the following structure: a first image acquisition module 210, an initial position determination module 220, a rotation parameter calculation module 230, and a second image acquisition module 240.
A first image obtaining module 210, configured to obtain a first image of a target object captured by the pan-tilt device under a first multiple;
An initial position determining module 220, configured to determine a position of the target object in the first image as an initial position of the target object in a camera shooting area of the pan-tilt device;
a rotation parameter calculation module 230, configured to select a triangle model that matches an actual distance between the pan-tilt device and the target object, and calculate rotation parameters required by the pan-tilt device when the target object is moved from the initial position to a target position of the image capturing area, where vertices of the triangle model are used to respectively represent positions of the pan-tilt device, the target object before relative movement, and the target object after relative movement in space;
And a second image obtaining module 240, configured to control the pan-tilt device to rotate according to the rotation parameter, so that the pan-tilt device rotates to the target position where the target object is located in the image capturing area, and capture a second image of the target object with a second multiple.
In this embodiment, the rotation parameter of the pan-tilt device is calculated directly by image processing, and the rotation of the pan-tilt device is controlled according to the rotation parameter, so as to adjust the target object to the target position in the image capturing area; the cradle head equipment is then set to shoot the second image at a second multiple higher than the first multiple. In this way, the whole cradle head target conversion control process and the shooting process are smoother and more stable, the rotation parameters of the cradle head equipment are determined quickly and accurately, and control time is saved.
Based on the above technical solution, the initial position determining module 220 includes:
and the first image display unit is used for displaying the first image.
A first user operation receiving unit for receiving a first user operation for the first image.
And the first user operation response unit is used for responding to the first user operation to draw a target area in the area where the target object is located in the first image.
And the first initial position determining unit is used for determining the central position of the target area as the initial position of the target object in the shooting area of the cradle head equipment.
On the basis of the above technical solution, the initial position determining module 220 includes:
and the image feature extraction unit is used for carrying out preprocessing operation on the first image and extracting image features from the first image.
And the target area identification unit is used for inputting the image characteristics into a target identification model so as to identify the target area where the target object is located in the first image.
And the second initial position determining unit is used for determining the central position of the target area as the initial position of the target object in the image pickup area of the cradle head equipment.
Based on the above technical solution, the rotation parameter calculation module 230 includes:
and a pixel movement distance determining unit configured to use a difference between the initial position and the target position as a pixel movement distance by which the target object moves within the image capturing area.
And the actual moving distance determining unit is used for converting the pixel moving distance into the actual moving distance of the target object relative to the holder equipment according to the linear corresponding relation between the actual distance and the pixel distance, which is measured in advance.
The rotation angle determining unit is used for selecting a triangle model based on the actual distance between the tripod head equipment and the target object, representing the space triangle relation formed among the tripod head equipment, the target object before relative movement and the target object after relative movement, and calculating the rotation angle of the tripod head equipment by using the actual movement distance.
And the rotation parameter determining unit is used for taking the rotation angle as the rotation parameter of the cradle head equipment.
On the basis of the technical scheme, when the actual distance between the tripod head equipment and the target object exceeds a preset distance threshold, the tripod head equipment, the target object before relative movement and the target object after relative movement form an isosceles triangle in space; a rotation angle determination unit including:
And the actual distance determining subunit is used for determining the actual distance between the cradle head equipment and the target object.
And the base diagonal determination subunit is used for taking the actual distance as the length of the waist of the isosceles triangle and the actual moving distance as the length of the base of the isosceles triangle, and obtaining the angle of the base diagonal angle based on the triangle cosine theorem.
And the first rotation angle determining subunit is used for determining the angle of the bottom edge diagonal angle as the rotation angle of the cradle head equipment.
In an embodiment, the actual distance determining subunit may be specifically configured to: acquiring the actual size of the target object; determining a pixel size of the target object in the first image; determining the ratio between the pixel size and the actual size as an actual ratio; when the calibration object is in a unit distance from the cradle head equipment, the ratio between the actual distance corresponding to the calibration object and the pixel distance, which is measured in advance, is used as a reference ratio; determining the ratio between the actual proportion and the reference proportion as a distance multiple; and taking the product of the distance multiple and the unit distance as the actual distance between the cradle head equipment and the target object.
In a further embodiment, the actual distance determination subunit may be further specifically configured to: moving the cradle head equipment to a preset point around the target object; obtaining a target distance between the preset point and the target object; and taking the target distance as an actual distance between the cradle head equipment and the target object.
On the basis of the technical scheme, when the actual distance between the cradle head equipment and the target object is lower than or equal to a preset distance threshold value, the cradle head equipment, the target object before relative movement and the target object after relative movement form a right triangle in space; one right-angle side corresponds to the connecting line between the target object before the relative movement and the target object after the relative movement; the other right-angle side corresponds to the connecting line between the target object after the relative movement and the cradle head equipment; and the hypotenuse corresponds to the connecting line between the cradle head equipment and the target object before the relative movement.
The rotation angle determining unit is specifically configured to:
taking the actual distance between the target object after the relative movement and the cradle head equipment as a first distance;
calculating the ratio of the actual moving distance to the first distance;
Calculating an arctangent value corresponding to the ratio;
and determining the arctangent value as the rotation angle of the cradle head equipment.
On the basis of the technical scheme, the device further comprises:
The calibration module is used for calibrating the image pickup device in the cradle head equipment before converting the pixel moving distance into the actual moving distance of the target object relative to the cradle head equipment according to the linear corresponding relation between the actual distance and the pixel distance, which is determined in advance, so as to obtain the linear corresponding relation between the actual distance and the pixel distance, which corresponds to the image pickup device.
On the basis of the technical scheme, the device further comprises:
And the target proportion determining module is used for determining the target proportion of the target object in the image capturing area when the target object is moved to the target position of the image capturing area before the second image is captured at the second multiple.
And the original proportion determining module is used for determining that the target object occupies the original proportion of the first image in the first image.
And the amplification factor determining module is used for taking the ratio between the target proportion and the original proportion as the amplification factor.
And the second multiple determining module is used for taking the product of the first multiple and the amplification multiple as a second multiple.
Example 3
Fig. 3 is a schematic structural diagram of a pan-tilt target conversion control device according to embodiment 3 of the present invention. As shown in fig. 3, the cradle head target conversion control apparatus includes: a processor 30, a memory 31, an input device 32 and an output device 33. The number of processors 30 in the pan/tilt target conversion control device may be one or more, and one processor 30 is taken as an example in fig. 3. The number of memories 31 in the cradle head target conversion control device may be one or more, and one memory 31 is taken as an example in fig. 3. The processor 30, the memory 31, the input means 32 and the output means 33 of the holder target conversion control device may be connected by a bus or other means, for example in fig. 3 by a bus connection. The cradle head target conversion control device is an upper computer of cradle head equipment, and the upper computer can be a computer, a server and the like. In this embodiment, the cradle head target conversion control device is used as a server for detailed description, and the server may be an independent server or a cluster server.
The memory 31 is a computer-readable storage medium that may be used to store software programs, computer-executable programs and modules, such as the modules corresponding to the pan-tilt target conversion control method according to any embodiment of the present invention (for example, the first image acquisition module 210, the initial position determination module 220, the rotation parameter calculation module 230 and the second image acquisition module 240 in the cradle head target conversion control device). The memory 31 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required by functions, and the data storage area may store data created according to the use of the device, and the like. In addition, the memory 31 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory 31 may further include memory located remotely from the processor 30, which may be connected to the device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 32 may be used to receive input digital or character information and to generate key signal inputs related to the settings and function control of the cradle head target conversion control device; it may also be a camera for capturing images and a pickup device for capturing audio data. The output device 33 may include an audio device such as a loudspeaker. The specific composition of the input device 32 and the output device 33 may be set according to the actual situation.
The processor 30 executes various functional applications and data processing of the device by running the software programs, instructions and modules stored in the memory 31, that is, implements the above-described pan-tilt target conversion control method.
Example 4
Embodiment 4 of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a pan-tilt target conversion control method, the method including:
acquiring a first image shot by the cradle head equipment under a first multiple;
determining the position of a target object in the first image as the initial position of the target object in a shooting area of the cradle head equipment;
selecting a triangle model matched with the actual distance between the cradle head equipment and the target object, and calculating the rotation parameters required by the cradle head equipment to move the target object from the initial position to the target position of the shooting area, wherein the vertexes of the triangle model respectively represent the positions in space of the cradle head equipment, the target object before the relative movement and the target object after the relative movement;
and controlling the cradle head equipment by using the rotation parameters, so that the cradle head equipment rotates until the target object reaches the target position of the shooting area, and shooting at a second multiple to obtain a second image, as outlined in the sketch below.
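To make these four steps concrete, here is a hedged Python sketch of the orchestration only; the capture, target-location, rotation-parameter and rotation-control operations are injected as placeholder callables, because the storage-medium embodiment does not fix their implementations.

```python
from typing import Any, Callable, Tuple

def pan_tilt_target_conversion(
    capture: Callable[[float], Any],                     # capture(zoom_multiple) -> image
    locate: Callable[[Any], Tuple[int, int]],            # locate(image) -> pixel position of the target object
    to_rotation: Callable[[Tuple[int, int], Tuple[int, int]], Tuple[float, float]],  # -> (pan, tilt) rotation parameters
    rotate: Callable[[float, float], None],              # drive the cradle head equipment
    target_position: Tuple[int, int],                    # desired position in the shooting area
    first_multiple: float,
    second_multiple: float,
) -> Any:
    first_image = capture(first_multiple)                # step 1: shoot the first image at the first multiple
    initial_position = locate(first_image)               # step 2: initial position in the shooting area
    pan, tilt = to_rotation(initial_position, target_position)  # step 3: rotation parameters from the triangle model
    rotate(pan, tilt)                                    # step 4a: rotate so the target object reaches the target position
    return capture(second_multiple)                      # step 4b: shoot the second image at the second multiple
```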
Of course, in the storage medium containing computer-executable instructions provided by the embodiment of the present invention, the computer-executable instructions are not limited to the method operations described above, and can also execute related operations in the pan-tilt target conversion control method provided by any embodiment of the present invention, with the corresponding functions and beneficial effects.
From the above description of the embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general-purpose hardware, and of course also by hardware, although in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present invention, essentially or the part contributing to the prior art, may be embodied in the form of a software product; the computer software product may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH memory (FLASH), a hard disk or an optical disk of a computer, and includes a number of instructions for causing a computer device (which may be a robot, a personal computer, a server, a network device or the like) to execute the pan-tilt target conversion control method according to any embodiment of the present invention.
It should be noted that, in the above-mentioned pan-tilt target conversion control device, the units and modules included are only divided according to functional logic, and the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only used to distinguish them from each other and are not used to limit the protection scope of the present invention.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the steps may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, Programmable Gate Arrays (PGAs), Field Programmable Gate Arrays (FPGAs), and the like.
In the description of the present specification, reference to the term "in one embodiment," "in another embodiment," "exemplary," or "in a particular embodiment," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While the invention has been described in detail in the foregoing general description, embodiments and experiments, it will be apparent to those skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements may be made without departing from the spirit of the invention and are intended to be within the scope of the invention as claimed.

Claims (10)

The vertexes of the triangle model are used for respectively representing the positions in space of the cradle head equipment, the target object before the relative movement and the target object after the relative movement; the type of the triangle model is selected according to whether the actual distance between the cradle head equipment and the target object exceeds a preset distance threshold; when the actual distance between the cradle head equipment and the target object exceeds the preset distance threshold, the cradle head equipment, the target object before the relative movement and the target object after the relative movement form an isosceles triangle in space; when the actual distance between the cradle head equipment and the target object is lower than or equal to the preset distance threshold, the cradle head equipment, the target object before the relative movement and the target object after the relative movement form a right triangle in space;
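As a hedged sketch of the model selection stated in this claim: the right-triangle branch follows the arctangent relation of the embodiments above, while the isosceles branch uses the standard chord relation (the sine of half the rotation angle equals half the moving distance divided by the distance to the target) as an assumption of this illustration, since the claim only states which triangle is formed; the threshold value is likewise illustrative.

```python
import math

PRESET_DISTANCE_THRESHOLD = 5.0  # assumed value; the claim does not fix the threshold

def rotation_angle(actual_distance: float, move_distance: float) -> float:
    """Select the triangle model by the actual distance and return the rotation angle in degrees."""
    if actual_distance > PRESET_DISTANCE_THRESHOLD:
        # Far target: isosceles triangle whose equal sides are the device-to-target distances
        # (chord relation; an assumption, not quoted from the claim).
        return math.degrees(2.0 * math.asin((move_distance / 2.0) / actual_distance))
    # Near target: right triangle; arctangent of the moving distance over the post-movement distance.
    return math.degrees(math.atan(move_distance / actual_distance))
```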

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202011543459.8A (CN112714287B) | 2020-12-23 | 2020-12-23 | Cloud deck target conversion control method, device, equipment and storage medium
PCT/CN2021/099420 (WO2022134490A1) | 2020-12-23 | 2021-06-10 | Gimbal target conversion control method, apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011543459.8A (CN112714287B) | 2020-12-23 | 2020-12-23 | Cloud deck target conversion control method, device, equipment and storage medium

Publications (2)

Publication Number | Publication Date
CN112714287A (en) | 2021-04-27
CN112714287B (en) | 2024-09-06

Family

ID=75543826

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202011543459.8A (CN112714287B) | Cloud deck target conversion control method, device, equipment and storage medium | 2020-12-23 | 2020-12-23 | Active

Country Status (2)

Country | Publication
CN (1) | CN112714287B (en)
WO (1) | WO2022134490A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112714287B (en)* | 2020-12-23 | 2024-09-06 | 广东科凯达智能机器人有限公司 | Cloud deck target conversion control method, device, equipment and storage medium
CN113473010B (en)* | 2021-06-29 | 2023-08-22 | 浙江大华技术股份有限公司 | Snapshot method and device, storage medium and electronic device
CN115190237B (en)* | 2022-06-20 | 2023-12-15 | 亮风台(上海)信息科技有限公司 | Method and device for determining rotation angle information of bearing device
CN117714883A (en)* | 2022-09-07 | 2024-03-15 | 华为技术有限公司 | Camera control method and related device
CN115713555B (en)* | 2022-11-09 | 2023-12-08 | 中国南方电网有限责任公司超高压输电公司昆明局 | Image acquisition equipment installation location determination method, device and computer equipment
CN116668830B (en)* | 2023-05-19 | 2023-10-24 | 哈尔滨四福科技有限公司 | Method, system, equipment and medium for setting preset point of water level observation camera
CN117557167B (en)* | 2024-01-03 | 2024-03-19 | 微网优联科技(成都)有限公司 | Production quality management method and system of cradle head machine


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR101051390B1 (en)* | 2009-08-31 | 2011-07-22 | 주식회사 이미지넥스트 | Apparatus and method for estimating object information of surveillance camera
US20110242314A1 (en)* | 2010-03-31 | 2011-10-06 | Canon Kabushiki Kaisha | Image taking system
US8810712B2 (en)* | 2012-01-20 | 2014-08-19 | Htc Corporation | Camera system and auto focus method
CN108574825B (en)* | 2017-03-10 | 2020-02-21 | 华为技术有限公司 | A method and device for adjusting a PTZ camera
CN109803106B (en)* | 2019-01-04 | 2020-03-27 | 安徽文香信息技术有限公司 | Blackboard-based recording and broadcasting system, shooting method, storage medium and recording and broadcasting blackboard
CN110728715B (en)* | 2019-09-06 | 2023-04-25 | 南京工程学院 | A method for self-adaptive adjustment of the camera angle of an intelligent inspection robot
CN111161446B (en)* | 2020-01-10 | 2021-08-17 | 浙江大学 | An image acquisition method for an inspection robot
CN112714287B (en)* | 2020-12-23 | 2024-09-06 | 广东科凯达智能机器人有限公司 | Cloud deck target conversion control method, device, equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR20130066184A (en)* | 2011-12-12 | 2013-06-20 | 현대모비스 주식회사 | Device and method of regulating camera angle automatically using a radar sensor
CN110633629A (en)* | 2019-08-02 | 2019-12-31 | 广东电网有限责任公司清远供电局 | Power grid inspection method, device, equipment and storage medium based on artificial intelligence
CN111935412A (en)* | 2020-10-19 | 2020-11-13 | 广东科凯达智能机器人有限公司 | Method, system and robot for automatically identifying and tracking inspection target

Also Published As

Publication number | Publication date
WO2022134490A1 (en) | 2022-06-30
CN112714287A (en) | 2021-04-27

Similar Documents

Publication | Title
CN112714287B (en) | Cloud deck target conversion control method, device, equipment and storage medium
CN111627072B (en) | Method, device and storage medium for calibrating multiple sensors
CN108369743B (en) | Mapping a space using a multi-directional camera
CN112949478B (en) | Target detection method based on tripod head camera
CN111345029B (en) | A target tracking method, device, movable platform and storage medium
CN1712891B (en) | Method for associating stereo image and three-dimensional data preparation system
JP4243767B2 (en) | Fisheye lens camera device and image extraction method thereof
JP4268206B2 (en) | Fisheye lens camera device and image distortion correction method thereof
CN110799921A (en) | Filming method, device and drone
Nguyen et al. | 3D scanning system for automatic high-resolution plant phenotyping
CN111161446A (en) | Image acquisition method of inspection robot
WO2013104800A1 (en) | Automatic scene calibration
US8134614B2 (en) | Image processing apparatus for detecting an image using object recognition
JP7334432B2 (en) | Object tracking device, monitoring system and object tracking method
CN107843251A (en) | The position and orientation estimation method of mobile robot
CN112689850A (en) | Image processing method, image processing apparatus, image forming apparatus, removable carrier, and storage medium
CN108574825A (en) | Method and device for adjusting a pan-tilt camera
CN112348775A (en) | Vehicle-mounted all-round-looking-based pavement pool detection system and method
JP5183152B2 (en) | Image processing device
CN116193256A (en) | Determining a focal position of a camera
CN112702513B (en) | Double-optical-pan-tilt cooperative control method, device, equipment and storage medium
CN117422650A (en) | Panoramic image distortion correction method and device, electronic equipment and medium
CN119946434A (en) | A method, system, device and medium for adaptive monitoring and identification of a designated area
CN110381257B (en) | Mapping target positioning holder control method
JP2004020398A (en) | Spatial information acquisition method, spatial information acquisition device, spatial information acquisition program, and recording medium recording the same

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
