CN118200479B - Method and device for determining target object distance based on monitoring video - Google Patents


Info

Publication number
CN118200479B
CN118200479B (application CN202410204052.4A)
Authority
CN
China
Prior art keywords
camera
target object
slope
determining
projection line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410204052.4A
Other languages
Chinese (zh)
Other versions
CN118200479A (en)
Inventor
赵月峰
(Name withheld at the inventor's request)
郝家品
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Yiji Intelligent Technology Co ltd
Original Assignee
Suzhou Yiji Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Yiji Intelligent Technology Co ltd
Priority to CN202410204052.4A
Publication of CN118200479A
Application granted
Publication of CN118200479B
Legal status: Active
Anticipated expiration

Abstract

The application relates to a method, an apparatus, a computer device, a storage medium and a computer program product for determining a target object distance based on a surveillance video. The method comprises the following steps: identifying the image position information of a target object in a monitoring video image of a target scene containing a slope, and determining the imaging position information of the target object in the photosensitive element; determining a perpendicular projection line included angle and a horizontal plane projection line included angle of the target object according to the field angle, the vertical azimuth angle and the imaging position information of the camera; and determining a first axial distance of the target object relative to the camera according to the perpendicular projection line included angle, the mounting height of the camera, the gradient of the slope and the distance between the slope and the camera, and determining a second axial distance according to the first axial distance and the horizontal plane projection line included angle. The method can determine the distance between a target object and the camera in a monitoring scene containing a slope based on the monitoring video, with low cost, strong real-time performance and a wide application range.

Description

Method and device for determining target object distance based on monitoring video
Technical Field
The present application relates to the field of image processing technology, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for determining a target object distance based on a surveillance video.
Background
Video monitoring has been widely used in the security field because of its advantages such as high reliability, strong timeliness and easy viewing. In some scenarios, it is desirable to determine the distance of a target object (e.g., a person or a car) in a monitored scene from the monitoring device in order to geographically locate the target. In the related art, radar ranging may be used to determine the distance between the target and the monitoring camera or other monitoring devices; however, its cost is high.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a method, an apparatus, a computer device, a computer-readable storage medium and a computer program product for determining the distance between a target object and a monitoring device (camera) based on surveillance video, which are particularly suitable for monitoring scenes containing a slope and offer low cost, strong real-time performance and a wide application range.
In a first aspect, the present application provides a method for determining a target object distance based on a surveillance video. The method comprises the following steps:
Acquiring a monitoring video image of a target scene shot by a camera, and identifying a target object contained in the monitoring video image to obtain image position information of the target object in the monitoring video image; the target scene comprises a slope;
determining imaging position information of the target object in a photosensitive element of the camera according to the image position information and the size information of the photosensitive element;
determining a perpendicular projection line included angle γV and a horizontal plane projection line included angle γH of the target object relative to the camera according to the field angle and the vertical azimuth angle of the camera and the imaging position information of the target object in the photosensitive element;
According to the perpendicular projection line included angle γV, the mounting height hc of the camera, the gradient δ of the slope and the distance L0 between the slope and the camera, a first axial distance LTX of the target object relative to the camera is determined; and according to the first axial distance LTX and the horizontal plane projection line included angle γH, a second axial distance LTY of the target object relative to the camera is determined.
In one embodiment, the determining the first axial distance LTX of the target object relative to the camera according to the perpendicular projection line included angle γV, the mounting height hc of the camera, the gradient δ of the slope, and the distance L0 between the slope and the camera includes:
Determining a matched first axial distance determination strategy according to the type of the slope; the type of the slope comprises an ascending slope and a descending slope;
And determining the first axial distance LTX of the target object relative to the camera according to the perpendicular projection line included angle γV, the mounting height hc of the camera, the gradient δ of the slope, the distance L0 between the slope and the camera and the matched first axial distance determination strategy.
In one embodiment, the first axial distance determination strategy matched with the ascending slope is to calculate the first axial distance LTX of the target object relative to the camera according to a first formula;
The first formula is: LTX = L0 + (hc - L0×cotγV) / (cotγV + tanδ).
In one embodiment, the first axial distance determination strategy matched with the descending slope is to calculate the first axial distance LTX of the target object relative to the camera according to a second formula;
The second formula is: LTX = L0 + (hc - L0×cotγV) / (cotγV - tanδ).
In one embodiment, the method further comprises:
judging whether the target object is positioned on the slope or not;
And executing the step of determining a matched first axial distance determination strategy according to the type of the slope when the target object is located on the slope.
In one embodiment, the determining whether the target object is located on the slope includes:
if the perpendicular projection line included angle γV is larger than or equal to 90°, determining that the target object is located on the slope;
if the perpendicular projection line included angle γV is smaller than 90°, calculating the product of the tangent value tanγV of the perpendicular projection line included angle γV and the mounting height hc of the camera, and determining that the target object is located on the slope if the product is larger than the distance L0 between the slope and the camera.
In one embodiment, the method further comprises:
In the case that the product is less than or equal to the distance L0 of the ramp from the camera, the product is determined as the first axial distance LTX of the target object relative to the camera.
In a second aspect, the application further provides a device for determining the distance of the target object based on the monitoring video. The device comprises:
the identification module is used for acquiring a monitoring video image of a target scene shot by the camera, identifying a target object contained in the monitoring video image and obtaining image position information of the target object in the monitoring video image; the target scene comprises a slope;
A first determining module for determining imaging position information of the target object in the photosensitive element according to the image position information and size information of the photosensitive element of the camera;
The second determining module is used for determining a perpendicular projection line included angle γV and a horizontal plane projection line included angle γH of the target object relative to the camera according to the field angle of the camera, the vertical azimuth angle and the imaging position information of the target object in the photosensitive element;
the third determining module is configured to determine a first axial distance LTX between the target object and the camera according to the perpendicular projection line included angle γV, the mounting height hc of the camera, the gradient δ of the slope, and the distance L0 between the slope and the camera, and determine a second axial distance LTY between the target object and the camera according to the first axial distance LTX and the horizontal projection line included angle γH.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method of the first aspect when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
According to the method, the apparatus, the computer device, the storage medium and the computer program product for determining the distance of the target object based on the monitoring video, the image position information of the target object in the monitoring video image is first identified; the imaging position information of the target object in the photosensitive element is then determined according to the image position information and the size information of the photosensitive element of the camera; and the perpendicular projection line included angle γV and the horizontal plane projection line included angle γH of the target object relative to the camera are then determined based on the field angle, the vertical azimuth angle and the imaging position information, so that the first axial distance LTX and the second axial distance LTY of the target object relative to the camera can be determined based on the perpendicular projection line included angle γV, the horizontal plane projection line included angle γH, the mounting height hc of the camera, the gradient δ of the slope and the distance L0 between the slope and the camera. Therefore, the distance between a target object in a monitoring scene containing a slope and the monitoring camera can be determined based on the monitoring video and used to locate the geographic position of the target object; compared with radar ranging, the method has lower cost, strong real-time performance and a wide application range.
Drawings
FIG. 1 is a flow chart of a method for determining a target object distance based on surveillance video in one embodiment;
FIG. 2a is a schematic illustration of an application scenario in one example;
FIG. 2b is a schematic view of an imaging plane of a camera in one example;
Fig. 3 to fig. 5 are schematic vertical-plane projection views of the field of view of a camera, wherein fig. 3 is a scene in which the included angle γV between the projection line of a target on an ascending slope and the central vertical line of the camera is smaller than 90°, fig. 4 is a scene in which that included angle γV is larger than 90°, and fig. 5 is a descending slope scene;
FIG. 6 is a schematic view of a horizontal projection of the camera field of view;
FIG. 7 is a block diagram of an apparatus for determining a target object distance based on surveillance video in one embodiment;
fig. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in fig. 1, a method for determining a target object distance based on a surveillance video is provided, and the method can be applied to a server, a terminal and other computer devices. In this embodiment, the method comprises the steps of:
step 101, acquiring a monitoring video image of a target scene shot by a camera, and identifying a target object contained in the monitoring video image to obtain image position information of the target object in the monitoring video image.
The target scene includes a slope, which may be an ascending slope or a descending slope (relative to the installation position of the camera); the distance L0 between the slope and the camera, the gradient δ of the slope and the type of the slope may be calibrated in advance. It will be appreciated that the camera may be mounted on flat ground at a distance from the slope, in which case L0 is greater than 0, or at the start of the slope or on the slope itself, in which case L0 is equal to 0.
In the application scenario shown in fig. 2a, there is a target object T on the ascending slope in front of the camera O, and T is imaged in the imaging plane of the photosensitive element of the camera (for convenience of illustration, the imaging plane is drawn equivalently in front of the camera center point, and the length of the line OC connecting the camera center point O and the imaging plane center point C equals the focal length). Fig. 2b shows a front view of the imaging plane.
The camera for shooting the monitoring video is usually mounted at a certain height, namely the mounting height hc, with a field of view covering the target scene to be shot. After the camera is installed and debugged, parameters such as its field angles (vertical field angle αV, horizontal field angle αH, etc.), vertical azimuth angle βV, and the size information of the photosensitive element (the total height H and total width W of the photosensitive target surface, or the unit sizes dH and dW of the pixels and/or the number of pixels in each row and column) can be determined. The vertical azimuth angle βV of the camera refers to the angle between the central axis r of the camera (the central axis of the field of view, i.e. the optical axis of the lens) and the central vertical line of the camera (the line connecting the camera center point O and its ground projection O′, perpendicular to the horizontal plane). If the camera shoots horizontally, i.e. its pitch angle is 0°, the vertical azimuth angle is 90°; if the camera shoots downward, the vertical azimuth angle is smaller than 90°; and if the camera shoots upward, the vertical azimuth angle is larger than 90°.
The camera can send the shot monitoring video image of the target scene to the server, and the server can identify the target object contained in the monitoring video image by adopting a pre-trained target detection model to obtain the image position information of the target object in the monitoring video image. For example, the pixel coordinates (uT,vT) of a pixel point such as the center point or the boundary center point of the detection frame of the target object in the image may be used as the image position information of the target object. The image position information may be used to determine imaging position information of the target object in the photosensitive element.
It will be appreciated that the camera may be a single-lens camera or a multi-lens camera. In the case of a multi-lens camera, target recognition can be performed on the monitoring video image shot by each lens, and the distances can be calculated respectively. Alternatively, target recognition can be performed on the fused image corresponding to the multi-lens camera (the local images shot by the lenses are stitched and fused), a single-lens virtual camera corresponding to the fused image is fitted, and the distance between the target object and the camera is calculated based on the relevant parameters of the virtual camera (vertical field angle, horizontal field angle, vertical azimuth angle, size information of the photosensitive element, etc.).
Step 102, determining imaging position information of the target object in the photosensitive element according to the image position information and the size information of the photosensitive element of the camera.
In implementation, after the server performs target recognition on the monitoring video image to obtain the position of the target object T in the image, the imaging position information of the target object T in the photosensitive element may be further determined according to the size information of the photosensitive element, where the imaging position information may reflect the physical distance between the imaging position of the target object in the photosensitive element and the boundary or bisector of the photosensitive element (or imaging plane).
For example, the server may calculate the distance (may be denoted as WT) between the imaging position T 'of the target object and the left boundary of the photosensitive element and the distance (may be denoted as HT) between the imaging position T' of the target object and the lower boundary of the photosensitive element based on the pixel coordinates (uT,vT) and the pixel unit height dH, the pixel unit width dW, and the total height H of the photosensitive element in the monitored video image, wherein WT=dW×uT,HT=H-dH×vT.
Since the monitoring video image may be a compressed image, in order to accurately calculate the imaging position of the target object on the photosensitive element, the pixel coordinates may be normalized. For example, if the pixel resolution of the image is m×n and the pixel coordinate of the target object is (uT,vT), the normalized coordinates (u'T,v′T),u′T=uT/m,v′T=vT/n) may be used as the image position information of the target object, and further, HT and WT of the target object may be calculated from the normalized coordinates and the total width W and total height H of the photosensitive element, where WT=W×u′T,HT=H×(1-v′T).
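The pixel-to-sensor mapping described in the last two paragraphs can be sketched as a small helper; function and parameter names are illustrative, and the sensor dimensions in the test are invented example values, not from the patent:

```python
def imaging_position(u_t, v_t, m, n, sensor_w, sensor_h):
    """Map pixel coordinates (u_t, v_t) of the target in an m x n image to
    physical distances on the photosensitive element: W_T measured from the
    left boundary and H_T from the lower boundary (pixel rows count from the
    top, hence the 1 - v' flip), using the normalized-coordinate variant."""
    u_norm = u_t / m              # normalized column coordinate u'_T
    v_norm = v_t / n              # normalized row coordinate v'_T
    w_t = sensor_w * u_norm       # W_T = W x u'_T
    h_t = sensor_h * (1 - v_norm) # H_T = H x (1 - v'_T)
    return w_t, h_t
```

A target at the exact image center maps to the center of the photosensitive surface, which is a quick sanity check on the flip of the row axis.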
Step 103, determining a perpendicular projection line included angle γV and a horizontal plane projection line included angle γH of the target object relative to the camera according to the field angle of the camera, the vertical azimuth angle βV and the imaging position information of the target object in the photosensitive element.
The plane formed by the central axis r of the camera and the central vertical line OO′ is perpendicular to the horizontal plane and can be called the vertical plane; the projection line of the vertical plane on the horizontal plane coincides with the direction of the horizontal projection line r′ of the central axis r. The perpendicular projection line included angle γV refers to the angle between the projection line OTV of the spatial line connecting the camera center point O and the target object T on the vertical plane (i.e. the line between the camera center point O and the projection point TV of the target object T on the vertical plane) and the central vertical line OO′. The horizontal plane projection line included angle γH refers to the angle between the projection line O′TH of that spatial line on the horizontal plane (i.e. the line connecting the ground projection O′ of the camera center point and the projection point TH of the target object T on the horizontal plane where O′ is located) and the projection line r′ of the central axis r on the horizontal plane.
In practice, the vertical projection line angle γV and the horizontal projection line angle γH may be calculated based on the vertical angle of view αV and/or the horizontal angle of view αH of the camera, the vertical azimuth angle βV, and the imaging position information of the target object in the photosensitive element, and the size information (e.g., total width W, total height H) of the photosensitive element.
As shown in fig. 2a, fig. 3 and fig. 6, fig. 3 shows a schematic view of the camera field of view of fig. 2a in the vertical plane, and fig. 6 shows a schematic view of the camera field of view in the horizontal plane. In the figures, O′ is the ground projection of the camera center point O, and the length of the line segment OO′ equals the mounting height (distance from the ground) hc of the camera. The central axis r intersects the imaging plane at point C (the center point of the photosensitive element), and the length of the line segment OC equals the focal length. Points A′ and B′ are the center points of the upper and lower boundaries of the imaging plane, respectively, and points D′ and E′ are the center points of the left and right boundaries of the imaging plane, respectively. The angle between rays OA′ and OB′ is the vertical field angle αV, and the angle between rays OD′ and OE′ is the horizontal field angle αH. TV is the projection of the target object T on the vertical plane, and TH is the projection of the target object T on the horizontal plane. Point A in fig. 3 is the projection on the vertical plane of the intersection line of the lower boundary of the camera field of view with the ground (on the flat ground), and point B is the projection on the vertical plane of the intersection line of the upper boundary of the camera field of view with the ground (on the slope). Points A″, B″, D″ and E″ in fig. 6 are the projections of the center points of the four boundaries of the imaging plane on the horizontal plane, point C′ is the projection of the center point of the imaging plane on the horizontal plane, T″ is the projection on the horizontal plane of the imaging point T′ of the target object in the imaging plane, and TV″ is the horizontal projection of TV′, the projection point of the imaging point T′ on the vertical plane.
As shown in the figure, it can be seen that:
If HT < H/2, i.e. the target object is located in the lower half of the image (i.e. the imaging position of the target object is below the horizontal bisector of the imaging plane), then γV = βV - arctan(((H/2 - HT) × tan(αV/2)) / (H/2)).
If HT > H/2, i.e. the target object is located in the upper half of the image (i.e. the imaging position of the target object is above the horizontal bisector of the imaging plane), then γV = βV + arctan(((HT - H/2) × tan(αV/2)) / (H/2)).
If HT = H/2, then γV = βV.
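The three cases can be folded into a single signed-offset computation; a sketch with an illustrative function name, taking and returning angles in degrees:

```python
import math

def gamma_v(h_t, sensor_h, alpha_v_deg, beta_v_deg):
    """Perpendicular projection line angle gamma_V in degrees, from the
    imaging height H_T (distance from the lower sensor boundary), total
    sensor height H, vertical field angle alpha_V and vertical azimuth
    beta_V. The offset is negative in the lower half of the image, zero
    on the horizontal bisector, positive in the upper half."""
    offset = math.degrees(math.atan(
        (h_t - sensor_h / 2)
        * math.tan(math.radians(alpha_v_deg / 2))
        / (sensor_h / 2)))
    return beta_v_deg + offset
```

Note that a point on the upper sensor boundary (H_T = H) is offset from the optical axis by exactly half the vertical field angle, which the test checks.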
The perpendicular projection line included angle γV of the target object T relative to the camera can also be calculated according to other formulas, for example derived via the cosine law. Thus, it follows that the three cases above can be combined into the single expression: γV = βV + arctan(((HT - H/2) × tan(αV/2)) / (H/2)).
The horizontal plane projection line included angle γH can be calculated according to the following formula: tanγH = |WT - W/2| / (√(f² + (HT - H/2)²) × sinγV), where f = (W/2)×cot(αH/2) denotes the focal length expressed in the size units of the photosensitive element.
Wherein, when the vertical azimuth angle βV = 90° of the camera, √(f² + (HT - H/2)²) × sinγV = f, so it is known that tanγH = (|WT - W/2| × tan(αH/2)) / (W/2).
It will be appreciated that the imaging position information used in the above example is the distance WT between the imaging position T 'of the target object and the left boundary of the photosensitive element and the distance HT between the imaging position T' of the target object and the lower boundary of the photosensitive element, and in other examples, the perpendicular projection line angle γV and the horizontal projection line angle γH may be calculated using the distance between the imaging position T 'of the target object and the right boundary of the photosensitive element (imaging plane), the distance between the imaging position T' and the upper boundary, the distance between the imaging position T 'and the horizontal bisector of the imaging plane, or the distance between the imaging position T' and the vertical bisector of the imaging plane.
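As a worked sketch of the horizontal-angle computation (function name illustrative; the general expression used here is one form that reduces to the βV = 90° simplification, in which the denominator collapses to the focal length):

```python
import math

def gamma_h(w_t, sensor_w, h_t, sensor_h, alpha_h_deg, gamma_v_deg):
    """Horizontal plane projection line angle gamma_H in degrees.
    f is the focal length in sensor-size units; for a horizontally
    shooting camera (beta_V = 90 deg) the denominator equals f, giving
    tan(gamma_H) = |W_T - W/2| * tan(alpha_H/2) / (W/2)."""
    f = (sensor_w / 2) / math.tan(math.radians(alpha_h_deg / 2))
    d_w = abs(w_t - sensor_w / 2)           # horizontal offset from the vertical bisector
    d_h = h_t - sensor_h / 2                # vertical offset from the horizontal bisector
    denom = math.hypot(f, d_h) * math.sin(math.radians(gamma_v_deg))
    return math.degrees(math.atan(d_w / denom))
```

A point on the right sensor boundary of a horizontally shooting camera sits at half the horizontal field angle, which the test below confirms.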
Step 104, determining a first axial distance LTX of the target object relative to the camera according to the perpendicular projection line included angle γV, the mounting height hc of the camera, the gradient δ of the slope and the distance L0 between the slope and the camera, and determining a second axial distance LTY of the target object relative to the camera according to the first axial distance LTX and the horizontal plane projection line included angle γH.
In an implementation, in order to facilitate positioning of the target object according to its distance from the camera, a coordinate system (e.g. a two-dimensional or three-dimensional coordinate system) may be established with the camera O as the origin, so as to determine the distance of the target object along each axis. A three-dimensional coordinate system is established as shown in fig. 2a, in which the X-axis and the Y-axis are parallel to the horizontal plane and the Z-axis is perpendicular to the horizontal plane; the X-axis coincides with (is parallel to) the direction of the horizontal projection line r′ of the central axis r of the camera O (its projection line on the horizontal plane), and the Y-axis is perpendicular to the horizontal projection line r′. The plane formed by the X-axis and the Z-axis is the vertical plane of the camera or a plane parallel to it. It will be appreciated that longitude and latitude information may be used as geographic position information when locating objects on the ground (level ground or slope), so a two-dimensional coordinate system with only the X-axis and Y-axis may also be used when establishing the camera coordinate system.
In this embodiment, the first axial direction refers to the X-axis direction, that is, the direction of the horizontal projection line of the central axis of the camera, and the first axial distance LTX corresponds to the X-coordinate of the target object in the camera coordinate system. The second axis refers to the Y-axis direction, i.e., the direction of a horizontal line perpendicular to the horizontal projection line of the central axis of the camera (parallel to the horizontal plane and perpendicular to the horizontal projection line of the central axis), and the second axis distance LTY corresponds to the Y-coordinate of the target object in the camera coordinate system. The sign of the Y-coordinate of the target object may be determined from the position (left or right) of the imaging position of the target object relative to the imaging plane or relative to the perpendicular bisector of the surveillance video image.
The server may determine a matched first axial distance calculation formula according to the type of the slope (up slope or down slope), and further calculate the first axial distance LTX of the target object relative to the camera according to the matched first axial distance calculation formula.
Specifically, in the case where the slope is an ascending slope (as in the scenarios shown in fig. 3 and fig. 4), the first axial distance LTX may be calculated according to the following first formula: LTX = L0 + (hc - L0×cotγV) / (cotγV + tanδ).
In the case where the slope is a descending slope (as in the scenario shown in fig. 5, where the perpendicular projection line included angle γV of the target object can only be smaller than 90°), the first axial distance LTX may be calculated according to the following second formula: LTX = L0 + (hc - L0×cotγV) / (cotγV - tanδ).
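The two closed forms, LTX = L0 + (hc - L0×cotγV)/(cotγV ± tanδ), follow from the geometric derivations in this section and differ only in the sign of tanδ; a sketch with illustrative naming (angles in degrees, distances in the same unit as hc and L0):

```python
import math

def first_axial_distance(gamma_v_deg, h_c, delta_deg, l0, uphill=True):
    """First axial distance L_TX for a target on the slope: the first
    formula (uphill, + tan delta) or the second formula (downhill,
    - tan delta). At gamma_V = 90 deg, cot(gamma_V) vanishes and the
    uphill case reduces to L0 + h_c * cot(delta)."""
    cot_gv = 1 / math.tan(math.radians(gamma_v_deg))
    tan_d = math.tan(math.radians(delta_deg))
    denom = cot_gv + tan_d if uphill else cot_gv - tan_d
    return l0 + (h_c - l0 * cot_gv) / denom
```

For identical viewing geometry, the downhill distance exceeds the uphill one, since the ground falls away from the ray instead of rising to meet it.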
The derivation of the first formula is as follows:
If the perpendicular projection line included angle γV is smaller than 90°, as shown in fig. 3, it can be seen that:
O′M = hc×tanγV
PM = O′M - L0 = hc×tanγV - L0
∠MTVN = γV
TVN = MN×cot∠MTVN = (PM - PN)×cotγV
PN = TVN×cotδ = (PM - PN)×cotγV×cotδ = PM×cotγV×cotδ - PN×cotγV×cotδ
PN×(1 + cotγV×cotδ) = PM×cotγV×cotδ
PN = PM×cotγV / (cotγV + tanδ) = (hc - L0×cotγV) / (cotγV + tanδ)
LTX = O′N = O′P + PN = L0 + (hc - L0×cotγV) / (cotγV + tanδ)
If the perpendicular projection line included angle γV is greater than 90°, as shown in fig. 4, it can be seen that:
∠JOTV = γV - 90°
OG = L0 + hc×cotδ
TVJ = GJ×tanδ = (OJ - OG)×tanδ = OJ×tanδ - OG×tanδ
TVJ = OJ×tan∠JOTV = -OJ×cotγV (since tan(γV - 90°) = -cotγV)
OJ×tanδ - OG×tanδ = -OJ×cotγV
OJ×(cotγV + tanδ) = OG×tanδ = L0×tanδ + hc
LTX = OJ = (L0×tanδ + hc) / (cotγV + tanδ) = L0 + (hc - L0×cotγV) / (cotγV + tanδ)
If the perpendicular projection line included angle γV is equal to 90°, then cotγV = 0 and the first formula reduces to LTX = L0 + hc×cotδ = OG. Therefore, the first formula is applicable to the full scene of the ascending slope and to upward, downward or horizontal shooting, i.e. the vertical azimuth angle of the camera is not limited, so the formula is widely applicable and universal; it is also simple, which improves calculation efficiency and reduces calculation consumption.
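The claim that one formula covers nodding, horizontal and upward-looking rays can be spot-checked numerically by intersecting the viewing ray with the ascending slope line directly; all parameter values below are illustrative, not taken from the patent:

```python
import math

# Illustrative parameters: camera height 5 m, slope starting 10 m away,
# 30-degree ascending slope.
h_c, l0, delta = 5.0, 10.0, math.radians(30.0)

def ltx_formula(gamma_v):
    """First formula: LTX = L0 + (hc - L0*cot(gammaV)) / (cot(gammaV) + tan(delta))."""
    cot_gv = math.cos(gamma_v) / math.sin(gamma_v)
    return l0 + (h_c - l0 * cot_gv) / (cot_gv + math.tan(delta))

def ltx_by_intersection(gamma_v):
    """Intersect the viewing ray from O = (0, h_c), direction
    (sin(gammaV), -cos(gammaV)), with the slope surface z = (x - l0)*tan(delta),
    and return the horizontal coordinate of the hit point."""
    t = (h_c + l0 * math.tan(delta)) / (
        math.cos(gamma_v) + math.sin(gamma_v) * math.tan(delta))
    return t * math.sin(gamma_v)

# Agreement for a downward, horizontal and upward-looking ray.
for deg in (70.0, 90.0, 110.0):
    g = math.radians(deg)
    assert abs(ltx_formula(g) - ltx_by_intersection(g)) < 1e-9
```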
The second formula is derived as follows, as shown in fig. 5:
O′M = hc×tanγV
PM = O′M - L0 = hc×tanγV - L0
∠MTVN = γV
TVN = MN×cot∠MTVN = (PN - PM)×cotγV = PN×cotγV - PM×cotγV
TVN = PN×tanδ
PN×cotγV - PM×cotγV = PN×tanδ
PN×(cotγV - tanδ) = PM×cotγV = hc - L0×cotγV
LTX = O′N = O′P + PN = L0 + (hc - L0×cotγV) / (cotγV - tanδ)
It will be appreciated that if the field of view of the camera does not cover the land between the ramp and the camera, i.e. the detected target object is only likely to be located on the ramp, the first axial distance LTX of the target object relative to the camera may be determined directly from the perpendicular projection line angle γV, the mounting height hc of the camera, the gradient δ of the ramp, and the distance L0 of the ramp from the camera.
In some embodiments, if the field of view of the camera also covers a level ground between the slope and the camera, and the target object may also appear on that level ground, it may first be determined whether the target object is located on the slope. For example, whether the target object is located on the slope may be determined according to the user's annotation, or according to the perpendicular projection line included angle γV, the mounting height hc of the camera and the distance L0 between the slope and the camera. Specifically, if the perpendicular projection line included angle γV is greater than or equal to 90°, it can be determined that the target object is located on the slope. If the perpendicular projection line included angle γV is less than 90°, the product of the tangent value tanγV of the perpendicular projection line included angle γV and the mounting height hc of the camera (hc×tanγV) can be calculated and compared with the distance L0 between the slope and the camera, and whether the target object is located on the slope can be judged according to the comparison result. If the product hc×tanγV is greater than the distance L0, it may be determined that the target object is located on the slope, and the first axial distance LTX may be calculated using the first or second formula described above. If the product hc×tanγV is less than or equal to the distance L0 between the slope and the camera, i.e. the target object is located on the level ground between the camera and the slope, the product hc×tanγV may be directly determined as the first axial distance LTX of the target object relative to the camera.
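The on-slope decision procedure just described can be sketched as follows; the function name and the return convention (a flag plus the level-ground distance, when applicable) are assumptions for illustration:

```python
import math

def target_on_slope(gamma_v_deg, h_c, l0):
    """Judge whether the target lies on the (ascending) slope.
    gamma_V >= 90 deg always falls on the slope; otherwise compare
    h_c * tan(gamma_V) with the slope's distance L0. Returns
    (on_slope, level_ground_ltx): the second value is the first axial
    distance L_TX when the target is on the level ground, else None."""
    if gamma_v_deg >= 90:
        return True, None
    product = h_c * math.tan(math.radians(gamma_v_deg))
    if product > l0:
        return True, None      # on the slope: use the first/second formula
    return False, product      # on level ground: L_TX is the product itself
```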
Then, the server may calculate the second axial distance LTY of the target object relative to the camera from the first axial distance LTX and the horizontal projection line angle γH, using the following formula:
LTY=LTX×tanγH
As shown in fig. 6, it can be seen that: LTY = THN = O′N×tanγH = LTX×tanγH.
Thus, the first axial distance LTX and the second axial distance LTY of the target object T relative to the camera can be obtained, and hence the X coordinate and the Y coordinate (XT, YT) of the target object T in the camera coordinate system, where XT = LTX, and YT = LTY (when the imaging position of the target object T is in the right half of the image) or YT = −LTY (when the imaging position of the target object T is in the left half of the image). Further, the coordinates (XT, YT) of the target object T in the camera coordinate system can be converted into longitude and latitude information according to the geographic position (such as the longitude and latitude) of the camera, thereby obtaining the geographic position of the target object. Existing methods may be used for this conversion, which is not described in detail here.
Optionally, the vertical distance LTZ of the target object T relative to the camera O may be calculated according to the first axial distance LTX of the target object T relative to the camera, the mounting height of the camera, and the perpendicular projection line angle γV, so as to obtain the Z coordinate ZT. Specifically, the vertical distance can be calculated using the following formula: LTZ = |LTX×cotγV|. It can be seen that when γV = 90°, ZT = LTZ = 0, meaning that the target object T is at the same height as the camera O; when γV < 90°, ZT = LTZ > 0, meaning that the target object T is lower than the camera; and when γV > 90°, ZT = −LTZ < 0, meaning that the target object T is higher than the camera. Thus, the coordinates (XT, YT, ZT) of the target object T in the camera coordinate system can be obtained, so that the geospatial position of the target object can be located more accurately, and quantities such as the straight-line distance between the target object T and the camera can be calculated from these coordinates.
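The coordinate steps above can be collected into one small sketch. This is an illustration under the stated conventions, not the patent's code; the function name and argument names are invented, angles are in degrees, γV is measured from the vertical, and the signed ZT is computed directly as LTX·cotγV (which reproduces the sign convention in the text: positive below the camera, negative above it):

```python
import math

def camera_coordinates(l_tx, gamma_h_deg, gamma_v_deg, right_half=True):
    """Given the first axial distance LTX, derive:
    - LTY = LTX x tan(gamma_H), the second axial distance;
    - YT, signed by which half of the image the target appears in;
    - ZT = LTX x cot(gamma_V): zero at gamma_V = 90 deg, positive when the
      target is lower than the camera, negative when it is higher."""
    l_ty = l_tx * math.tan(math.radians(gamma_h_deg))
    y_t = l_ty if right_half else -l_ty
    z_t = l_tx / math.tan(math.radians(gamma_v_deg))  # cot = 1 / tan
    return l_tx, y_t, z_t
```

For instance, a target at LTX = 10 m with γH = 45° and γV = 90° sits at roughly (10, 10, 0): same height as the camera, equal forward and lateral offsets.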
According to the above method for determining a target object distance based on a surveillance video, the distance and position of a target object relative to the camera in a monitored scene containing a slope can be determined from the surveillance video through simple calculation, and the geographic position of the target object can further be determined from the geographic position of the camera and the axial distances of the target object relative to the camera, thereby realizing geographic positioning of the target object.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turns or alternately with at least a part of the other steps, sub-steps, or stages.
Based on the same inventive concept, an embodiment of the application further provides an apparatus for determining a target object distance based on a surveillance video, which is used to implement the above method for determining a target object distance based on a surveillance video. The implementation of the solution provided by the apparatus is similar to that described in the above method, so for specific limitations in one or more embodiments of the apparatus provided below, reference may be made to the above limitations of the method for determining a target object distance based on a surveillance video, which are not repeated here.
In one embodiment, as shown in fig. 7, there is provided an apparatus 700 for determining a target object distance based on a surveillance video, including: an identification module 701, a first determination module 702, a second determination module 703 and a third determination module 704, wherein:
The identifying module 701 is configured to obtain a surveillance video image of a target scene captured by a camera, identify a target object included in the surveillance video image, and obtain image position information of the target object in the surveillance video image; the target scene includes a slope.
A first determining module 702 is configured to determine imaging position information of the target object in the photosensitive element according to the image position information and size information of the photosensitive element of the camera.
A second determining module 703, configured to determine, according to a field angle of the camera, a vertical azimuth angle, and imaging position information of the target object in the photosensitive element, a vertical projection line angle γV and a horizontal projection line angle γH of the target object relative to the camera.
The third determining module 704 is configured to determine a first axial distance LTX between the target object and the camera according to the perpendicular projection line included angle γV, the mounting height hc of the camera, the gradient δ of the slope, and the distance L0 between the slope and the camera, and determine a second axial distance LTY between the target object and the camera according to the first axial distance LTX and the horizontal projection line included angle γH.
In one embodiment, the third determining module 704 is specifically configured to: determine a matched first axial distance determination strategy according to the type of the slope, where the type of the slope includes an ascending slope and a descending slope; and determine the first axial distance LTX of the target object relative to the camera according to the perpendicular projection line angle γV, the mounting height hc of the camera, the gradient δ of the slope, the distance L0 between the slope and the camera, and the matched first axial distance determination strategy.
In one embodiment, the first axial distance determination strategy matched to the up ramp is to calculate a first axial distance LTX of the target object relative to the camera according to a first formula;
The first formula is: LTX = tanγV×(hc + L0×tanδ) / (1 + tanγV×tanδ).
In one embodiment, the first axial distance determination strategy matched to the downward slope is to calculate a first axial distance LTX of the target object relative to the camera according to a second formula;
The second formula is: LTX = tanγV×(hc − L0×tanδ) / (1 − tanγV×tanδ).
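The slope geometry described in this section (camera at height hc, level ground out to horizontal distance L0, then ground rising or falling at gradient δ, ray angle γV measured from the vertical) fully determines the first axial distance, and can be sketched as follows. Note that the formulas below are reconstructed from that geometry for illustration; they are not quoted from the patent, and the function name and parameters are invented:

```python
import math

def first_axial_distance_on_slope(gamma_v_deg, h_c, delta_deg, l0, slope="up"):
    """Horizontal (X-axis) distance of a target standing on a slope that
    starts at horizontal distance l0 from a camera mounted at height h_c.
    gamma_v_deg: angle between the camera-target ray and the vertical
    (straight down = 0 degrees); delta_deg: slope gradient in degrees.
    Derived by intersecting the ray with the slope surface."""
    t_v = math.tan(math.radians(gamma_v_deg))
    t_d = math.tan(math.radians(delta_deg))
    if slope == "up":
        # Ground rises beyond l0: ray drop is h_c - (x - l0) * tan(delta)
        return t_v * (h_c + l0 * t_d) / (1 + t_v * t_d)
    # Ground falls beyond l0: ray drop is h_c + (x - l0) * tan(delta)
    return t_v * (h_c - l0 * t_d) / (1 - t_v * t_d)
```

Two sanity checks on this reconstruction: a ray aimed exactly at the foot of the slope (tanγV = L0/hc) returns LTX = L0 for either slope type, and with δ = 0 both expressions reduce to the level-ground result hc×tanγV.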
In one embodiment, the third determining module 704 is further configured to: judge whether the target object is located on the slope; and when the target object is located on the slope, execute the step of determining the matched first axial distance determination strategy according to the type of the slope.
In one embodiment, the third determining module 704 is further configured to: if the perpendicular projection line angle γV is greater than or equal to 90°, determine that the target object is located on the slope; and if the perpendicular projection line angle γV is less than 90°, calculate the product of the tangent tanγV of the perpendicular projection line angle γV and the mounting height hc of the camera, and determine that the target object is located on the slope if the product is greater than the distance L0 between the slope and the camera.
In one embodiment, the apparatus further includes a fourth determining module configured to determine the product as the first axial distance LTX of the target object relative to the camera if the product is less than or equal to the distance L0 between the slope and the camera.
The above-mentioned means for determining the distance of the target object based on the surveillance video may be implemented in whole or in part by software, hardware, and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data required or generated for executing the method for determining the distance of the target object based on the monitoring video. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of determining a target object distance based on surveillance video.
It will be appreciated by those skilled in the art that the structure shown in FIG. 8 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium, which, when executed, may include the flows of the embodiments of the above methods. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. The volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, and data processing logic units based on quantum computing.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing embodiments represent only a few implementations of the application, and their descriptions are specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (10)

Publications (2)

Publication Number | Publication Date
CN118200479A (en) | 2024-06-14
CN118200479B (en) | 2024-10-22





Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
