Disclosure of Invention
The invention provides an object positioning method, an object positioning device and an electronic device, which are used to accurately locate the throwing point of an object thrown from height without intruding on personal privacy.
According to a first aspect of an embodiment of the present application, there is provided an object positioning method, including:
identifying a first position of a target object in a first picture captured by a first image capturing device and a second position of the target object in a second picture captured by a second image capturing device, wherein the first image capturing device and the second image capturing device are installed at the same horizontal height with their shooting directions parallel to a wall surface, and are located at the two bottom corner positions of the wall surface, separated by a preset horizontal distance;
determining, based on the first position, a first included angle representing the relative position of the target object with respect to the first image capturing device and the horizontal plane, and determining, based on the second position, a second included angle representing the relative position of the target object with respect to the second image capturing device and the horizontal plane;
and determining the throwing point position of the target object according to the first included angle, the second included angle and the preset horizontal distance between the first image capturing device and the second image capturing device.
Optionally, determining, based on the first position, the first included angle representing the relative position of the target object with respect to the first image capturing device and the horizontal plane includes:
determining the first included angle according to the field angle of the first image capturing device, the shooting angle of the first image capturing device and the first position;
determining, based on the second position, the second included angle representing the relative position of the target object with respect to the second image capturing device and the horizontal plane includes:
determining the second included angle according to the field angle of the second image capturing device, the shooting angle of the second image capturing device and the second position.
Optionally, determining the throwing point position of the target object according to the first included angle, the second included angle and the preset horizontal distance between the first image capturing device and the second image capturing device includes:
determining a first ratio of the height of the target object to a first horizontal distance between the target object and the first image capturing device according to the first included angle, and determining a second ratio of the height to a second horizontal distance between the target object and the second image capturing device according to the second included angle;
determining the height and some or all of the first horizontal distance and the second horizontal distance according to the first ratio, the second ratio and the preset horizontal distance, wherein the preset horizontal distance is the sum of the first horizontal distance and the second horizontal distance;
determining the throwing point position of the target object according to the height and the first horizontal distance, or determining the throwing point position of the target object according to the height and the second horizontal distance.
Optionally, after determining the height of the target object, the method includes:
determining the time required for the target object to land according to the vertical falling speed of the target object parallel to the wall surface and the height of the target object;
determining a first horizontal distance at landing according to the transverse speed of the target object parallel to the wall surface, the current first horizontal distance and the time required for the target object to land, and/or determining a second horizontal distance at landing according to the transverse speed of the target object parallel to the wall surface, the current second horizontal distance and the time required for the target object to land;
determining the perpendicular distance from the target object to the wall surface at landing according to the horizontal speed of the target object perpendicular to the wall surface and the time required for the target object to land;
and performing safety pre-warning based on the landing position of the target object, wherein the landing position is determined according to the perpendicular distance and some or all of the first horizontal distance and the second horizontal distance.
Optionally, determining the vertical falling speed comprises:
continuously positioning the target object based on at least two consecutive first pictures and at least two consecutive second pictures, and determining the height difference traversed between any two consecutive positionings of the target object;
determining the vertical falling speed according to the height difference and the time interval between the two consecutive positionings;
determining the transverse speed comprises:
continuously positioning the target object in the first pictures and the second pictures, and determining the horizontal distance difference traversed between any two consecutive positionings of the target object;
determining the transverse speed according to the horizontal distance difference and the time interval between the two consecutive positionings;
determining the horizontal speed comprises:
continuously positioning the target object in the first pictures and the second pictures, and determining the perpendicular distance difference, perpendicular to the wall surface, traversed between any two consecutive positionings of the target object;
and determining the horizontal speed according to the perpendicular distance difference and the time interval between the two consecutive positionings.
According to a second aspect of embodiments of the present application, there is provided an object positioning device, the device comprising:
An identification module, configured to identify a first position of a target object in a first picture captured by a first image capturing device and a second position of the target object in a second picture captured by a second image capturing device, wherein the first image capturing device and the second image capturing device are installed at the same horizontal height with their shooting directions parallel to a wall surface, and are located at the two bottom corner positions of the wall surface, separated by a preset horizontal distance;
an included angle determining module, configured to determine, based on the first position, a first included angle representing the relative position of the target object with respect to the first image capturing device and the horizontal plane, and to determine, based on the second position, a second included angle representing the relative position of the target object with respect to the second image capturing device and the horizontal plane;
and a position determining module, configured to determine the throwing point position of the target object according to the first included angle, the second included angle and the preset horizontal distance between the first image capturing device and the second image capturing device.
According to a third aspect of an embodiment of the present application, there is provided an electronic apparatus for object positioning, including: a memory, a processor;
wherein the memory is used for storing programs;
The processor is configured to execute the program in the memory to implement the method provided in the first aspect above.
According to a fourth aspect of embodiments of the present application, there is provided a chip, coupled to a memory in a user equipment, such that when running, the chip invokes program instructions stored in the memory to implement the above aspects of the embodiments of the present application and any possible method related to those aspects.
According to a fifth aspect of embodiments of the present application, there is provided a computer readable storage medium storing program instructions which, when run on a computer, cause the computer to perform the above aspects of the embodiments of the present application and any possible method related to those aspects.
According to a sixth aspect of embodiments of the present application, there is provided a computer program product which, when run on an electronic device, causes the electronic device to perform the above aspects of the embodiments of the present application and any possible method related to those aspects.
In addition, for the technical effects of any implementation of the second aspect to the sixth aspect, reference may be made to the technical effects of the corresponding implementations of the first aspect, which are not repeated herein.
The object positioning method, the object positioning device and the electronic device provided by the invention have the following beneficial effects:
According to the object positioning method, the object positioning device and the electronic device provided by the application, based on the positions of the same target object in the pictures captured by two image capturing devices, a first included angle representing the relative position of the target object with respect to the first image capturing device and the horizontal plane and a second included angle representing the relative position of the target object with respect to the second image capturing device and the horizontal plane can be determined, and the throwing point position of the target object can then be determined according to the two included angles and the horizontal distance between the two image capturing devices, while personal privacy is protected.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the embodiments of the invention, the term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The application scenarios described in the embodiments of the present invention are intended to describe the technical solutions of the embodiments more clearly and do not constitute a limitation on the technical solutions provided by the embodiments. As a person of ordinary skill in the art can appreciate, the technical solutions provided by the embodiments of the present invention are equally applicable to similar technical problems as new application scenarios appear. In the description of the present invention, unless otherwise indicated, "a plurality" means two or more.
In the high-altitude throwing phenomenon, a thrown object gains considerable energy as it falls and can cause serious harm if it hits a pedestrian or other object, so high-altitude throwing is a behavior that currently requires focused monitoring. However, because such uncivilized behavior mostly takes place on high floors, there are few witnesses and the throwing lasts only a short time, the monitoring process is difficult, and the related art suffers from high cost and is prone to invading personal privacy.
In view of the above problems, an embodiment of the present application provides an object positioning method. The method can determine, from the positions of the same target object in the pictures captured by two image capturing devices, the included angle between the target object and each image capturing device relative to the horizontal plane, and determine the throwing point position of the target object according to the determined included angles and the distance between the two image capturing devices.
Fig. 1 shows an object positioning method according to an embodiment of the present application, including:
Step S101, identifying a first position of a target object in a first picture captured by a first image capturing device and a second position of the target object in a second picture captured by a second image capturing device, wherein the first image capturing device and the second image capturing device are installed at the same horizontal height with their shooting directions parallel to a wall surface, and are located at the two bottom corner positions of the wall surface, separated by a preset horizontal distance;
As described above, the embodiments of the present application may be, but are not limited to, detecting the throwing point position of a high-altitude thrown object, and the target object may be, but is not limited to, an object thrown from a height. Since a high-altitude thrown object is a moving object, and the landing time of the object needs to be determined according to its falling speed, the first image capturing device and the second image capturing device in the embodiment of the present application may be image capturing devices with motion detection, specifically network cameras. As an alternative implementation, image capturing devices without motion detection may also be used, in which case the pictures captured by the first image capturing device and the second image capturing device are analyzed and processed by a device with a motion detection function, which may specifically be a network image capturing device; the analysis process will be described in the following embodiments.
In order to make the object positioning result more accurate, in the embodiment of the present application the first image capturing device and the second image capturing device need to be at the same horizontal installation height, and the shooting directions of the two devices are parallel to the wall surface, that is, the shooting angles of the two devices are parallel to the wall surface. Meanwhile, in order for the first image capturing device and the second image capturing device to be able to capture a target object thrown from any position of the wall surface of the whole building, in the embodiment of the present application the two devices are arranged at the two bottom corner positions of the wall surface, as shown in fig. 2. The installation height of the first image capturing device and the second image capturing device can be set by a person skilled in the art according to actual requirements; on the basis of being able to capture the target object, the maintenance conditions of the devices also need to be considered, so in the embodiment of the present application the first image capturing device and the second image capturing device can be set at a preset height above the ground. It should be noted that, since the first image capturing device and the second image capturing device are located at the corner positions of the wall surface, they are not limited to horizontally mounted image capturing devices.
Optionally, in order to be able to capture objects at various positions of the wall surface, the first image capturing device and the second image capturing device may be, but are not limited to, image capturing devices whose camera can rotate, where the rotation range needs to be greater than ninety degrees, or image capturing devices whose field angle is greater than ninety degrees. In addition, so that the positioning result, the landing time and the landing position of the target object can be determined with high accuracy, the first image capturing device and the second image capturing device in the embodiment of the present application may be, but are not limited to, image capturing devices with high resolution.
Step S102, determining a first included angle representing the relative position of the target object with respect to the first image capturing device and the horizontal plane based on the first position, and determining a second included angle representing the relative position of the target object with respect to the second image capturing device and the horizontal plane based on the second position;
The first included angle representing the relative position of the target object with respect to the first image capturing device and the horizontal plane is determined in the following manner:
The first included angle is determined according to the field angle of the first image capturing device, the shooting angle of the first image capturing device and the first position. It should be noted that, in the embodiment of the present application, the throwing point position of the target object includes the height of the throwing point and the horizontal distance from the first image capturing device or the horizontal distance from the second image capturing device; the specific position of the target object is determined according to the height of the throwing point and the horizontal distance from the first image capturing device, or according to the height of the throwing point and the horizontal distance from the second image capturing device.
Since the distance between the target object and the two lateral sides of the picture captured by the first image capturing device can characterize the distance between the target object and the wall surface, and the distance between the target object and the upper side or the lower side of the picture can characterize the height of the target object, the first position may be determined by, but is not limited to, the distance between the target object and the lower side of the picture captured by the first image capturing device and the distance between the target object and the left side of that picture.
The embodiment of the present application may, but is not limited to, determine the field angle of the first image capturing device in the following ways:
1) Calculating the field angle of the lens according to the zoom position parameter and the imaging height parameter corresponding to the zoom position;
Specifically, the field angle, also called the field of view in optical engineering and denoted FOV, determines the field of view of the optical instrument. The relationship between the field angle and the focal length is as follows:
image height = EFL × tan(FOV / 2);
where EFL is the focal length and FOV is the field angle. Therefore, the focal length of the first image capturing device can be obtained from the zoom position parameter, and the field angle of the current lens can be calculated from the focal length and the imaging height parameter using the above formula.
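By way of illustration only, the calculation in 1) can be sketched as follows (Python; the function name and the assumption that the focal length and imaging height share the same unit, for example millimeters, are not specified in the text):

import math

def field_angle_from_formula(efl, image_height):
    # image height = EFL * tan(FOV / 2)  =>  FOV = 2 * atan(image height / EFL)
    return math.degrees(2.0 * math.atan(image_height / efl))

# e.g. a lens whose focal length equals its imaging height has a 90-degree field angle
print(field_angle_from_formula(4.0, 4.0))  # -> 90.0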
2) Determining the field angle corresponding to the zoom position of the first image capturing device according to a preset mapping relationship between zoom positions and field angles.
Specifically, the field angle of the first image capturing device can be determined by looking up a table that stores the preset mapping relationship between zoom positions and field angles and finding the zoom position corresponding to the first image capturing device in the table.
It should be noted that the mapping relationship between zoom position and field angle differs slightly for lenses of different specifications; a person skilled in the art can select the mapping relationship corresponding to the first image capturing device according to actual requirements, which is not limited herein.
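A minimal sketch of the table lookup in 2) is given below; the zoom positions and field-angle values are purely hypothetical and would in practice be taken from the lens specification:

# Hypothetical mapping from zoom position to field angle in degrees.
ZOOM_POSITION_TO_FIELD_ANGLE = {0: 95.0, 1: 78.0, 2: 60.0, 3: 42.0}

def field_angle_from_table(zoom_position):
    return ZOOM_POSITION_TO_FIELD_ANGLE[zoom_position]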
In the embodiment of the application, the angle of the target object relative to the main optical axis of the first image capturing device is first determined according to the field angle of the first image capturing device and the first position, and the first included angle is then determined by summing this angle and the shooting angle of the first image capturing device. For example, if the field angle of the first image capturing device is 60 degrees, the shooting angle of the first image capturing device is 45 degrees, and the target object is located at the center of the picture captured by the first image capturing device, the angle of the target object relative to the main optical axis is zero and the first included angle is 45 degrees. As another example, if the field angle of the first image capturing device is 90 degrees and the shooting angle is 0 degrees, that is, the shooting direction of the first image capturing device is parallel to the horizontal plane, and the target object is located at the upper side of the picture captured by the first image capturing device, the angle between the target object and the main optical axis of the first image capturing device is 45 degrees and the first included angle is 45 degrees.
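By way of illustration, the sketch below shows one way the angle of the target object relative to the main optical axis, and hence the first included angle, could be computed from the field angle, the shooting angle and the vertical pixel position of the target object in the first picture. It assumes a pinhole-style mapping consistent with the imaging formula above (the tangent of the off-axis angle scales linearly with the pixel offset from the picture center); the text does not fix this mapping precisely, so the function and its parameters should be read as assumptions.

import math

def first_included_angle(field_angle_deg, shooting_angle_deg, row, picture_height):
    """Angle (degrees) between the camera-to-object line and the horizontal plane.

    row:            vertical pixel coordinate of the object (0 = upper side of the picture)
    picture_height: number of pixel rows in the picture
    """
    half_height = picture_height / 2.0
    offset = half_height - row  # positive towards the upper side of the picture
    # pinhole mapping: tan(off-axis angle) is proportional to the pixel offset
    off_axis = math.atan(offset / half_height * math.tan(math.radians(field_angle_deg / 2.0)))
    return shooting_angle_deg + math.degrees(off_axis)

# Examples from the text:
print(first_included_angle(60, 45, row=540, picture_height=1080))  # object at the picture center -> 45.0
print(first_included_angle(90, 0, row=0, picture_height=1080))     # object at the upper side -> 45.0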
Similarly, the second position may be determined by, but is not limited to, the distance between the target object and the lower side of the picture captured by the second image capturing device and the distance between the target object and the left side of that picture. In the embodiment of the present application, the second included angle representing the relative position of the target object with respect to the second image capturing device and the horizontal plane is determined from the second position as follows:
The second included angle is determined according to the field angle of the second image capturing device, the shooting angle of the second image capturing device and the second position; the specific implementation is the same as that described above for the first included angle and is not repeated here.
Step S103, determining the throwing point position of the target object according to the first included angle, the second included angle and the preset horizontal distance between the first image capturing device and the second image capturing device.
In the embodiment of the present application, the height of the throwing point of the target object, the horizontal distance between the target object and the first image capturing device, and the horizontal distance between the target object and the second image capturing device can be determined according to the first included angle, the second included angle and the preset horizontal distance between the first image capturing device and the second image capturing device, so as to determine the throwing point position of the target object. A specific implementation of determining the throwing point position according to the height of the target object and the first horizontal distance is shown in fig. 3:
Step S301, determining a first ratio of the height of the target object to a first horizontal distance between the target object and the first image capturing device according to the first included angle, and determining a second ratio of the height to a second horizontal distance between the target object and the second image capturing device according to the second included angle;
Step S302, determining the first horizontal distance and the height according to the first ratio, the second ratio and the preset horizontal distance, wherein the preset horizontal distance is the sum of the first horizontal distance and the second horizontal distance;
Step S303, determining the throwing point position of the target object according to the height of the target object and the first horizontal distance.
A specific implementation of determining the throwing point position of the target object according to the height of the target object and the second horizontal distance is shown in fig. 4:
Step S401, determining a first ratio of the height of the target object to a first horizontal distance between the target object and the first image capturing device according to the first included angle, and determining a second ratio of the height to a second horizontal distance between the target object and the second image capturing device according to the second included angle;
Step S402, determining the second horizontal distance and the height according to the first ratio, the second ratio and the preset horizontal distance, wherein the preset horizontal distance is the sum of the first horizontal distance and the second horizontal distance;
Step S403, determining the throwing point position of the target object according to the height of the target object and the second horizontal distance.
The manner of determining, according to the first included angle, the second included angle and the preset horizontal distance between the first image capturing device and the second image capturing device, the height of the throwing point of the target object, the horizontal distance between the target object and the first image capturing device and the horizontal distance between the target object and the second image capturing device, and thus the throwing point position of the target object, is described below with reference to a specific embodiment.
As shown in fig. 2, a triangle is constructed with the target object, the first image capturing device and the second image capturing device as vertices, so that the sides of the triangle are the line connecting the target object and the first image capturing device, the line connecting the target object and the second image capturing device, and the line connecting the first image capturing device and the second image capturing device. According to the tangent relationship in the right triangle, the tangent of the first included angle equals the first ratio of the height of the target object to the first horizontal distance, and the tangent of the second included angle equals the second ratio of the height of the target object to the second horizontal distance. Since the sum of the first horizontal distance and the second horizontal distance is the horizontal distance between the first image capturing device and the second image capturing device, the second horizontal distance can be expressed as the difference between that horizontal distance and the first horizontal distance. The specific formulas are as follows:
tan(α) = h / x    (1)
tan(β) = h / (W − x)    (2)
where α is the first included angle, β is the second included angle, h is the height of the target object, x is the first horizontal distance, and W is the distance between the first image capturing device and the second image capturing device. The height of the target object and the first horizontal distance can then be obtained from formula (1) and formula (2) as follows:
x = W · tan(β) / (tan(α) + tan(β)),  h = W · tan(α) · tan(β) / (tan(α) + tan(β))    (3)
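By way of illustration only (Python; the function name is hypothetical), the triangulation of formulas (1) to (3) can be written as:

import math

def throwing_point(alpha_deg, beta_deg, preset_distance_w):
    """Return (height h, first horizontal distance x) of the throwing point.

    alpha_deg:         first included angle, measured at the first image capturing device
    beta_deg:          second included angle, measured at the second image capturing device
    preset_distance_w: preset horizontal distance W between the two devices
    """
    tan_a = math.tan(math.radians(alpha_deg))
    tan_b = math.tan(math.radians(beta_deg))
    x = preset_distance_w * tan_b / (tan_a + tan_b)  # first horizontal distance, formula (3)
    h = x * tan_a                                    # height of the throwing point, formula (1)
    return h, x

# e.g. devices 30 m apart and included angles of 60 and 50 degrees
h, x = throwing_point(60.0, 50.0, 30.0)

The second horizontal distance, if needed, is simply preset_distance_w - x.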
In the embodiment of the present application, the landing time and the landing position of the target object can be determined according to the time, the height and the horizontal distance during the fall of the target object, so as to give a safety pre-warning. The target object moves along a parabola as it falls, and its velocity can be decomposed by direction into a velocity parallel to the wall surface and a velocity perpendicular to the wall surface; the velocity parallel to the wall surface can further be decomposed into a transverse speed parallel to the wall surface and a vertical falling speed parallel to the wall surface. Because the falling object is subject only to downward gravity, the horizontal speed perpendicular to the wall surface and the transverse speed parallel to the wall surface remain unchanged during the fall, while the vertical falling speed parallel to the wall surface increases continuously. Specifically, in the vertical direction parallel to the wall surface the target object falls freely under gravity, and the free-fall relationship is as follows:
h = Vh · t + (1/2) · g · t²    (4)
where h is the height of the target object, Vh is the vertical falling speed of the target object parallel to the wall surface, g is the gravitational acceleration, and t is the remaining landing time. The remaining landing time can therefore be determined from formula (4) as follows:
t = (−Vh + √(Vh² + 2 · g · h)) / g    (5)
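A minimal sketch of formula (5), assuming SI units and a downward-positive Vh (the text does not fix units or sign conventions):

import math

G = 9.81  # gravitational acceleration in m/s^2

def remaining_landing_time(height_h, v_vertical):
    """Remaining time until landing, solving formula (4) for t as in formula (5)."""
    return (-v_vertical + math.sqrt(v_vertical ** 2 + 2.0 * G * height_h)) / G

# e.g. an object 20 m above the ground, currently falling at 3 m/s
print(remaining_landing_time(20.0, 3.0))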
In the embodiment of the present application, the target object can be continuously positioned based on at least two consecutive first pictures and at least two consecutive second pictures, and the height difference traversed between any two consecutive positionings of the target object can be determined. The vertical falling speed Vh of the target object parallel to the wall surface can then be determined from the height difference and the time interval between the two consecutive positionings. As an alternative implementation, the quotient of the height difference traversed between two consecutive positionings and the time interval between them, i.e. the average vertical falling speed between the two positionings, is taken as the vertical falling speed corresponding to the time of those positionings. It should be noted that the time interval between two positionings can be set freely by a person skilled in the art, but the smaller the time interval, the more accurate the obtained vertical falling speed.
According to the vertical falling speed and the height of the target object, the remaining landing time of the target object can be obtained. If the height of the target object is the height corresponding to the throwing point, the vertical falling speed at that moment is the initial speed of the target object, that is, its speed when thrown, and the total landing time of the target object can be obtained.
Similarly, the target object is continuously positioned in the first pictures and the second pictures, and the horizontal distance difference traversed between any two consecutive positionings of the target object is determined; the transverse speed is determined according to the horizontal distance difference and the time interval between the two consecutive positionings. Specifically, the quotient of the horizontal distance difference and the time interval between the two consecutive positionings, i.e. the average transverse speed between the two positionings, is taken as the transverse speed corresponding to the times of those positionings.
Likewise, the target object is continuously positioned in the first pictures and the second pictures, and the perpendicular distance difference, perpendicular to the wall surface, traversed between any two consecutive positionings of the target object is determined; the horizontal speed is determined according to the perpendicular distance difference and the time interval between the two consecutive positionings. Specifically, the quotient of the perpendicular distance difference and the time interval between the two consecutive positionings, i.e. the average horizontal speed between the two positionings, is taken as the horizontal speed corresponding to the times of those positionings.
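The three speed estimates described above can be sketched as follows; the tuple layout of a positioning result is an assumption made for illustration:

def speeds_between_fixes(fix_1, fix_2):
    """Average speed components between two consecutive positionings.

    Each fix is assumed to be (time, height, distance along the wall, distance from the wall).
    Returns (vertical falling speed, transverse speed, horizontal speed).
    """
    t1, h1, a1, d1 = fix_1
    t2, h2, a2, d2 = fix_2
    dt = t2 - t1
    v_vertical = (h1 - h2) / dt    # downward speed parallel to the wall surface
    v_transverse = (a2 - a1) / dt  # lateral speed parallel to the wall surface
    v_horizontal = (d2 - d1) / dt  # speed perpendicular to the wall surface
    return v_vertical, v_transverse, v_horizontal

# e.g. two positionings 0.2 s apart
print(speeds_between_fixes((0.0, 20.0, 5.0, 0.0), (0.2, 19.4, 5.1, 0.3)))  # approximately (3.0, 0.5, 1.5)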
The lateral distance that the target object travels parallel to the wall surface during the fall can be obtained from the transverse speed of the target object parallel to the wall surface and the landing time. The first horizontal distance at landing can then be obtained from the current first horizontal distance between the target object and the first image capturing device and the lateral distance traveled during the fall, or the second horizontal distance at landing can be obtained from the current second horizontal distance between the target object and the second image capturing device and the lateral distance traveled during the fall; the lateral distance traveled needs to be correspondingly added to or subtracted from the first horizontal distance or the second horizontal distance depending on whether the target object moves to the left or to the right during the fall.
In the embodiment of the present application, the safety pre-warning is performed according to the perpendicular distance from the wall surface at landing and the horizontal distance from the first image capturing device or the second image capturing device at landing. Specifically, the product of the landing time and the horizontal speed of the target object perpendicular to the wall surface is taken as the perpendicular distance from the wall surface at landing. As an optional implementation, the safety pre-warning area can be obtained by extending the horizontal distance from the first image capturing device or the second image capturing device at landing by a certain distance forwards and backwards, and extending the obtained perpendicular distance from the wall surface by a certain distance.
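Putting the above quantities together, a landing estimate and a rectangular pre-warning area could look like the sketch below; the sign convention (transverse speed positive away from the first image capturing device) and the safety margin are assumptions for illustration:

import math

G = 9.81  # gravitational acceleration in m/s^2

def landing_estimate(height_h, first_horizontal_distance, v_vertical, v_transverse, v_horizontal, margin=2.0):
    """Estimate the landing point and a rectangular safety pre-warning area (meters)."""
    t = (-v_vertical + math.sqrt(v_vertical ** 2 + 2.0 * G * height_h)) / G  # remaining landing time, formula (5)
    x_land = first_horizontal_distance + v_transverse * t  # distance along the wall at landing
    d_land = v_horizontal * t                               # perpendicular distance from the wall at landing
    warning_area = {
        "along_wall": (x_land - margin, x_land + margin),   # extended forwards and backwards
        "from_wall": (0.0, d_land + margin),                # extended away from the wall surface
    }
    return t, x_land, d_land, warning_area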
In order to improve the warning effect, in the embodiment of the present application the pre-warning level can be raised. As an optional implementation, the pre-warning level can be raised when the vertical falling speed is high or the remaining landing time is short. The specific pre-warning process may include, but is not limited to, giving a warning by voice prompt or by using strip lights in the pre-warning area.
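For completeness, one hypothetical way to map the remaining landing time and the vertical falling speed to a pre-warning level is sketched below; the thresholds are invented for illustration and do not come from the text:

def pre_warning_level(remaining_time_s, v_vertical):
    # hypothetical thresholds: a shorter remaining time or a higher speed raises the level
    if remaining_time_s < 1.0 or v_vertical > 15.0:
        return "high"
    if remaining_time_s < 2.5 or v_vertical > 8.0:
        return "medium"
    return "low"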
The object positioning method according to the present invention is described above, and a device for performing the object positioning will be described below.
Referring to fig. 5, an object positioning device provided in an embodiment of the present invention includes:
An identifying module 501, configured to identify a first position of a target object in a first image captured by a first image capturing device, and a second position of the target object in a second image captured by a second image capturing device, where the first image capturing device and the second image capturing device are at a same horizontal installation height and a capturing direction is parallel to a wall surface, and the first image capturing device and the second image capturing device are located at two corner positions at a bottom of the wall surface at a preset horizontal distance;
An included angle determining module 502, configured to determine, based on the first position, a first included angle that represents a relative position of the target object to the first image capturing device and a horizontal plane, and determine, according to the second position, a second included angle that represents a relative position of the target object to the second image capturing device and the horizontal plane;
A position determining module 503, configured to determine a position of a throwing point of the target object according to the first included angle, the second included angle, and a preset horizontal distance between the first image capturing device and the second image capturing device.
Optionally, the included angle determining module 502 is configured to determine, based on the first position, a first included angle that indicates a relative position of the target object, the first image capturing device, and a horizontal plane, and specifically is configured to:
determining the first included angle according to the field angle of the first image capturing device, the shooting angle of the first image capturing device and the first position;
and determining, based on the second position, the second included angle representing the relative position of the target object with respect to the second image capturing device and the horizontal plane includes:
determining the second included angle according to the field angle of the second image capturing device, the shooting angle of the second image capturing device and the second position.
Optionally, the position determining module 503 is configured to determine a position of a throwing point of the target object according to the first included angle, the second included angle, and a horizontal distance between the first image capturing device and the second image capturing device, and specifically is configured to:
determining a first ratio of the height of the target object to a first horizontal distance between the target object and the first image capturing device according to the first included angle, and determining a second ratio of the height to a second horizontal distance between the target object and the second image capturing device according to the second included angle;
determining the height and some or all of the first horizontal distance and the second horizontal distance according to the first ratio, the second ratio and the preset horizontal distance, wherein the preset horizontal distance is the sum of the first horizontal distance and the second horizontal distance;
determining the throwing point position of the target object according to the height and the first horizontal distance, or determining the throwing point position of the target object according to the height and the second horizontal distance.
Optionally, after determining the height of the target object, the position determining module 503 is further configured to:
determining the time required for the target object to land according to the vertical falling speed of the target object parallel to the wall surface and the height of the target object;
determining a first horizontal distance at landing according to the transverse speed of the target object parallel to the wall surface, the current first horizontal distance and the time required for the target object to land, and/or determining a second horizontal distance at landing according to the transverse speed of the target object parallel to the wall surface, the current second horizontal distance and the time required for the target object to land;
determining the perpendicular distance from the target object to the wall surface at landing according to the horizontal speed of the target object perpendicular to the wall surface and the time required for the target object to land;
and performing safety pre-warning based on the landing position of the target object, wherein the landing position is determined according to the perpendicular distance and some or all of the first horizontal distance and the second horizontal distance.
Optionally, the position determining module 503 is configured to determine the vertical falling speed in the following manner:
continuously positioning the target object based on at least two consecutive first pictures and at least two consecutive second pictures, and determining the height difference traversed between any two consecutive positionings of the target object;
determining the vertical falling speed according to the height difference and the time interval between the two consecutive positionings;
determining the transverse speed comprises:
continuously positioning the target object in the first pictures and the second pictures, and determining the horizontal distance difference traversed between any two consecutive positionings of the target object;
determining the transverse speed according to the horizontal distance difference and the time interval between the two consecutive positionings;
determining the horizontal speed comprises:
continuously positioning the target object in the first pictures and the second pictures, and determining the perpendicular distance difference, perpendicular to the wall surface, traversed between any two consecutive positionings of the target object;
and determining the horizontal speed according to the perpendicular distance difference and the time interval between the two consecutive positionings.
An object positioning device in the embodiment of the present application is described above in terms of a modularized functional entity, and an electronic apparatus for object positioning in the embodiment of the present application is described below in terms of hardware processing.
Referring to fig. 6, an electronic device for object positioning according to an embodiment of the present application includes:
at least one processor 601 and at least one memory 602, and a bus system 609;
Wherein the memory stores program code that, when executed by the processor, causes the processor to perform the following:
identifying a first position of a target object in a first picture captured by a first image capturing device and a second position of the target object in a second picture captured by a second image capturing device, wherein the first image capturing device and the second image capturing device are installed at the same horizontal height with their shooting directions parallel to a wall surface, and are located at the two bottom corner positions of the wall surface, separated by a preset horizontal distance;
determining, based on the first position, a first included angle representing the relative position of the target object with respect to the first image capturing device and the horizontal plane, and determining, based on the second position, a second included angle representing the relative position of the target object with respect to the second image capturing device and the horizontal plane;
and determining the throwing point position of the target object according to the first included angle, the second included angle and the preset horizontal distance between the first image capturing device and the second image capturing device.
Fig. 6 is a schematic diagram of an electronic device for object positioning according to an embodiment of the present application. The device 600 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 601 (e.g., one or more processors), a memory 602, and one or more storage media 603 (e.g., one or more mass storage devices) storing application programs 604 or data 605. The memory 602 and the storage medium 603 may be transitory or persistent storage. The program stored in the storage medium 603 may include one or more modules (not shown), each of which may include a series of instruction operations for the device. Furthermore, the processor 601 may be arranged to communicate with the storage medium 603 and to execute, on the device 600, the series of instruction operations in the storage medium 603.
The device 600 may also include one or more wired or wireless network interfaces 607, one or more input/output interfaces 608, and/or one or more operating systems 606, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc.
Optionally, determining, based on the first position, the first included angle representing the relative position of the target object with respect to the first image capturing device and the horizontal plane includes:
determining the first included angle according to the field angle of the first image capturing device, the shooting angle of the first image capturing device and the first position;
determining, based on the second position, the second included angle representing the relative position of the target object with respect to the second image capturing device and the horizontal plane includes:
determining the second included angle according to the field angle of the second image capturing device, the shooting angle of the second image capturing device and the second position.
Optionally, determining the throwing point position of the target object according to the first included angle, the second included angle and the preset horizontal distance between the first image capturing device and the second image capturing device includes:
determining a first ratio of the height of the target object to a first horizontal distance between the target object and the first image capturing device according to the first included angle, and determining a second ratio of the height to a second horizontal distance between the target object and the second image capturing device according to the second included angle;
determining the height and some or all of the first horizontal distance and the second horizontal distance according to the first ratio, the second ratio and the preset horizontal distance, wherein the preset horizontal distance is the sum of the first horizontal distance and the second horizontal distance;
determining the throwing point position of the target object according to the height and the first horizontal distance, or determining the throwing point position of the target object according to the height and the second horizontal distance.
Optionally, after determining the height of the target object, the method includes:
determining the time required for the target object to land according to the vertical falling speed of the target object parallel to the wall surface and the height of the target object;
determining a first horizontal distance at landing according to the transverse speed of the target object parallel to the wall surface, the current first horizontal distance and the time required for the target object to land, and/or determining a second horizontal distance at landing according to the transverse speed of the target object parallel to the wall surface, the current second horizontal distance and the time required for the target object to land;
determining the perpendicular distance from the target object to the wall surface at landing according to the horizontal speed of the target object perpendicular to the wall surface and the time required for the target object to land;
and performing safety pre-warning based on the landing position of the target object, wherein the landing position is determined according to the perpendicular distance and some or all of the first horizontal distance and the second horizontal distance.
Optionally, determining the vertical falling speed comprises:
continuously positioning the target object based on at least two consecutive first pictures and at least two consecutive second pictures, and determining the height difference traversed between any two consecutive positionings of the target object;
determining the vertical falling speed according to the height difference and the time interval between the two consecutive positionings;
determining the transverse speed comprises:
continuously positioning the target object in the first pictures and the second pictures, and determining the horizontal distance difference traversed between any two consecutive positionings of the target object;
determining the transverse speed according to the horizontal distance difference and the time interval between the two consecutive positionings;
determining the horizontal speed comprises:
continuously positioning the target object in the first pictures and the second pictures, and determining the perpendicular distance difference, perpendicular to the wall surface, traversed between any two consecutive positionings of the target object;
and determining the horizontal speed according to the perpendicular distance difference and the time interval between the two consecutive positionings.
The embodiment of the invention also provides a computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of object positioning provided by the above embodiment.
The embodiment of the application also provides a computer program product, which comprises a computer program, wherein the computer program comprises program instructions, when the program instructions are executed by electronic equipment, the electronic equipment is caused to execute the object positioning method provided by the embodiment.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state drive (SSD)), etc.
The technical solutions provided by the present application have been described above in detail. Specific examples are used herein to illustrate the principles and embodiments of the present application, and the above description of the embodiments is only intended to help understand the method and core ideas of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the ideas of the present application. In view of the above, the content of this description should not be construed as limiting the present application.