Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide a visualized data input method based on intersection information holographic sensing, so as to solve the problem in the prior art that the image information acquired by a plurality of cameras contains repeated images and redundant image information.
To this end, the invention provides a visualized data input method based on intersection information holographic sensing, which comprises the following steps:
acquiring a plurality of images of an intersection captured by a plurality of video acquisition devices, wherein the image information comprises position information of the intersection to be detected and of the target objects, and the types of the target objects comprise motor vehicles, non-motor vehicles and pedestrians;
dividing the intersection to be detected into a plurality of virtual grid areas;
identifying a virtual grid area where a target object in the plurality of image information is located;
calculating the center position of the target object according to the virtual grid area where the target object is located;
calculating the distance between the center positions of any two target objects of the same type;
judging whether the distance is smaller than a preset threshold value or not;
when the distance is smaller than a preset threshold value, combining the two target objects of the same type into the same target object;
converting the plurality of images into a single electronic map of the intersection information according to the intersection to be detected and the merged target objects;
tracking the real-time position of a target object, and extracting the real-time parameter information of the target object;
and marking the real-time parameter information of the target object on the electronic map for real-time tracking display.
Optionally, the side lengths of the grids in different regions are set respectively, and the preset thresholds of different target object types are set respectively.
Optionally, the side length of the grid in the pedestrian crossing area and the pedestrian crossing waiting area is set to 0.1-0.2m, the side length of the grid in the motor vehicle lane areas within the intersection is set to 1-2m, and that in the non-motor vehicle lane areas is set to 0.3-1m.
Optionally, when the type of the target object is a motor vehicle, the corresponding preset threshold is 1-2m; when the type of the target object is a non-motor vehicle, the corresponding preset threshold is 0.3-1m; and when the type of the target object is a pedestrian, the corresponding preset threshold is 0.1-0.2m.
Optionally, when the target object is a motor vehicle, the extracted real-time parameter information includes one or more of the following:
sequence number: assigned in order as motor vehicles from a given direction enter the intersection, starting from 00:00:00 of the current day; its main function is to count, in real time, the number of vehicles that have passed through the intersection since 00:00:00 and the number of stops made while passing through;
Time: the time when the motor vehicle enters the intersection;
the type of the motor vehicle: large or small vehicle;
vehicle identification information: a license plate or electronic number plate;
the current position of the motor vehicle;
the current speed of the motor vehicle;
the current direction of the motor vehicle;
the distance of the motor vehicle from the intersection exit position;
the time when the motor vehicle exits the intersection;
and the exit lane number in the direction in which the motor vehicle exits.
Optionally, the parameter information of the intersection includes one or more of the following information:
the intersection identification information, the phase area, the pedestrian crossing waiting area, the non-motor vehicle lane area, the intersection inner area and the exit lane number.
Optionally, when the target object is a pedestrian, the extracted real-time parameter information includes one or more of the following information:
the distance from the pedestrian to the curb line;
the distance from the pedestrian to the opposite curb line.
Optionally, when the target object is a non-motor vehicle, the extracted real-time parameter information includes one or more of the following information:
speed of the non-motor vehicle;
and the time at which the non-motor vehicle will exit the intersection in its current driving direction.
Optionally, part or all of the parameter information of the motor vehicles, the intersection, the non-motor vehicles and/or the pedestrians is marked on the electronic map.
Optionally, the intersection comprises an area surrounded by the intersection stop line and its extension line.
The invention provides a visualized data input method based on intersection information holographic sensing. The method first acquires a plurality of images of an intersection captured by a plurality of video devices, the images containing the intersection itself and a plurality of target objects such as motor vehicles, non-motor vehicles and pedestrians. In order to effectively extract the information relevant to traffic control at the intersection and to avoid repeated images and redundant image content (such as green belts, buildings and the like), the target objects are first de-duplicated: the same target appearing in several images is identified and merged by means of virtual grids, so that the useful objects at the intersection are restored. The intersection and the merged target objects are then converted into an electronic map, which removes the redundant information irrelevant to traffic control. The traffic information of the intersection can thus be displayed on a real-time electronic map that faithfully restores the situation at the intersection. This solves the problem that the image information gathered by a plurality of cameras contains repeated images and redundant image information, allows the effective traffic information of the intersection to be monitored more intuitively, improves the intuitiveness and accuracy of the intersection traffic information, and provides a prerequisite for effective intersection traffic control.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments that can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention are within the scope of the present invention. In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The method in this embodiment is applied to the field of traffic control and is particularly suitable for collecting and displaying effective traffic information of intersections. At present, cameras facing multiple traffic directions are generally arranged at a traffic intersection to collect traffic information in those directions, but this information is displayed separately at the traffic control center, so the overall traffic condition of the intersection cannot be well restored. The information collected by the plurality of cameras contains repeated content, and the images also contain irrelevant content such as green belts, trees and buildings, so the image information received by the traffic control center carries a large amount of invalid information, which may even occupy most of the picture, and the actual traffic condition of the intersection cannot be reflected directly. Based on this, the present embodiment provides a visualized data input method based on intersection information holographic sensing, which is used for better extracting effective information of an intersection and displaying it intuitively. As shown in fig. 1, the method includes the following steps:
S1, acquiring a plurality of images of the intersection captured by a plurality of video acquisition devices, wherein the image information comprises position information of the intersection to be detected and of the target objects, and the types of the target objects comprise motor vehicles, non-motor vehicles and pedestrians. The extracted information of the intersection to be detected comprises one or more of the following: intersection identification information, the phase areas, the pedestrian crossing waiting areas, the non-motor vehicle lane areas, the area inside the intersection, and the exit lane numbers.
The intersection comprises the area surrounded by the intersection stop lines and their extension lines. Pedestrian crosswalks crossing the intersection are arranged inside this area; pedestrians and non-motor vehicles pass through the intersection via the crosswalks or the non-motor vehicle lanes (such as bicycle lanes), while motor vehicles pass through along their driving paths as indicated by the traffic lights in the intersection. Cameras are arranged in multiple directions of the intersection, so a plurality of images can be acquired from different angles, and repeated content exists among them. The collected images include images of the intersection, images of the motor vehicles, non-motor vehicles and pedestrians passing through it, and redundant images of the street lamps, buildings, trees and the like around it. The same motor vehicles, non-motor vehicles or pedestrians appearing in the plurality of images must first be merged to restore the actual passing targets of the intersection; steps S2-S7 below perform this merging.
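Purely as an illustration (the patent prescribes no data structures), the per-camera detections produced in step S1 could be held in a record like the following Python sketch; all field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One target object observed in one camera image (hypothetical schema)."""
    camera_id: str     # which video acquisition device produced the image
    object_type: str   # "motor_vehicle", "non_motor_vehicle" or "pedestrian"
    x: float           # position on the road plane, in metres
    y: float

# Example: the same pedestrian seen by two cameras from different angles
# yields two nearby detections that steps S2-S7 will merge into one.
detections = [
    Detection("cam_north", "pedestrian", 3.12, 7.05),
    Detection("cam_east",  "pedestrian", 3.18, 7.01),
]
```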
S2, dividing the intersection to be detected into a plurality of virtual grid areas.
The grid is a virtual grid. The intersection information, namely the area surrounded by the intersection stop lines and their extension lines, is first restored from the intersection data in the plurality of images, and this area is then divided into a plurality of virtual grid cells. The cells may all be given the same size, or different sizes may be used for different regions. To identify pedestrians accurately, the cell area must be small, no larger than the projection area normally occupied by a person. In this embodiment, the side lengths of the grids in different regions are set separately, as are the preset thresholds for the different target object types. Since pedestrians pass through the pedestrian crossing area and the pedestrian crossing waiting area, the grid side length there is set to 0.1-0.2m, preferably 0.1m; the smaller the grid, the more accurate the identification, but the larger the data volume. In the motor vehicle lane areas within the intersection, the grid is set to 1-2m: since a vehicle is generally longer than 2m, a larger grid such as 1m or 1.5m suffices where motor vehicles pass. In the areas where non-motor vehicles pass, such as a bicycle lane, a bicycle is longer than 1m, so the grid can be set to 0.3-1m, such as 0.5m or 0.8m. The guiding principle is that the grid side length should not exceed the length of the projection of the passing target object on the road; repeated target objects can then be removed by combining objects whose centers in different images lie close together into a single object.
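A minimal sketch of this region-dependent grid configuration, picking one representative side length from each range above (the region names and the concrete values are assumptions):

```python
# Virtual grid cell side length (metres) per intersection region, following
# the ranges given above; the concrete values chosen here are illustrative.
GRID_SIDE_M = {
    "pedestrian_crossing":     0.1,   # range 0.1-0.2 m
    "pedestrian_waiting_area": 0.1,   # range 0.1-0.2 m
    "motor_vehicle_lane":      1.0,   # range 1-2 m
    "non_motor_vehicle_lane":  0.5,   # range 0.3-1 m
}
```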
S3, identifying the virtual grid area where each target object in the image information is located.
After the intersection information is gridded, the virtual grid cell in which a target object is located can be obtained from the position of the target object.
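For illustration, mapping a road-plane position to its virtual grid cell reduces to integer division by the region's cell side length; a sketch under the configuration assumed above:

```python
def cell_of(x: float, y: float, side: float) -> tuple[int, int]:
    """Return the (column, row) index of the virtual grid cell that contains
    the road-plane position (x, y), for cells of the given side length."""
    return (int(x // side), int(y // side))

# A pedestrian at (3.12, 7.05) on a crossing gridded at 0.1 m:
print(cell_of(3.12, 7.05, 0.1))   # (31, 70)
```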
S4, calculating the center position of the target object according to the virtual grid area where the target object is located.
After the virtual grid area where the target object is located is obtained, the center position is easily determined. The center position here is the center of the projection of the target object on the road surface.
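One way to determine this center (an assumption; the patent only states that it is easily obtained from the grid area) is to take the centroid of the centers of the cells the target occupies:

```python
def center_position(cells: set[tuple[int, int]], side: float) -> tuple[float, float]:
    """Approximate the center of a target's road-surface projection as the
    centroid of the centers of the grid cells it occupies."""
    xs = [(i + 0.5) * side for i, _ in cells]
    ys = [(j + 0.5) * side for _, j in cells]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A small car occupying two adjacent 1 m cells:
print(center_position({(4, 10), (5, 10)}, 1.0))   # (5.0, 10.5)
```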
S5, calculating the distance between the center positions of any two target objects of the same type.
When removing duplicates, the duplicate data are removed separately for each type of target object, that is, for the motor vehicles, pedestrians and non-motor vehicles appearing in the plurality of images acquired by different cameras. Once the centers of two target objects of the same type are determined, the distance between the center positions is easily calculated, for example from the relative coordinate positions of the grid cells and the grid side lengths.
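A sketch of that distance computation, recovering metric centers from cell indices and side length as described above:

```python
import math

def center_distance(c1: tuple[float, float], c2: tuple[float, float]) -> float:
    """Euclidean distance between two center positions on the road plane."""
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1])

# Centers recovered from cell indices on the 0.1 m pedestrian grid:
side = 0.1
a = ((31 + 0.5) * side, (70 + 0.5) * side)   # center of cell (31, 70)
b = ((32 + 0.5) * side, (70 + 0.5) * side)   # center of cell (32, 70)
print(center_distance(a, b))                  # ~0.1 m: one cell apart
```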
S6, judging whether the distance is smaller than the preset threshold value.
If the distance between two same-type target objects is smaller than the minimum radius of such an object, two distinct physical objects could not occupy those positions without overlapping; the two detections can therefore be judged to be actually the same target object, and motor vehicles, pedestrians and non-motor vehicles are merged in this way. Since different types of target objects have different sizes, the preset thresholds differ: when the type of the target object is a motor vehicle, the corresponding preset threshold is 1-2m; when it is a non-motor vehicle, 0.3-1m; and when it is a pedestrian, 0.1-0.2m. These values can also be adjusted appropriately based on empirical information. When the distance between target objects is greater than the preset threshold, they are considered to be different objects; when the distance is smaller, step S7 is performed.
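The type-dependent comparison of step S6 might look as follows; the representative values picked from the ranges above are assumptions:

```python
# Merge thresholds (metres) per target type, one representative value
# chosen from each range stated above.
MERGE_THRESHOLD_M = {
    "motor_vehicle":     1.5,   # range 1-2 m
    "non_motor_vehicle": 0.5,   # range 0.3-1 m
    "pedestrian":        0.15,  # range 0.1-0.2 m
}

def is_same_object(distance: float, object_type: str) -> bool:
    """Two same-type detections closer than the type's threshold are judged
    to be the same physical target (step S6)."""
    return distance < MERGE_THRESHOLD_M[object_type]
```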
S7, merging the two target objects of the same type into the same target object when the distance is smaller than the preset threshold value.
At this time, the distance between the two target objects is smaller than the reasonably set preset threshold, that is, the two detections coincide, so they can be judged to be actually the same target object. In this way, the repeated information of the motor vehicles, non-motor vehicles and pedestrians is merged.
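One possible merge procedure covering steps S5-S7 as a whole; the greedy single pass and the averaging of merged centers are assumptions of this sketch, not prescribed by the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    object_type: str
    x: float
    y: float

THRESHOLD_M = {"motor_vehicle": 1.5, "non_motor_vehicle": 0.5, "pedestrian": 0.15}

def merge_duplicates(dets: list[Detection]) -> list[Detection]:
    """Greedily merge same-type detections whose centers lie closer than the
    type-specific threshold, averaging the center estimates (steps S5-S7)."""
    merged: list[Detection] = []
    for d in dets:
        for m in merged:
            if (m.object_type == d.object_type
                    and math.hypot(m.x - d.x, m.y - d.y) < THRESHOLD_M[d.object_type]):
                # Same physical object seen by another camera: keep one
                # record and refine its center instead of duplicating it.
                m.x, m.y = (m.x + d.x) / 2, (m.y + d.y) / 2
                break
        else:
            merged.append(d)
    return merged

# Two cameras see the same pedestrian ~7 cm apart; a third detection is a
# different pedestrian several metres away.
dets = [Detection("pedestrian", 3.12, 7.05),
        Detection("pedestrian", 3.18, 7.01),
        Detection("pedestrian", 9.40, 2.20)]
print(len(merge_duplicates(dets)))   # 2
```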
S8, converting the image information into an electronic map of the intersection information according to the intersection to be detected and the merged target objects.
After the intersection information and the passing targets within it are extracted, an electronic map of the intersection information is generated from the intersection and the targets, as shown in fig. 2, in order to avoid the influence of the redundant data (background information such as street lamps, buildings and trees) in the data collected by the cameras. In this way only the effective information of the intersection is retained, and the traffic information of the intersection is represented more intuitively.
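A sketch of this conversion: only the intersection layout and the merged targets are carried over into the map representation, so background imagery never enters it. The structure and key names here are hypothetical:

```python
def to_electronic_map(intersection_id: str, targets: list[dict]) -> dict:
    """Build an electronic-map representation holding only traffic-relevant
    content: a static intersection layout plus a dynamic marker per target."""
    return {
        "intersection": intersection_id,   # static layout, drawn once
        "markers": [                       # dynamic layer, refreshed in real time
            {"type": t["object_type"], "pos": (t["x"], t["y"])}
            for t in targets
        ],
    }

# Example with one merged pedestrian detection:
emap = to_electronic_map("K-101", [{"object_type": "pedestrian", "x": 3.15, "y": 7.03}])
print(emap["markers"])
```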
S9, tracking the real-time positions of the target objects and extracting their real-time parameter information.
For a motor vehicle, the extracted real-time parameter information comprises one or more of the following (a data-structure sketch follows the three parameter lists below):
sequence number: assigned in order as motor vehicles from a given direction enter the intersection, starting from 00:00:00 of the current day; its main function is to count, in real time, the number of vehicles that have passed through the intersection since 00:00:00 and the number of stops made while passing through;
time: the time when the motor vehicle enters the intersection;
the type of the motor vehicle: large or small vehicle;
Vehicle identification information: a license plate or electronic number plate;
the current position of the motor vehicle;
the current speed of the motor vehicle;
For a pedestrian, the extracted real-time parameter information comprises one or more of the following:
the distance from the pedestrian to the curb line;
the distance from the pedestrian to the opposite curb line.
For a non-motor vehicle, the extracted real-time parameter information comprises one or more of the following:
speed of the non-motor vehicle;
and the time at which the non-motor vehicle will exit the intersection in its current driving direction.
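The parameter lists above could be carried in records like the following sketch; the field names and units are hypothetical, and only the quantities themselves come from the lists:

```python
from dataclasses import dataclass

@dataclass
class MotorVehicleParams:
    sequence_number: int              # entry order in its direction since 00:00:00
    entry_time: str                   # time the vehicle entered the intersection
    vehicle_type: str                 # "large" or "small"
    vehicle_id: str                   # license plate or electronic number plate
    position: tuple[float, float]     # current position
    speed_mps: float                  # current speed
    heading_deg: float                # current direction

@dataclass
class PedestrianParams:
    dist_to_curb_m: float             # distance from the pedestrian to the curb line
    dist_to_opposite_curb_m: float    # distance to the opposite curb line

@dataclass
class NonMotorVehicleParams:
    speed_mps: float                  # current speed
    exit_time: str                    # predicted exit time on the current direction
```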
S10, marking the real-time parameter information of the target objects on the electronic map for real-time tracking display.
Part or all of the parameter information of the motor vehicles, the intersection, the non-motor vehicles and/or the pedestrians is marked on the electronic map.
All kinds of data of the motor vehicles and non-motor vehicles tracked by video, radar and the like, from the intersection center outward to the adjacent upstream and downstream intersections, together with the pedestrian data of the pedestrian waiting areas and crosswalks of the intersection, are displayed in real time on the electronic map of the intersection, and the target data are output directly to the control module of the signal control system through a network interface. The detection input of the signal control system is thereby upgraded from the simple '1'/'0' presence data delivered over connection terminals to a network input mode in which multiple kinds of data are fused, raising the signal control level of the intersection to a new level.
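As a sketch of that network output (the patent specifies a network interface and a protocol format but not which one, so JSON over TCP and the host/port below are placeholders chosen purely for illustration):

```python
import json
import socket

def send_to_signal_controller(targets: list[dict],
                              host: str = "192.0.2.10", port: int = 9000) -> None:
    """Push the fused target data to the control module of the signal control
    system over the network. JSON over TCP stands in for the 'specified
    protocol format'; the host and port are placeholder values."""
    payload = json.dumps({"targets": targets}).encode("utf-8")
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(payload)
```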
In summary, the visualized data input method based on intersection information holographic sensing provided by the invention first de-duplicates the target objects across the plurality of camera images by means of the virtual grids, then converts the intersection and the merged targets into an electronic map that strips out content irrelevant to traffic control, and displays the intersection traffic information on that map in real time. This solves the problem of repeated and redundant image information among the images gathered by the plurality of cameras, makes the effective traffic information of the intersection easier to monitor intuitively, improves the intuitiveness and accuracy of the intersection traffic information, and provides a prerequisite for effective intersection traffic control.
The visualized data input method provided by the invention accurately identifies the positions of vehicles in the intersection through the virtual grid areas, so that the same motor vehicles appearing in the images shot by the plurality of cameras at the intersection are merged and the motor vehicles and their positions in the intersection are accurately identified; real-time vehicle information is then acquired by tracking these positions, providing basic information for intelligent traffic control.
Through video tracking or radar tracking, the method obtains, within the set detection range of the intersection, data such as the accurate current position, speed and distance to the stop line of each controlled motor vehicle, and the accurate current position, speed and distance to the curb of each non-motor vehicle; it sends these data to the data processing unit of the intersection signal controller through a network interface in a specified protocol format, and at the same time displays, accurately and in real time, all state data of the controlled targets and the current lamp colors of the signal lights on the plane electronic map of the intersection. Through this plane electronic map one can visually verify whether the actual positions of all motor vehicles, non-motor vehicles and pedestrians within the detection range at the current moment are consistent with the positions displayed on the map at that moment.
Furthermore, the present invention provides an electronic device for intelligent control of traffic intersections, the device comprising:
a processor; and
a memory communicatively coupled to the processor and storing computer readable instructions executable by the processor, wherein when the computer readable instructions are executed, the processor performs the visualized data input method based on intersection information holographic sensing described above.
Finally, the present invention provides a storage medium storing computer instructions for causing a computer to perform the visualized data input method based on intersection information holographic sensing described above.
Specifically, fig. 3 shows a schematic structural diagram of a computer system 600 suitable for implementing the visualized data input method based on intersection information holographic sensing or the processor according to the embodiments of the invention; the system shown in fig. 3 implements the corresponding functions of the electronic device and the processor.
As shown in fig. 3, the computer system 600 includes a Central Processing Unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. Various programs and data necessary for the operation of the system 600 are also stored in the RAM 603. The CPU 601, ROM 602 and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to fig. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method of fig. 1. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be understood that the above embodiments are only examples for clearly illustrating the present invention and are not intended to limit it. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively here, and obvious variations or modifications derived therefrom remain within the scope of the invention.