Disclosure of Invention
The present application aims to solve, at least to some extent, one of the above-mentioned technical problems.
Accordingly, a first object of the present application is to provide a method for acquiring delay index data of a road intersection.
A second object of the present application is to provide a device for acquiring delay index data of a road intersection.
A third object of the present application is to provide an intelligent transportation system.
A fourth object of the present application is to provide an electronic device.
A fifth object of the present application is to provide a storage medium.
A sixth object of the present application is to provide a computer program product.
To achieve the above objects, an embodiment of a first aspect of the present application provides a method for acquiring delay index data of a road intersection, where a first camera and a second camera are disposed at a traffic light of the road intersection, the first camera faces an entering lane of the flow direction controlled by the traffic light, and the second camera faces an exiting lane of that flow direction. The method includes: obtaining the real-time position and moving speed of each vehicle within the shooting range of the first camera by performing real-time recognition on the video captured by the first camera; obtaining the real-time position and moving speed of each vehicle within the shooting range of the second camera by performing real-time recognition on the video captured by the second camera; determining the actual passing time taken by each vehicle to pass through the road intersection according to the real-time positions and moving speeds of the vehicles within the two shooting ranges; determining a reference passing time, measured in advance, for a vehicle to pass through the road intersection; and acquiring the average delay time of the road intersection according to the reference passing time and the actual passing time taken by each vehicle.
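As a hedged sketch of the overall computation (the function, its inputs, and the matching of vehicles across cameras by an id are illustrative assumptions, not details given by the application), the steps above might be combined as:

```python
def intersection_average_delay(first_cam_times: dict[str, float],
                               second_cam_times: dict[str, float],
                               blind_zone_time: float,
                               reference_time: float) -> float:
    """Average delay of the intersection from per-vehicle timings.

    first_cam_times / second_cam_times map an (assumed) vehicle id to the
    seconds that vehicle spent inside the respective camera's shooting
    range; blind_zone_time is the estimated time spent between the two
    ranges, and reference_time is the pre-measured free-flow transit time.
    """
    actual_times = []
    for vid, t_entry in first_cam_times.items():
        t_exit = second_cam_times.get(vid)
        if t_exit is None:
            continue  # vehicle was not re-identified by the exit camera
        actual_times.append(t_entry + blind_zone_time + t_exit)
    # Per-vehicle delay is the actual time minus the reference time;
    # the intersection's delay index is the mean over all vehicles.
    delays = [t - reference_time for t in actual_times]
    return sum(delays) / len(delays)
```

For example, two vehicles spending 4 s and 6 s in the entry range, 3 s and 5 s in the exit range, with a 2 s blind-zone estimate and an 8 s reference time, have delays of 1 s and 5 s, giving an average delay of 3 s.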
According to an embodiment of the present application, obtaining the real-time position and moving speed of each vehicle within the shooting range of the first camera by performing real-time recognition on the video captured by the first camera includes: performing vehicle recognition on the video data captured by the first camera to obtain two-dimensional plane position coordinates of each vehicle in the video data; and generating the three-dimensional longitude and latitude position coordinates and the moving speed of each vehicle according to the three-dimensional longitude and latitude position coordinates of the first camera and the two-dimensional plane position coordinates of each vehicle.
According to an embodiment of the present application, determining the actual passing time taken by each vehicle to pass through the road intersection according to the real-time position and moving speed of each vehicle within the first camera's shooting range and within the second camera's shooting range includes: calculating the time taken by each vehicle to pass through the first camera's shooting range according to its real-time position and moving speed within that range; calculating the time taken by each vehicle to pass through the second camera's shooting range according to its real-time position and moving speed within that range; determining the blind-zone time taken by each vehicle to pass through the blind zone between the first camera and the second camera; and determining the actual passing time of each vehicle according to the time taken within the first camera's shooting range, the time taken within the second camera's shooting range, and the blind-zone time.
According to an embodiment of the present application, determining the blind-zone time taken by each vehicle to pass through the blind zone between the first camera and the second camera includes: acquiring, in real time, a first video captured by the first camera, and recognizing the first video to determine the number of all vehicles passing through the first camera's shooting range; determining all slow vehicles among the vehicles according to the moving speed of each vehicle within the first camera's shooting range; summing the times taken by the slow vehicles to pass through the first camera's shooting range; and determining the ratio of the resulting sum to the number of all vehicles as the blind-zone time taken by each vehicle to pass through the blind zone between the first camera and the second camera.
According to an embodiment of the present application, determining the reference passing time, measured in advance, for a vehicle to pass through the road intersection includes: recognizing and analyzing video images captured by the first camera and the second camera within a preset time period to obtain the free-flow speeds for entering and exiting the road intersection; and determining the reference passing time for a vehicle passing through the road intersection in a free-flow scenario according to the free-flow speed, the distance of the first camera's shooting range, and the distance of the second camera's shooting range.
According to an embodiment of the present application, acquiring the average delay time of the road intersection based on the reference passing time and the actual passing time taken by each vehicle to pass through the road intersection includes: determining the delay time of each vehicle passing through the road intersection according to the reference passing time and the actual passing time taken by that vehicle; and acquiring the average delay time of the road intersection using the delay times of the vehicles and the number of all vehicles passing through the road intersection.
An embodiment of a second aspect of the present application provides a device for acquiring delay index data of a road intersection, where a first camera and a second camera are disposed at a traffic light of the road intersection, the first camera faces an entering lane of the flow direction controlled by the traffic light, and the second camera faces an exiting lane of that flow direction. The device includes: a perception identification module configured to obtain the real-time position and moving speed of each vehicle within the shooting range of the first camera by performing real-time recognition on the first camera's video, and to obtain the real-time position and moving speed of each vehicle within the shooting range of the second camera by performing real-time recognition on the second camera's video; a first determining module configured to determine the actual passing time taken by each vehicle to pass through the road intersection according to the real-time positions and moving speeds of the vehicles within the two shooting ranges; a second determining module configured to determine the reference passing time, measured in advance, for a vehicle to pass through the road intersection; and a delay index data acquisition module configured to acquire the average delay time of the road intersection according to the reference passing time and the actual passing time taken by each vehicle.
An embodiment of a third aspect of the present application provides an intelligent transportation system, including: a first camera and a second camera disposed at a traffic light of a road intersection, where the first camera faces an entering lane of the flow direction controlled by the traffic light and the second camera faces an exiting lane of that flow direction; and a server configured to obtain the real-time position and moving speed of each vehicle within the shooting range of the first camera and within the shooting range of the second camera by performing real-time recognition on the two cameras' video, determine the actual passing time taken by each vehicle to pass through the road intersection according to those real-time positions and moving speeds, and acquire the average delay time of the road intersection according to a reference passing time, measured in advance, and the actual passing time taken by each vehicle.
An embodiment of a fourth aspect of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to execute the method for acquiring delay index data of a road intersection according to the embodiment of the first aspect of the present application.
An embodiment of a fifth aspect of the present application provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method for acquiring delay index data of a road intersection according to the embodiment of the first aspect of the present application.
An embodiment of a sixth aspect of the present application provides a computer program product, where the computer program, when executed by a processor, implements the method for acquiring delay index data of a road intersection according to the embodiment of the first aspect of the present application.
An embodiment of the above application has the following advantages or benefits. A first camera and a second camera are arranged at each traffic light of the road intersection, where the first camera faces an entering lane of the flow direction controlled by the traffic light and the second camera faces an exiting lane of that flow direction. By recognizing the video of each camera, the real-time position and moving speed of each vehicle within the first camera's shooting range and within the second camera's shooting range are obtained; from these, the actual passing time taken by each vehicle to pass through the road intersection is determined, and the average delay time of the road intersection is then acquired from the pre-measured reference passing time and the actual passing times. Because cameras are arranged at the traffic lights and the average delay time is calculated from the video they collect, the whole process requires no manual on-site sampling and data collection, which greatly saves labor cost; moreover, the collected video can be aggregated at the signal-cycle level or over different time windows, ensuring both the diversity of the data and the usability of the method.
Other effects of the above-described optional implementations will be described below with reference to specific embodiments.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
First, it should be noted that, in the embodiments of the present application, a first camera and a second camera are disposed at a traffic light of a road intersection, where the first camera faces an entering lane of the flow direction controlled by the traffic light, and the second camera faces an exiting lane of that flow direction. For example, as shown in fig. 1, three cameras may be arranged for one flow direction controlled by a traffic light: there may be two second cameras, viewing the front-left and front-right respectively, and one first camera viewing from behind. Each camera collects real-time video of the scene within its own shooting range; recognition is performed on the video collected in real time, and the average delay of the urban road intersection is calculated from the recognition results.
Example one
As shown in fig. 2, the method for acquiring delay index data of a road intersection may include:
Step 210: recognize the video of the first camera in real time to obtain the real-time position and moving speed of each vehicle within the first camera's shooting range.
Optionally, vehicle identification is performed on video data acquired by the first camera to obtain a two-dimensional plane position coordinate of each vehicle in the video data, and a three-dimensional longitude and latitude position coordinate and a moving speed of each vehicle in a shooting range of the first camera are generated according to the three-dimensional longitude and latitude position coordinate of the first camera and the two-dimensional plane position coordinate of each vehicle.
That is, from the video stream captured by the first camera, the computer perception module can identify each vehicle's two-dimensional plane coordinates (for example, within a 1080 × 720 picture), and the vehicle's three-dimensional longitude and latitude position coordinates and moving speed are then obtained by conversion from the actual three-dimensional longitude and latitude position coordinates of the first camera and the identified two-dimensional plane coordinates of the vehicle.
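The application does not specify how the two-dimensional-to-three-dimensional conversion is performed; one common realization is a calibrated ground-plane homography. In the sketch below, the matrix `H` and its values are invented for illustration only; a real `H` would be calibrated from surveyed reference points in the camera's view.

```python
import math

# Illustrative 3x3 homography mapping 1080x720 pixel coordinates to
# east/north metres on the ground plane near the camera. These numbers
# are made up for the sketch; a real H comes from calibration.
H = ((0.05, 0.00, -27.0),
     (0.00, 0.07, -25.2),
     (0.00, 0.00,   1.0))

def pixel_to_ground(u: float, v: float) -> tuple[float, float]:
    """Project pixel (u, v) onto the ground plane (metres east, north)."""
    x, y, w = (row[0] * u + row[1] * v + row[2] for row in H)
    return x / w, y / w

def moving_speed(p0: tuple[float, float], p1: tuple[float, float],
                 dt: float) -> float:
    """Speed in m/s from two pixel detections taken dt seconds apart."""
    (x0, y0), (x1, y1) = pixel_to_ground(*p0), pixel_to_ground(*p1)
    return math.hypot(x1 - x0, y1 - y0) / dt
```

The east/north offsets can then be added to the camera's known longitude and latitude to obtain each vehicle's geodetic position, i.e., the three-dimensional longitude and latitude coordinate the text refers to.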
Step 220: recognize the video of the second camera in real time to obtain the real-time position and moving speed of each vehicle within the second camera's shooting range.
Optionally, vehicle identification is performed on the video data acquired by the second camera to obtain a two-dimensional plane position coordinate of each vehicle in the video data, and a three-dimensional longitude and latitude position coordinate and a moving speed of each vehicle within a shooting range of the second camera are generated according to the three-dimensional longitude and latitude position coordinate of the second camera and the two-dimensional plane position coordinate of each vehicle.
Step 230: determine the actual passing time taken by each vehicle to pass through the road intersection according to the real-time position and moving speed of each vehicle within the first camera's shooting range and within the second camera's shooting range.
Optionally, a first time taken by each vehicle to pass through the first camera's shooting range and a second time taken to pass through the second camera's shooting range are calculated from the real-time position and moving speed of each vehicle within the respective range; a blind-zone time taken by each vehicle to pass through the blind zone between the first camera and the second camera is estimated; and the actual passing time taken by each vehicle to pass through the road intersection is determined from the first time, the second time, and the blind-zone time. Because the time spent in the cameras' blind zone is estimated, and the actual passing time is calculated as the sum of the first time, the blind-zone time, and the second time, the calculated result better matches the real situation, which greatly improves the accuracy of the result.
Step 240: determine the reference passing time, measured in advance, for a vehicle to pass through the road intersection. In the embodiment of the present application, the reference passing time indicates the time a vehicle should take to pass through the road intersection in a free-flow scenario.
Optionally, the video images captured by the first camera and the second camera within a preset time period are recognized and analyzed to obtain the free-flow speeds at the entrance and exit of the road intersection, and the reference passing time for a vehicle passing through the road intersection in a free-flow scenario is then determined according to the free-flow speeds, the distance of the first camera's shooting range, and the distance of the second camera's shooting range.
As an example, the preset time period may be 5 to 7 a.m. over a week. The video images captured by the first camera and the second camera within that period are recognized and analyzed to obtain the free-flow speeds of vehicles entering and exiting the road intersection; the time a vehicle should take to pass through the road intersection in a free-flow scenario is then calculated from the free-flow speed, the distance of the first camera's shooting range, and the distance of the second camera's shooting range, and this time is taken as the reference passing time.
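Taking the inputs listed above at face value, the reference time reduces to the two shooting-range distances covered at the free-flow speed; a minimal sketch (function name and units are illustrative assumptions):

```python
def reference_passing_time(free_flow_speed_mps: float,
                           first_range_m: float,
                           second_range_m: float) -> float:
    """Time a vehicle should need at free-flow speed (m/s) to cover the
    first camera's shooting range plus the second camera's range."""
    return (first_range_m + second_range_m) / free_flow_speed_mps
```

For instance, with a 10 m/s free-flow speed and shooting ranges of 60 m and 40 m, the reference passing time is 10 s.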
Step 250: acquire the average delay time of the road intersection according to the reference passing time and the actual passing time taken by each vehicle to pass through the road intersection.
In the embodiment of the application, the delay time of each vehicle passing through the road intersection can be determined according to the reference passing time and the actual passing time of each vehicle passing through the road intersection, and the average delay time of the road intersection can be obtained by using the delay time of each vehicle passing through the road intersection and the number of all vehicles passing through the road intersection.
For example, the reference passing time may be subtracted from the actual passing time taken by each vehicle to pass through the road intersection; each resulting difference is the delay time of the corresponding vehicle, that is, delay time of a vehicle = actual passing time of the vehicle - reference passing time. The delay times of all vehicles passing through the road intersection are then summed, and the sum is divided by the number of all vehicles passing through the road intersection; the resulting value is the average delay time of the road intersection, that is, average delay time = (sum of the delay times of all vehicles) / (number of all vehicles).
With the method for acquiring delay index data of a road intersection according to the embodiment of the present application, a first camera and a second camera are arranged at each traffic light of the road intersection, where the first camera faces an entering lane of the flow direction controlled by the traffic light and the second camera faces an exiting lane of that flow direction. By recognizing the video of each camera, the real-time position and moving speed of each vehicle within the first camera's shooting range and within the second camera's shooting range are obtained; from these, the actual passing time taken by each vehicle to pass through the road intersection is determined, and the average delay time of the road intersection is then acquired from the pre-measured reference passing time and the actual passing times. Because cameras are arranged at the traffic lights and the average delay time is calculated from the video they collect, the whole process requires no manual on-site sampling and data collection, which greatly saves labor cost; moreover, the collected video can be aggregated at the signal-cycle level or over different time windows, ensuring both the diversity of the data and the usability of the method.
Example two
Fig. 3 is a schematic diagram according to a second embodiment of the present application. As shown in fig. 3, the method for acquiring delay index data of a road intersection may include:
Step 310: recognize the video of the first camera in real time to obtain the real-time position and moving speed of each vehicle within the first camera's shooting range.
Step 320: recognize the video of the second camera in real time to obtain the real-time position and moving speed of each vehicle within the second camera's shooting range.
It should be noted that, in the embodiment of the present application, the implementation of steps 310 and 320 may refer to the description of steps 210 and 220, and details are not repeated here.
Step 330: calculate the time taken by each vehicle to pass through the first camera's shooting range according to the real-time position and moving speed of each vehicle within that range.
Optionally, the moving distance of each vehicle in the first camera shooting range is calculated according to the real-time position of each vehicle in the first camera shooting range, and the time taken by each vehicle to pass through the first camera shooting range is calculated according to the moving distance of each vehicle in the first camera shooting range and the moving speed of each vehicle in the first camera shooting range. For example, the moving distance of the vehicle in the first camera shooting range can be calculated according to the position of the vehicle starting to enter the first camera shooting range and the position of the vehicle exiting the first camera shooting range, and the time taken by the vehicle to pass through the first camera shooting range can be calculated according to the moving distance and the moving speed of the vehicle.
Step 340: calculate the time taken by each vehicle to pass through the second camera's shooting range according to the real-time position and moving speed of each vehicle within that range.
Optionally, the moving distance of each vehicle in the second camera shooting range is calculated according to the real-time position of each vehicle in the second camera shooting range, and the time taken by each vehicle when passing through the second camera shooting range is calculated according to the moving distance and the moving speed of each vehicle in the second camera shooting range.
Step 350: determine the blind-zone time taken by each vehicle to pass through the blind zone between the first camera and the second camera.
Optionally, for the blind zone between the first camera and the second camera that neither camera can observe, the time taken by the slow vehicles participating in queuing to pass through the first camera's shooting range may be used to estimate the blind-zone time needed for a vehicle to pass through the blind zone.
As an example, as shown in fig. 4, determining the blind-zone time taken by each vehicle to pass through the blind zone between the first camera and the second camera may proceed as follows:
Step 410: acquire, in real time, a first video captured by the first camera, and recognize the first video to determine the number of all vehicles passing through the first camera's shooting range.
Step 420: determine all slow vehicles among the vehicles according to the moving speed of each vehicle within the first camera's shooting range.
Step 430: sum the times taken by all the slow vehicles to pass through the first camera's shooting range.
Step 440: determine the ratio of the resulting sum to the number of all vehicles as the blind-zone time taken by each vehicle to pass through the blind zone between the first camera and the second camera.
In this way, the times taken by the slow vehicles participating in queuing to pass through the first camera's shooting range are summed, and the sum is divided by the total number of all vehicles; the resulting ratio is the blind-zone time taken to pass through the blind zone between the first camera and the second camera. Because the blind-zone time is estimated from the times of the queuing (slow) vehicles within the first camera's shooting range, the estimate is closer to the real situation.
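Steps 410 to 440 can be sketched as below; the 2 m/s cut-off deciding which vehicles count as "slow" is an assumed parameter, not specified in the application:

```python
SLOW_SPEED_THRESHOLD_MPS = 2.0  # assumed queuing cut-off, not from the text

def blind_zone_time(first_cam_times: dict[str, float],
                    speeds: dict[str, float]) -> float:
    """Sum the first-camera transit times of the slow (queuing) vehicles
    and divide by the number of ALL vehicles the first camera counted."""
    slow_total = sum(t for vid, t in first_cam_times.items()
                     if speeds[vid] < SLOW_SPEED_THRESHOLD_MPS)
    return slow_total / len(first_cam_times)
```

For instance, if three vehicles spend 10 s, 4 s and 6 s in the first camera's range and only the first and third are below the threshold, the blind-zone time is (10 + 6) / 3 ≈ 5.33 s.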
Step 360: determine the actual passing time taken by each vehicle to pass through the road intersection according to the time taken within the first camera's shooting range, the time taken within the second camera's shooting range, and the blind-zone time.
As an example, the actual passing time of each vehicle = time taken within the first camera's shooting range + blind-zone time + time taken within the second camera's shooting range.
Step 370: determine the reference passing time, measured in advance, for a vehicle to pass through the road intersection.
Step 380: acquire the average delay time of the road intersection according to the reference passing time and the actual passing time taken by each vehicle to pass through the road intersection.
It should be noted that, in the embodiment of the present application, the implementation of steps 370 and 380 may refer to the description of steps 240 and 250, and details are not repeated here.
With the method for acquiring delay index data of a road intersection according to the embodiment of the present application, the time taken by each vehicle to pass through the first camera's shooting range and the time taken to pass through the second camera's shooting range are calculated from the vehicle's real-time position and moving speed within the respective range, the blind-zone time taken to pass through the blind zone between the two cameras is determined, and the actual passing time taken by each vehicle to pass through the road intersection is determined from these three times. Because the time spent in the cameras' blind zone is estimated, and the actual passing time is calculated as the sum of the first time, the blind-zone time, and the second time, the calculated result better matches the real situation, which greatly improves the accuracy of the result.
Fig. 5 is a schematic diagram according to a fourth embodiment of the present application. As shown in fig. 5, the delay index data acquisition device 500 for a road intersection may include: a perception identification module 510, a first determining module 520, a second determining module 530, and a delay index data acquisition module 540.
Specifically, the perception identification module 510 is configured to obtain the real-time position and moving speed of each vehicle within the shooting range of the first camera by performing real-time recognition on the first camera's video, and to obtain the real-time position and moving speed of each vehicle within the shooting range of the second camera by performing real-time recognition on the second camera's video.
As an example, the perception identification module 510 performs vehicle recognition on the video data captured by the first camera to obtain two-dimensional plane position coordinates of each vehicle in the video data, and generates the three-dimensional longitude and latitude position coordinates and moving speed of each vehicle according to the three-dimensional longitude and latitude position coordinates of the first camera and the two-dimensional plane position coordinates of each vehicle.
The first determining module 520 is configured to determine the actual passing time taken by each vehicle to pass through the road intersection according to the real-time position and moving speed of each vehicle within the first camera's shooting range and within the second camera's shooting range.
As an example, as shown in fig. 6, the first determining module 520 may include: a first calculation unit 521, a second calculation unit 522, a first determination unit 523, and a second determination unit 524. The first calculation unit 521 is configured to calculate the time taken by each vehicle to pass through the first camera's shooting range according to the real-time position and moving speed of each vehicle within that range; the second calculation unit 522 is configured to calculate the time taken by each vehicle to pass through the second camera's shooting range according to the real-time position and moving speed of each vehicle within that range; the first determination unit 523 is configured to determine the blind-zone time taken by each vehicle to pass through the blind zone between the first camera and the second camera; and the second determination unit 524 is configured to determine the actual passing time taken by each vehicle to pass through the road intersection according to the time taken within the first camera's shooting range, the time taken within the second camera's shooting range, and the blind-zone time.
In the embodiment of the present application, a specific implementation by which the first determination unit 523 determines the blind-zone time used when each vehicle passes through the blind zone between the first camera and the second camera may be as follows: acquire, in real time, a first video captured by the first camera, and recognize the first video to determine the number of all vehicles passing through the first camera's shooting range; determine all slow vehicles among those vehicles according to the moving speed of each vehicle within the first camera's shooting range; sum the times taken by all the slow vehicles to pass through the first camera's shooting range; and determine the ratio of the resulting sum to the number of all vehicles as the blind-zone time used when each vehicle passes through the blind zone between the first camera and the second camera.
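The blind-zone computation above can be sketched directly; the 3 m/s slow-vehicle threshold is an assumption, since the embodiment only says slow vehicles are identified from their moving speed:

```python
def blind_zone_time(vehicles, slow_speed_threshold_mps=3.0):
    """Blind-zone time per the embodiment: sum the first-camera-range
    crossing times of all 'slow' vehicles, then divide by the number
    of all vehicles that passed through the first camera's range.
    'vehicles' is a list of (avg_speed_mps, time_in_first_range_s)
    pairs; the threshold value is illustrative."""
    if not vehicles:
        return 0.0
    slow_sum = sum(t for speed, t in vehicles
                   if speed < slow_speed_threshold_mps)
    return slow_sum / len(vehicles)
```

Note that the divisor is the count of all vehicles, not only the slow ones, exactly as the ratio in the embodiment specifies.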
The second determination module 530 is used to determine a reference transit time, measured in advance, for a vehicle passing through the road intersection. As an example, the second determination module 530 performs recognition and analysis on the video images captured by the first camera and the second camera within a preset time period to obtain the free-flow speed corresponding to the entering and exiting lanes of the road intersection, and then determines the reference passing time for a vehicle passing through the road intersection in a free-flow scene according to the free-flow speed, the distance covered by the first camera's shooting range, and the distance covered by the second camera's shooting range.
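The free-flow reference time can be sketched as below; parameter names are illustrative, and the simple distance-over-speed form is an assumption consistent with the inputs the embodiment names (free-flow speed and the two shooting-range distances):

```python
def reference_transit_time(free_flow_speed_mps, first_range_m, second_range_m):
    """Reference (free-flow) passing time for the intersection: the
    total distance covered by the two camera shooting ranges divided
    by the free-flow speed measured from the video."""
    return (first_range_m + second_range_m) / free_flow_speed_mps
```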
The delay index data obtaining module 540 is configured to obtain the average delay time of the road intersection according to the reference passing time and the actual passing time used when each vehicle passes through the road intersection. As an example, the delay index data obtaining module 540 determines the delay time of each vehicle passing through the road intersection based on the reference passing time and the actual passing time of that vehicle, and then obtains the average delay time of the road intersection from the per-vehicle delay times and the number of all vehicles passing through the road intersection.
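The averaging step can be sketched as follows (function and parameter names are illustrative):

```python
def average_delay_time(reference_time_s, actual_times_s):
    """Per-vehicle delay is the actual passing time minus the
    free-flow reference time; the intersection's average delay is the
    total delay divided by the number of all vehicles that passed
    through the intersection."""
    if not actual_times_s:
        return 0.0
    total_delay = sum(t - reference_time_s for t in actual_times_s)
    return total_delay / len(actual_times_s)
```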
With the device for acquiring delay index data of a road intersection, a first camera and a second camera are disposed at each traffic light of the road intersection, the first camera facing the entering lane whose flow direction the traffic light controls and the second camera facing the exiting lane whose flow direction the traffic light controls. By recognizing the first camera and the second camera respectively, the real-time position and moving speed of each vehicle within each camera's shooting range are obtained; from these, the actual passing time used when each vehicle passes through the road intersection can be determined, and the average delay time of the road intersection is then obtained from the pre-measured reference passing time and the actual passing times. Because the cameras are arranged at the traffic lights of the road intersection and the average delay time is calculated from the video they collect, the whole process requires no manual sampling or data collection in a real scene beforehand, which greatly reduces labor cost; in addition, the video collected by the cameras can reach signal-cycle granularity, or different time-window granularities, which ensures the diversity of the data and the usability of the method.
Fig. 7 is a schematic diagram according to a sixth embodiment of the present application. As shown in fig. 7, the intelligent transportation system 700 may include: a first camera 710, a second camera 720, and a server 730. The first camera 710 and the second camera 720 may be disposed at a traffic light of a road intersection, with the first camera 710 facing the entering lane whose flow direction the traffic light controls, and the second camera 720 facing the exiting lane whose flow direction the traffic light controls.
The server 730 may be configured to: obtain the real-time position and moving speed of each vehicle within the shooting range of the first camera 710 by performing real-time recognition on the first camera 710; obtain the real-time position and moving speed of each vehicle within the shooting range of the second camera 720 by performing real-time recognition on the second camera 720; determine the actual transit time for each vehicle to pass through the road intersection according to the real-time position and moving speed of each vehicle within the shooting range of the first camera 710 and the real-time position and moving speed of each vehicle within the shooting range of the second camera 720; and obtain the average delay time of the road intersection according to a pre-measured reference transit time for a vehicle passing through the road intersection and the actual transit time of each vehicle.
That is, the server according to the embodiment of the present application may implement the method for acquiring delay indicator data at a road intersection according to any one of the embodiments described above.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 8, this is a block diagram of an electronic device for the method for acquiring delay indicator data of a road intersection according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 8, the electronic device includes: one or more processors 801, a memory 802, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing some of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 8, one processor 801 is taken as an example.
The memory 802 is a non-transitory computer-readable storage medium as provided herein. The memory stores instructions executable by at least one processor, so that the at least one processor executes the method for acquiring the delay index data of a road intersection provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the method for acquiring the delay index data of a road intersection provided by the present application.
As a non-transitory computer-readable storage medium, the memory 802 can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for acquiring delay index data of a road intersection in the embodiment of the present application (for example, the sensing and recognition module 510, the first determination module 520, the second determination module 530, and the delay index data acquisition module 540 shown in fig. 5). The processor 801 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 802, that is, implements the method for obtaining delay indicator data of a road intersection in the above method embodiments.
The memory 802 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the electronic device for the method for acquiring delay index data of a road intersection, and the like. Further, the memory 802 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 802 may optionally include memory remotely located relative to the processor 801, and such remote memory may be connected over a network to the electronic device for the method for acquiring delay index data of a road intersection. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the method for acquiring delay index data of a road intersection may further include: an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803, and the output device 804 may be connected by a bus or in other ways; in fig. 8, connection by a bus is taken as an example.
The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for the method for acquiring delay index data of a road intersection; examples include a touch screen, a keypad, a mouse, a trackpad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 804 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that, when executed by a processor, carry out the method for acquiring delay indicator data of a road intersection described in the embodiments above. The one or more computer programs are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiment of the present application, a camera is arranged at each traffic light of the road intersection, and the average delay time of the road intersection is calculated based on the video collected by the cameras. The whole process requires no manual sampling or data collection in a real scene beforehand, which greatly reduces labor cost; moreover, the video collected by the cameras can reach signal-cycle granularity, or different time-window granularities, which ensures the diversity of the data and the usability of the method.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and no limitation is imposed herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.