Disclosure of Invention
The invention aims to provide a visual simulation method and a visual simulation system for real-time traffic data, so as to address the above problems.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
a visual simulation method of real-time traffic data comprises the following steps:
building a digital twin city based on the real city;
receiving, in real time, a real-time road monitoring video stream corresponding to the digital twin city, and dividing a preset driving area in the digital twin city according to the monitoring area of the monitoring video stream;
dividing the monitoring video stream in time order into a plurality of videos of equal duration, wherein N frames of images overlap between any two adjacent time periods;
obtaining vehicle information in the video of each time period, storing the vehicle information of each vehicle into a different array, wherein each array records at least the starting frame number and the ending frame number of the vehicle, and storing all the arrays of one time period into one data table to obtain a plurality of data tables;
traversing the data tables sequentially in time order, starting the traversal of each data table with the (N+1)th frame as the first frame, reading the vehicle information in the data table, and mapping each vehicle in the digital twin city;
and when the last frame of a data table is reached, entering the next data table, starting its traversal with the (N+1)th frame as the first frame, continuing to map the vehicles in the digital twin city, and generating a continuous real-time traffic simulation video.
Further, the receiving a real-time road monitoring video stream corresponding to the digital twin city in real time, and dividing a preset driving area in the digital twin city according to the monitoring area of the monitoring video stream includes:
determining roads needing to be monitored in a real city;
receiving a monitoring video stream shot by the road in real time;
and positioning the digital twin city to a road in the monitoring video stream, and setting the road shot in the monitoring video stream picture as a preset driving area in the digital twin city.
Further, the dividing the monitoring video stream in time order into a plurality of videos of equal duration, where N frames of images overlap between any two adjacent time periods, includes:
setting the duration of each time interval as T and the number of overlapped frames as N;
determining a start timestamp and an end timestamp for each time period, wherein N frames of images overlap between the end timestamp of the previous time period and the start timestamp of the next time period;
and dividing the monitoring video stream into a plurality of videos of equal duration according to the start timestamp and the end timestamp of each time period.
Further, the obtaining the vehicle information in the video of each time period, storing the vehicle information of each vehicle into a different array, each array recording at least the starting frame number and the ending frame number of the vehicle, and storing all the arrays of one time period into one data table to obtain a plurality of data tables, includes:
utilizing an AI recognition algorithm to divide a video of a period of time into a plurality of frames of images, and recognizing vehicle information in each frame of image, wherein the vehicle information at least comprises the type, color and license plate number of a vehicle, pixel coordinates of the vehicle in the current frame of image and actual offset coordinates;
calculating the running speed of the vehicle in the current frame image by using the front and rear N frames of images of the current frame image, and tracking the running track of the vehicle;
storing the vehicle information of one vehicle into one array, each array comprising at least 8 groups of data: the starting frame number when the vehicle appears, the ending frame number when the vehicle disappears, the vehicle type, the vehicle color, the license plate number, the X-axis offset coordinate, the Y-axis offset coordinate and the driving speed; the X-axis offset coordinate, the Y-axis offset coordinate and the driving speed each contain the data of all frame images from the starting frame number to the ending frame number;
and obtaining a plurality of arrays, storing the arrays into a data table, and recording the total frame number of the data table.
Further, the upper left corner of the video picture is used as a coordinate origin;
selecting at least two characteristic points in a video picture as a reference point and a mark point, and calculating the actual distance of a unit pixel by using the reference point and the mark point;
determining a first pixel coordinate of the vehicle relative to a coordinate origin in a current frame image;
and calculating the offset coordinate of the vehicle, namely the actual distance of the vehicle relative to the mark point by using the first pixel coordinate and the actual distance of the unit pixel.
Further, the sequentially traversing the data tables in time order, starting the traversal of a data table with the (N+1)th frame as the first frame, reading the vehicle information in the data table, and mapping each vehicle in the digital twin city includes:
traversing the data table of one time period starting from the (N+1)th frame image, and reading the currently traversed frame number;
when traversing to the initial frame number of an array, reading the vehicle information in the array, generating a corresponding simulated vehicle, and transmitting the array to the simulated vehicle;
determining a coordinate position generated by a simulated vehicle in a digital city scene according to the offset coordinate of the vehicle, and when the coordinate position is outside a preset driving area, correcting the coordinate position of the simulated vehicle to generate the simulated vehicle;
and when the ending frame number is reached, destroying the simulated vehicles that disappear normally within the preset driving area.
Further, when the coordinate position is outside the preset driving area, the simulated vehicle is generated after correcting the coordinate position of the simulated vehicle, including:
if the coordinate position in the current frame image is outside the preset driving area, recalculating the offset coordinate according to the driving speed of the previous frame image, and continuing to generate the simulated vehicle at the coordinate position corresponding to the recalculated offset coordinate.
Further, the destroying, when the ending frame number is reached, of the simulated vehicles that disappear normally within the preset driving area includes:
when the ending frame number is reached, judging whether the simulated vehicle disappears outside the preset driving area;
if yes, destroying the simulated vehicle;
if not, recalculating an offset coordinate from the driving speed of the previous frame image;
judging whether a new simulated vehicle whose distance from the offset coordinate of the simulated vehicle is less than or equal to 1 m is generated in the current frame;
if so, giving the data of the new simulated vehicle to the simulated vehicle, continuously reading the data in the array, continuously generating the simulated vehicle according to the offset coordinates in the array, and destroying the new simulated vehicle;
and if not, continuing to generate the simulated vehicle at the offset coordinate.
Further, the entering the next data table when traversing to the last frame of the current data table, starting the traversal with the (N+1)th frame of the next data table as the first frame, continuing to map the vehicles in the digital twin city, and generating a continuous real-time traffic simulation video includes:
judging whether the last frame of the current data table has been traversed;
if yes, jumping to the (N+1)th frame of the next data table and continuing the traversal;
when the starting frame number of an array is reached, reading the vehicle information in the array, and judging whether the license plate number of the vehicle is the same as that of any simulated vehicle generated from the previous data table;
if not the same, generating a corresponding simulated vehicle and transmitting the array to the simulated vehicle;
if the same, assigning the array to the simulated vehicle with the same license plate number, so that the simulated vehicle continues to be generated in the digital twin city.
A visual simulation system of real-time traffic data, the system comprising:
a modeling unit: building a digital twin city based on the real city;
a receiving unit: receiving, in real time, the real-time road monitoring video stream corresponding to the digital twin city, and dividing a preset driving area in the digital twin city according to the monitoring area of the monitoring video stream;
a video stream dividing unit: dividing the monitoring video stream in time order into a plurality of videos of equal duration, wherein N frames of images overlap between any two adjacent time periods;
a storage unit: obtaining the vehicle information in the video of each time period, storing the vehicle information of each vehicle into a different array, wherein each array records at least the starting frame number and the ending frame number of the vehicle, and storing all the arrays of one time period into one data table to obtain a plurality of data tables;
a video generation unit: traversing the data tables sequentially in time order; when traversing a data table, starting the traversal with the (N+1)th frame as the first frame, reading the vehicle information in the data table, and mapping each vehicle in the digital twin city;
a video connection unit: when traversing to the last frame of a data table, entering the next data table, starting its traversal with the (N+1)th frame as the first frame, continuing to map the vehicles in the digital twin city, and generating a continuous real-time traffic simulation video.
The invention has the beneficial effects that:
1. The method receives the road monitoring video stream in real time; when the duration of the received monitoring video stream reaches a preset duration, the received stream is cut off and divided into one time period, i.e., the monitoring video stream received in real time is continuously divided into a plurality of time periods according to the preset duration. Meanwhile, once the video of the first time period has been divided, the vehicle information in the video is extracted, the vehicle information of each vehicle is stored separately in its own array, and all the arrays of one time period are stored into UE4 in the form of a data table; once a data table is formed, it can be traversed, and simulated vehicles are generated and mapped in the digital twin city. By the time the video of the first time period has been processed, the video of the second time period has also been divided, and the second video and the subsequent third and fourth videos are processed in the same way until transmission of the monitoring video stream stops.
2. According to the invention, the real-time monitoring video stream and the three-dimensional model of the monitored scene are accurately fused in real time, so that multiple monitoring video streams distributed at different positions and angles can be brought into a full-space three-dimensional scene with a unified spatial reference, enabling viewing, replay, monitoring-route tracking and target tracking of the monitoring video streams at any position and from any angle. Applied to city management, the method can provide important basic and operational data for traffic planning, traffic management and road maintenance departments, and provide important technical means and evidence for quickly correcting traffic violations and quickly solving hit-and-run and vehicle robbery cases, for example: suspect vehicle monitoring and control, hit-and-run vehicle tracking, abnormal events and accidents that the human eye cannot identify, illegal vehicles, and the like.
3. When the video is cut, N frames of images overlap between the videos of two adjacent time periods, and the inter-frame correlation between the preceding and following N frame images and the current image is used to determine whether a driving track belongs to the same vehicle, so as to track the vehicle, ensure that the vehicle information is correctly recorded in its array, and prevent errors in subsequent data reading that would interrupt the visual simulation data and cause intermittent simulation. Overlapping N frames of images between the videos of two adjacent time periods thus guarantees the quality of the simulated video.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers or letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example 1
As shown in fig. 1, the present embodiment provides a visual simulation method of real-time traffic data, which includes the following steps:
s1, building a digital twin city based on a real city;
specifically, a map of a city is obtained, and according to the map data and a real scale, a digital twin city is built in a development platform, where the development platform includes UE4, Unity, and OSG, and preferably, a UE4 development platform is adopted in this embodiment.
S2, receiving real-time road monitoring video streams corresponding to the digital twin city in real time, dividing a preset driving area in the digital twin city according to the monitoring area of the monitoring video streams, and automatically identifying whether a simulated vehicle is generated outside the preset driving area in the visualization process so as to correct error data;
based on the above embodiment, the S2 specifically includes:
S21, determining the roads needing to be monitored in the real city, and deploying monitoring equipment above the roads together with real-time network storage and a server, wherein the monitoring equipment includes aerial drones and road monitoring cameras; preferably, the cameras are arranged between the roads on the two sides, so that vehicles on both the left and right roads are shot as clearly as possible, and the head or tail of each vehicle needs to be shot so that the AI identification algorithm in the subsequent steps can identify the license plate number; the higher the definition, the more accurate the identification;
S22, receiving the monitoring video stream shot on the road in real time; specifically, according to the client request, the transmission protocol adopted by the monitoring video stream and a proper encoding and decoding mode are selected, and the real-time dynamic video stream of the camera is transmitted to the client frame by frame as requested;
s23, positioning the digital twin city to a road in the monitoring video stream, and setting the road shot in the monitoring video stream picture as a preset driving area in the digital twin city, wherein the part of the road extending out of the monitoring video stream does not belong to the preset driving area.
S3, dividing the monitoring video stream in time order into a plurality of videos of equal duration, wherein N frames of images overlap between any two adjacent time periods;
based on the above embodiment, the S3 specifically includes the following steps:
s31, setting the duration of each time interval as T and the number of overlapped frames as N;
S32, determining the start timestamp and the end timestamp of each time period, wherein N frames of images overlap between the end timestamp of the previous time period and the start timestamp of the next time period; it should be noted that the overlapping frames are repeated in both periods;
specifically, for example: if the start timestamp of the first time period is t_s1, its end timestamp is t_e1 = t_s1 + T; the duration of N frames of images is then calculated as t_N, so the start timestamp of the second time period is t_s2 = t_e1 - t_N and its end timestamp is t_e2 = t_s2 + T, and so on;
and S33, dividing the monitoring video stream into a plurality of videos of equal duration according to the start timestamp and the end timestamp of each time period.
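As a minimal sketch of the segmentation bookkeeping in S31-S33 (assuming a fixed frame rate; the function and variable names are illustrative, not part of the embodiment):

```python
def period_timestamps(t0, T, N, fps, num_periods):
    """Start/end timestamps for periods of duration T, where the last N
    frames of each period are repeated at the start of the next one."""
    t_overlap = N / fps              # duration covered by the N overlapping frames
    periods, t_start = [], t0
    for _ in range(num_periods):
        t_end = t_start + T
        periods.append((t_start, t_end))
        t_start = t_end - t_overlap  # next period starts N frames early
    return periods

# Example: 2 s periods at 30 frames/s with a 5-frame overlap
print(period_timestamps(0.0, 2.0, 5, 30, 3))
```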
In this embodiment, timing starts from the timestamp of the first received frame of the monitoring video stream; when the duration of the received video stream reaches T, the video of the first time period is divided and then processed by the subsequent steps S4-S6; at the same time, the video of the second time period is being divided, and it is processed after the video of the first time period, and so on. The video stream division process and the video processing process thus run synchronously, improving the processing efficiency of the video.
S4, obtaining vehicle information in the video of each time period, respectively storing the vehicle information of each vehicle into different arrays, wherein at least the initial frame number and the final frame number of the vehicle are recorded in each array, and all the arrays of one time period are stored into one data table to obtain a plurality of data tables;
based on the above embodiment, the S4 specifically includes the following steps:
S41, segmenting the video of one time period into a plurality of frame images by utilizing an AI (Artificial Intelligence) recognition algorithm, and recognizing the vehicle information in each frame image, wherein the vehicle information at least comprises the type, color and license plate number of the vehicle, and the pixel coordinates and actual offset coordinates of the vehicle in the current frame image;
specifically, the S41 includes the following steps:
S411, inputting the video of one time period into the AI identification algorithm, which automatically segments the monitoring video stream into single-frame images and stores them in time order; specifically, every frame of image can be kept, or frames can be stored at intervals, and the stored frame images are then used for the next identification operation;
S412, identifying all vehicles in each frame image, together with their types and colors, and labeling the identified vehicles;
specifically, the YOLO v3 algorithm is trained on an optimized CompCars data set; CompCars is currently the largest and most richly categorized public data set for evaluating fine-grained vehicle recognition, containing vehicle images collected from the web and from monitoring equipment: 136,726 web images covering 1,716 vehicle models from 163 automobile manufacturers, and 44,481 monitoring images of vehicle fronts covering 281 vehicle models. The trained YOLO v3 algorithm performs full-frame detection on the frame images, frames all recognized vehicles with rectangular boxes, and recognizes the type and color of each vehicle, wherein the vehicle types are self-defined; in this embodiment, the self-defined vehicle types include: off-road vehicle, car, minibus, truck, bus and motorcycle, and further recognized types can be added as actually needed.
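A minimal sketch of S411-S412, assuming a YOLO v3 network exported to Darknet files and loaded through OpenCV's DNN module; the file names are placeholders, and box decoding and non-maximum suppression are omitted:

```python
import cv2

# Hypothetical weights for a YOLO v3 detector trained on the custom vehicle classes
net = cv2.dnn.readNetFromDarknet("yolov3_vehicle.cfg", "yolov3_vehicle.weights")

cap = cv2.VideoCapture("period_001.mp4")   # video of one time period
frame_idx, detections = 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 5 == 0:                 # store/identify one frame out of every 5
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        outs = net.forward(net.getUnconnectedOutLayersNames())
        detections.append((frame_idx, outs))   # raw outputs; NMS still required
    frame_idx += 1
cap.release()
```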
S413, identifying the license plate number of the vehicle;
specifically, a sample set is made; based on the fused features of the video images, a target detection network pre-trained with the YOLO v3 algorithm is adopted, and after offline learning on the sample set, the license plate number of each vehicle in a frame image is detected and the vehicle and its license plate number are marked in association;
s414, determining pixel coordinates of the vehicle in each frame of image, and calculating actual offset coordinates according to the pixel coordinates;
specifically, the step S414 includes the following steps:
s4141, taking the upper left corner of the video picture as a coordinate origin O (0, 0);
S4142, selecting at least two feature points in the video picture as a mark point P1 and a reference point P2, and calculating the actual distance represented by a unit pixel using the mark point P1 and the reference point P2;
specifically, since the shooting angle of the monitoring video stream is fixed, the video picture of every frame image is the same. It is only necessary to capture any one frame of video image from the monitoring video stream and search it for feature points, i.e., points whose positions are easy to determine, such as: the crossing point of two lane boundaries, the midpoint or end point of a lane boundary, the connection point of a road sign with the ground, or the connection point of a street lamp with the ground; at least one feature point is used as the mark point, and the same position point is marked in the scene of the digital twin city, establishing a mapping relation between the video picture and the scene; the actual distance of a unit pixel is then calculated using the reference point and the mark point;
specifically, the actual distance of a unit pixel on the x-axis is calculated as:

λ_x(P) = (S_x / p_x) · γ_P (1)

where λ_x(P) is the actual length of a unit pixel in the x-axis direction at any pixel point P, S_x is the actual distance between P1 and P2 on the x-axis, p_x is the pixel distance between P1 and P2 in the x-axis direction, and γ_P is an adjustment parameter; because the video shooting angle is oblique, a unit pixel near the lens and a unit pixel far from the lens correspond to different actual distances, and the adjustment parameter γ_P compensates for this;
specifically, S_x can be obtained from the longitudes and latitudes of the two points by formulas (2)-(4):

a = sin²(Δφ/2) + cos φ1 · cos φ2 · sin²(Δλ/2) (2)
c = 2 · arcsin(√a) (3)
S = R · c (4)

where R is the radius of the earth, φ1 and φ2 are the latitudes of the two points, Δφ is their difference in latitude, and Δλ is the difference in longitude between the two points.
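A sketch of formulas (2)-(4) as the standard haversine distance, which matches the quantities named above (earth radius, two latitudes, longitude difference); treat the reconstruction itself as an assumption:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, R=6_371_000.0):
    """Great-circle distance in metres between two latitude/longitude points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((phi2 - phi1) / 2) ** 2
         + math.cos(phi1) * math.cos(phi2)
         * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return R * 2 * math.asin(math.sqrt(a))     # S = R * c, with c = 2*arcsin(sqrt(a))

# Two points roughly 30 m apart along a road (illustrative coordinates)
print(round(haversine_m(30.65860, 104.0647, 30.65887, 104.0647), 1))
```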
In particular, the adjustment parameter γ_P is calculated from H and y_P, where H is the total pixel length of the frame on the y-axis and y_P is the pixel coordinate of the pixel point P on the y-axis;
similarly, the actual distance of a unit pixel on the y-axis is calculated as:

λ_y(P) = (S_y / p_y) · γ_P

where λ_y(P) is the actual length of a unit pixel in the y-axis direction at any pixel point P, S_y is the actual distance between P1 and P2 on the y-axis, p_y is the pixel distance between P1 and P2 in the y-axis direction, and γ_P is the adjustment parameter;
Referring to fig. 5, P1 and P2 are respectively the mark point and the reference point selected in this embodiment; their pixel coordinates relative to the coordinate origin O are P1(1045, 475) and P2(26, 653).
S4143, determining the first pixel coordinate (x_P, y_P) of the vehicle relative to the coordinate origin in the current frame image;
S4144, calculating the offset coordinates (X_off, Y_off) of the vehicle, i.e., the actual distance of the vehicle relative to the mark point P1, from the first pixel coordinate, the actual distance of the unit pixel and the adjustment parameter γ_P.
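The pixel-to-metre mapping can be sketched as below; the exact form of the adjustment parameter γ_P is not reproduced in this text, so the form used here is an explicit assumption, as are the actual distances fed in:

```python
def unit_pixel_scale(s_actual, p_pixels, gamma):
    """Formula (1): actual distance of one pixel = (S / p) * gamma."""
    return s_actual / p_pixels * gamma

def gamma(H, y_p):
    # ASSUMPTION: perspective correction grows as the point moves up the
    # frame (away from the lens); the embodiment's exact formula is not shown.
    return H / max(y_p, 1)

def offset_coords(x_p, y_p, mark_px, lam_x, lam_y):
    """Offset (X_off, Y_off) of the vehicle relative to the mark point P1."""
    x1, y1 = mark_px
    return ((x_p - x1) * lam_x, (y_p - y1) * lam_y)

H = 720                                       # assumed total pixel length on the y-axis
# P1(1045, 475), P2(26, 653): pixel gaps of 1019 (x) and 178 (y);
# the 30.1 m and 12.0 m actual gaps are assumed for illustration only.
lam_x = unit_pixel_scale(30.1, 1019, gamma(H, 475))
lam_y = unit_pixel_scale(12.0, 178, gamma(H, 475))
print(offset_coords(900, 500, (1045, 475), lam_x, lam_y))
```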
S42, calculating the running speed of the vehicle in the current frame image by using the front and back N frame images of the current frame image, and tracking the running track of the vehicle;
specifically, the S42 includes the following steps:
s421, respectively obtaining offset coordinates of the vehicle in the front frame image and the back frame image of the current frame;
in this embodiment, the offset coordinate of the first 5 frames (F-5) and the offset coordinate of the last 5 frames (F + 5) of the current frame F are obtained;
S422, calculating the driving speed of the vehicle in the current frame image using the displacement-velocity formula v = x / t, where v is the driving speed of the vehicle in the current frame image, x is the distance between the offset coordinate of the 5th preceding frame (F - 5) and the offset coordinate of the 5th following frame (F + 5), and t is the time interval between those two frame images;
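A sketch of S421-S422 under the embodiment's 5-frame window (assuming offset coordinates in metres and a 30 frames/s stream):

```python
def speed_at_frame(offsets, f, fps=30.0, n=5):
    """v = x / t using the offset coordinates of frames F-5 and F+5 around F."""
    (x0, y0), (x1, y1) = offsets[f - n], offsets[f + n]
    x = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5   # displacement in metres
    t = 2 * n / fps                                # time between the two frames
    return x / t

# Example: a vehicle advancing 0.5 m per frame -> 15 m/s
offsets = [(0.5 * i, 0.0) for i in range(60)]
print(speed_at_frame(offsets, 10))
```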
S423, tracking the driving track of the vehicle with the STRCF algorithm using the preceding N consecutive frame images (it should be noted that if frames are stored at an interval of 5 frames, the preceding N/5 consecutive stored frame images are used; the same applies below to "the preceding N frames"); in this embodiment, the preceding 5 consecutive frame images are used, and the driving track of each vehicle is drawn in the frame images; referring to fig. 6, not only is the driving track of each vehicle drawn, but the driving speed and offset coordinates of each vehicle are also drawn;
specifically, the STRCF algorithm learns the correlation filter of the t-th frame by minimizing:

$$\arg\min_{f}\ \frac{1}{2}\Big\|\sum_{e=1}^{E} x_{t}^{e} * f^{e} - y_{t}\Big\|^{2} + \frac{1}{2}\sum_{e=1}^{E}\big\|w \cdot f^{e}\big\|^{2} + \frac{\mu}{2}\big\|f - f_{t-1}\big\|^{2}$$

where E is the total number of feature maps, the feature maps being those of the monitoring video stream fusing convolutional neural features and HOG features; t = 1, ..., T, with T the total number of frames and x_t the t-th frame image; x_t^e is the e-th feature map of the t-th frame image; f^e is the learned correlation filter for the e-th feature map; y_t is the label of the t-th frame image; w is the spatial regularization weight function; f is the correlation filter learned at the t-th frame and f_{t-1} is the correlation filter learned at the (t-1)-th frame; μ is the temporal regularization factor; the operator · denotes the Hadamard product, * denotes the convolution operation, and ‖·‖ denotes the modulus of a vector.
S43, storing the vehicle information of one vehicle into one array, each array comprising at least 8 groups of data: the starting frame number when the vehicle appears, the ending frame number when the vehicle disappears, the vehicle type, the vehicle color, the license plate number, the X-axis offset coordinate (X_off), the Y-axis offset coordinate (Y_off) and the driving speed; the X-axis offset coordinate, the Y-axis offset coordinate and the driving speed each contain the data of all frame images from the starting frame number to the ending frame number;
S44, obtaining a plurality of arrays, storing the arrays into a data table, and recording the total frame number of the data table; after the monitoring video stream has been processed by the AI recognition algorithm, a csv file storing the vehicle information is obtained, and the csv file is stored into UE4 in the form of a data table;
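One plausible layout for the csv data table described in S43-S44 (the column names and the ';'-joined per-frame lists are illustrative choices, not prescribed by the embodiment):

```python
import csv

FIELDS = ["start_frame", "end_frame", "type", "color", "plate",
          "x_offsets", "y_offsets", "speeds"]

def write_data_table(path, vehicle_arrays, total_frames):
    """Write one time period's vehicle arrays to a csv file for import into UE4."""
    with open(path, "w", newline="") as fh:
        w = csv.writer(fh)
        w.writerow(["total_frames", total_frames])   # recorded per data table
        w.writerow(FIELDS)
        for v in vehicle_arrays:
            w.writerow([v["start_frame"], v["end_frame"], v["type"],
                        v["color"], v["plate"],
                        ";".join(map(str, v["x_offsets"])),
                        ";".join(map(str, v["y_offsets"])),
                        ";".join(map(str, v["speeds"]))])
```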
S45, repeating the above steps and storing the vehicle information of each time period into a data table to obtain a plurality of data tables D. The data tables are generated continuously: after the first data table is generated, step S5 starts and the data in the first data table are read to perform visual simulation in the digital twin city. Therefore, the time difference between the simulated video generated in the digital twin city and the video of the real road is the processing time of the video of one time period plus T, and with step S4 of this embodiment the processing speed can be increased to 0.2 s per frame.
A specific example: set the duration of the video of one time period to T = 2 s and the frame rate of the monitoring video stream to 30 frames/s; then one video contains 60 frames of images, and storing one frame out of every 5 frames gives 12 stored frames. The time required to process one video is then calculated to be 2.4 s, so the time difference between the simulated video and the video of the real road is 4.4 s.
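Restating the arithmetic of this example:

```latex
\frac{60\ \text{frames}}{5} = 12\ \text{stored frames},\qquad
12 \times 0.2\,\text{s} = 2.4\,\text{s of processing},\qquad
T + 2.4\,\text{s} = 2\,\text{s} + 2.4\,\text{s} = 4.4\,\text{s}.
```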
S5, traversing the data tables sequentially in time order; when traversing a data table, starting the traversal with the (N+1)th frame as the first frame, reading the vehicle information in the data table, and mapping each vehicle in the digital twin city;
referring to fig. 2, based on the above embodiment, the step S5 specifically includes the following steps:
S51, traversing the data table of one time period starting from the (N+1)th frame image, and reading the currently traversed frame number;
as described in step S42, the driving speed of the vehicle in the current frame image is calculated using the preceding and following N frame images, and the driving track of the vehicle is tracked using the preceding N consecutive frames, so two adjacent videos need to overlap by N frame images. During the visualization process, to avoid generating overlapping video, the data in one data table are traversed with the (N+1)th frame image as the first frame (if images are stored at an interval of 5 frames, the traversal should start with the (N/5+1)th stored frame image as the first frame).
S52, when traversing to the initial frame number of an array, reading the vehicle information in the array, generating a corresponding simulated vehicle, and transmitting the array to the simulated vehicle; preferably, the method further comprises creating a vehicle identification control object for reading the associated vehicle information in the data table;
specifically, a SetTimerByEvent node in UE4 is used to cycle through a data table, recording on each cycle the current number of cycles, i.e., the number of frames traversed; when the number of frames traversed equals the starting frame number of some array, processing of the vehicle associated with that starting frame number begins and a corresponding simulated vehicle is generated for it: according to the model and color in the vehicle information, the simulated vehicle calls the corresponding model and color from a preset library, which pre-stores models of off-road vehicles, cars, minibuses, trucks, buses and motorcycles as well as a plurality of colors, so that a vehicle similar to the real vehicle is generated from the data in the array.
Specifically, after the simulated vehicle is generated, it receives the array data of the relevant vehicle; the array passed at generation time contains all the data required for the entire movement process of the vehicle after it is generated.
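In Blueprint this loop is driven by a SetTimerByEvent timer; below is a language-neutral sketch of the same traversal logic, where the table, world and vehicle objects are hypothetical stand-ins for the UE4 structures:

```python
def traverse_table(table, world, n_skip):
    """Traverse one data table starting at the (N+1)th frame (index n_skip),
    spawning a simulated vehicle whenever a starting frame number is reached."""
    active = {}                                   # license plate -> simulated vehicle
    for frame in range(n_skip, table.total_frames):
        for v in table.vehicles:
            if v.start_frame == frame:            # vehicle appears: spawn it
                sim = world.spawn(v.type, v.color, v.plate)
                sim.receive_array(v)              # full movement data handed over once
                active[v.plate] = sim
            elif v.end_frame == frame and v.plate in active:
                active[v.plate].on_end_frame()    # destroy or hand over, see S54
        for sim in active.values():
            sim.step(frame)                       # move to this frame's offset coords
    return active
```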
S53, determining the coordinate position at which the simulated vehicle is generated in the digital city scene according to the offset coordinates of the vehicle. Specifically, after a virtual vehicle (vehicle A) is generated, a timed-loop method declared in the array is executed to traverse the data in the array, i.e., it is executed once per frame interval, reading the X-axis offset coordinate (X_off), the Y-axis offset coordinate (Y_off) and the driving speed of vehicle A in the current frame image, locating to the offset coordinates (X_off, Y_off) in the digital twin city scene, and generating the simulated vehicle with those offset coordinates as the center point; that is, at whatever position vehicle A is in the current frame of the video image, the simulated vehicle is generated at the corresponding position in the digital twin city scene, and at the same time the driving speed of the current frame is marked above the simulated vehicle. The current frame number is recorded at each execution, and the traversal ends when the ending frame number is reached. It should be noted that the given license plate number is generated together with the simulated vehicle and moves along with the driving track of the vehicle, but the license plate number is not read again during the subsequent traversal of the array.
When the coordinate position is outside the preset driving area, correcting the coordinate position of the simulated vehicle to generate the simulated vehicle;
specifically, the data obtained by the AI recognition algorithm may contain unreasonable driving trajectories, for example the traced driving track of a vehicle crossing a lawn or colliding with an obstacle, and such data need to be rationalized. When the vehicle is about to drive out of the preset driving area or collide with an obstacle, reading of the real-time data is cut off immediately and the simulated vehicle is moved using simulated data, i.e., driven toward a set lane.
Referring to fig. 3, based on the above embodiment, the step S53 specifically includes the following steps:
judging whether the offset coordinate of the simulated vehicle in the current frame is within the preset driving area;
if yes, generating the simulated vehicle according to the read data; specifically, locating to the scene corresponding to the monitoring video stream in the digital twin city, finding in that scene the point corresponding to the mark point P1 of the video picture, and calculating the offset coordinates with the mark point P1 as the origin to generate the simulated vehicle;
if not, the offset coordinates need to be corrected: the offset coordinates are recalculated according to the driving speed of the previous frame image, and the simulated vehicle continues to be generated.
Preferably, the license plate number and the running speed are displayed above the generated simulated vehicle right above the simulated vehicle, please refer to fig. 7;
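A sketch of the correction in S53, assuming the preset driving area exposes a containment test and that the corrected position is dead-reckoned from the previous frame's speed; the heading vector toward the set lane is an assumed input:

```python
def place_vehicle(sim, offset, prev_offset, prev_speed, heading, dt, area):
    """Use the recorded offset if it lies inside the driving area; otherwise
    recompute it from the previous frame's speed (data rationalization)."""
    if area.contains(offset):
        sim.move_to(offset)                       # real data is reasonable
    else:
        dx, dy = heading                          # unit vector toward the set lane
        sim.move_to((prev_offset[0] + prev_speed * dt * dx,
                     prev_offset[1] + prev_speed * dt * dy))
```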
S54, when the ending frame number is reached, destroying the simulated vehicles that disappear normally within the preset driving area. Specifically, because the AI recognition algorithm is used to recognize vehicles, recognition may be discontinuous (i.e., a vehicle is lost and then recognized again), so the same vehicle may have multiple arrays; since each array generates one simulated vehicle, multiple simulated vehicles would otherwise be generated for the same vehicle.
Referring to fig. 4, based on the above embodiment, the S54 specifically includes the following steps:
S541, in the ending frame image, judging whether the simulated vehicle disappears outside the preset driving area, i.e., whether the offset coordinate of the simulated vehicle in the ending frame image is outside the driving area;
if yes, destroying the simulated vehicle;
if not, continuing to calculate an offset coordinate from the driving speed of the previous frame image;
s542, judging whether a new simulated vehicle is generated in the current frame, wherein the distance between the new simulated vehicle and the offset coordinate of the simulated vehicle is less than or equal to 1 m;
if so, giving the data of the new simulated vehicle to the simulated vehicle, continuously reading the data in the array, continuously generating the simulated vehicle according to the offset coordinates in the array, and destroying the new simulated vehicle;
if not, continuing to generate the simulated vehicle at the offset coordinate;
the above steps S541 to S542 are repeated.
Specifically: when the driving track of a vehicle (vehicle B) is interrupted, vehicle B continues to move at its last driving speed; if a simulated vehicle is newly generated in some later frame image and the position of the new simulated vehicle is very close to that of vehicle B, then vehicle B has been recognized again, so the array data of the new simulated vehicle are given to vehicle B, the new simulated vehicle is deleted, and vehicle B continues to move along the driving track in the newly obtained data.
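A sketch of the S54 hand-over for vehicle B; the objects and the 1 m threshold follow the text above, everything else is illustrative:

```python
def on_end_frame(sim, frame, new_spawns, max_gap_m=1.0):
    """At a vehicle's ending frame: destroy it if it left the driving area
    normally, else keep it moving and absorb a nearby re-detection."""
    if not sim.area.contains(sim.offset):
        sim.destroy()                             # disappeared outside the area
        return
    sim.dead_reckon()                             # keep moving at the last speed
    for new in new_spawns.get(frame, []):         # new vehicles spawned this frame
        if distance(new.offset, sim.offset) <= max_gap_m:
            sim.receive_array(new.array)          # adopt the re-detected track
            new.destroy()                         # remove the duplicate vehicle
            break

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
```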
S6, when traversing to the last frame of the data table, entering the next data table, starting the traversal with the (N+1)th frame of the next data table as the first frame, and continuing to map the vehicles in the digital twin city to generate a continuous real-time traffic simulation video;
based on the above embodiment, the S6 specifically includes the following steps:
when traversing to the starting frame number of an array in the next data table, reading the vehicle information in the array, and judging whether the license plate number of the vehicle is the same as that of any simulated vehicle generated from the previous data table;
if not the same, generating a corresponding simulated vehicle and transmitting the array to the simulated vehicle;
if the same, assigning the array to the simulated vehicle with the same license plate number, so that the simulated vehicle continues to be generated in the digital twin city.
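The cross-table continuity of S6 can be sketched as a license-plate lookup; the 'active' map carries the simulated vehicles created from the previous data table, and the names are illustrative:

```python
def start_next_table(next_table, world, active, n_skip):
    """Continue the simulation into the next data table from its (N+1)th frame,
    reusing any simulated vehicle whose license plate already exists."""
    for frame in range(n_skip, next_table.total_frames):
        for v in next_table.vehicles:
            if v.start_frame != frame:
                continue
            sim = active.get(v.plate)
            if sim is None:                       # plate not seen before: new car
                sim = world.spawn(v.type, v.color, v.plate)
                active[v.plate] = sim
            sim.receive_array(v)                  # same plate: same car continues
```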
Through step S6, any two consecutive data tables can generate continuous and smooth video in the digital twin city; while the client continuously receives the monitoring video stream, the stream is continuously divided, vehicle information is extracted and stored into data tables, and at the same time the data in the data tables are traversed and virtual vehicles are generated in the digital twin city, realizing real-time simulation of the monitoring video stream.
Because the data volume is very large, after the vehicle data in the monitoring video stream have been extracted, the monitoring video stream is deleted to reduce the storage pressure on the client.
Example 2
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides a system for visualizing and simulating real-time traffic data, and the device for visualizing and simulating real-time traffic data described below and the method for visualizing and simulating real-time traffic data described above may be referred to in a corresponding manner.
The system comprises the following modules:
a modeling unit: building a digital twin city based on the real city;
a receiving unit: receiving, in real time, the real-time road monitoring video stream corresponding to the digital twin city, and dividing a preset driving area in the digital twin city according to the monitoring area of the monitoring video stream;
a video stream dividing unit: dividing the monitoring video stream in time order into a plurality of videos of equal duration, wherein N frames of images overlap between any two adjacent time periods;
a storage unit: obtaining the vehicle information in the video of each time period, storing the vehicle information of each vehicle into a different array, wherein each array records at least the starting frame number and the ending frame number of the vehicle, and storing all the arrays of one time period into one data table to obtain a plurality of data tables;
a video generation unit: traversing the data tables sequentially in time order; when traversing a data table, starting the traversal with the (N+1)th frame as the first frame, reading the vehicle information in the data table, and mapping each vehicle in the digital twin city;
a video connection unit: when traversing to the last frame of a data table, entering the next data table, starting its traversal with the (N+1)th frame as the first frame, continuing to map the vehicles in the digital twin city, and generating a continuous real-time traffic simulation video.
It should be noted that, regarding the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated herein.
Example 3
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides an electronic device for visual simulation of real-time traffic data, and the electronic device described below and the visual simulation method of real-time traffic data described above may correspondingly refer to each other.
The electronic device may include: a processor, a memory. The electronic device may also include one or more of a multimedia component, an input/output (I/O) interface, and a communication component.
The processor is used for controlling the overall operation of the electronic equipment so as to complete all or part of the steps in the visual simulation method of the real-time traffic data. The memory is used to store various types of data to support operation at the electronic device, which may include, for example, instructions for any application or method operating on the electronic device, as well as application-related data such as contact data, messaging, pictures, audio, video, and so forth. The Memory may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk. The multimedia components may include a screen and an audio component. Wherein the screen may be, for example, a touch screen and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in a memory or transmitted through a communication component. The audio assembly also includes at least one speaker for outputting audio signals. The I/O interface provides an interface between the processor and other interface modules, such as a keyboard, mouse, buttons, etc. These buttons may be virtual buttons or physical buttons. The communication component is used for carrying out wired or wireless communication between the electronic equipment and other equipment. Wireless communication, such as Wi-Fi, bluetooth, Near Field Communication (NFC), 2G, 3G or 4G, or a combination of one or more of them, so that the corresponding communication component may include: Wi-Fi module, bluetooth module, NFC module.
In an exemplary embodiment, the electronic Device may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-mentioned visual simulation method of real-time traffic data.
In another exemplary embodiment, a computer-readable storage medium is also provided comprising program instructions which, when executed by a processor, implement the steps of the above-described method for visual simulation of real-time traffic data. For example, the computer readable storage medium may be the above-mentioned memory including program instructions executable by a processor of an electronic device to perform the above-mentioned visual simulation method of real-time traffic data.
Example 4
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides a readable storage medium, and a readable storage medium described below and a visualization simulation method of real-time traffic data described above may be correspondingly referred to each other.
A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for visual simulation of real-time traffic data of the above-mentioned method embodiments.
The readable storage medium may be a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and various other readable storage media capable of storing program codes.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.