CN112991742A - Visual simulation method and system for real-time traffic data - Google Patents

Visual simulation method and system for real-time traffic data
Download PDF

Info

Publication number
CN112991742A
CN112991742A
Authority
CN
China
Prior art keywords
vehicle
frame
data table
real time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110427414.2A
Other languages
Chinese (zh)
Other versions
CN112991742B (en)
Inventor
罗德宁
高旻
张严辞
亢林焘
何轶
郭美
段强
陶李
彭林春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Jianshan Technology Co ltd
Original Assignee
Sichuan Jianshan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Jianshan Technology Co ltd
Priority to CN202110427414.2A
Publication of CN112991742A
Application granted
Publication of CN112991742B
Status: Active
Anticipated expiration

Abstract

The invention relates to the technical field of intelligent traffic, and in particular to a visual simulation method and system for real-time traffic data, comprising the following steps: receiving in real time the road monitoring video streams corresponding to a digital twin city; dividing the monitoring video stream chronologically into a plurality of videos of equal duration; acquiring the vehicle information in the video of each time period, and storing all arrays of the time period into a data table; when traversing to a data table, starting the traversal with the (N+1)-th frame as the first frame and reading the vehicle information in the data table; and when traversing to the last frame of the data table, entering the next data table, starting the traversal with its (N+1)-th frame as the first frame, continuously mapping the vehicles in the digital twin city, and generating a continuous real-time traffic simulation video. The invention improves the processing efficiency of the monitoring video stream, reduces the delay of the visual display, ensures the continuity of the simulated video, and reduces the occurrence of data interruption.

Description

Visual simulation method and system for real-time traffic data
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a visual simulation method and system for real-time traffic data.
Background
With the increasingly busy economic trade and social activity of China's cities, urban traffic has grown at an unprecedented rate; at present, the traffic problems of Chinese cities, especially large cities, are quite prominent, and city management has become more and more complex. In the process of managing cities, road monitoring systems play an important role in public security and control. A city's road monitoring points are mainly distributed at road intersections and key road sections with concentrated traffic and pedestrian flow; they automatically record, all year round, the composition, flow distribution and violation behavior of vehicles running on the roads, and provide important basic and operational data for traffic planning, traffic management and road maintenance departments.
While the number of road monitoring points keeps growing, monitoring systems face a serious problem: the massive videos are dispersed and isolated, their viewing angles are incomplete, and their positions are unclear, so the video pictures cannot effectively present the real geographic scene, nor support effective and intuitive traffic information extraction, vehicle tracking and command scheduling. With the appearance of the "video + GIS" application form, monitoring video stream data are combined with three-dimensional model data of the monitored scene: the monitoring video stream can dynamically display the changes of the real scene in real time, while the three-dimensional model can accurately and truly reflect the spatial characteristics of the real world, thereby solving the above problems.
Overlaying real-time monitoring video streams onto a GIS scene, however, raises new "real-time" technical problems, including: 1. because the data volume of the real-time monitoring video stream is large, the processing efficiency of the monitoring video stream must be improved so that real-time visual display is achieved with as little delay as possible; 2. because the video stream is monitored in real time, the continuity of the video must be ensured during real-time visual display, reducing the occurrence of data interruption.
Disclosure of Invention
The invention aims to provide a visual simulation method and a visual simulation system for real-time traffic data, so as to alleviate the above problems.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
a visual simulation method of real-time traffic data comprises the following steps:
building a digital twin city based on the real city;
receiving real-time road monitoring video streams corresponding to the digital twin cities in real time, and dividing preset running areas in the digital twin cities according to the monitoring areas of the monitoring video streams;
dividing the monitoring video stream chronologically into a plurality of videos of equal duration, with N frames of images overlapping between any two adjacent time periods;
the method comprises the steps of obtaining vehicle information in videos of each time period, storing the vehicle information of each vehicle into different arrays respectively, wherein at least a starting frame number and a stopping frame number of the vehicle are recorded in each array, and storing all the arrays of one time period into one data table to obtain a plurality of data tables;
sequentially traversing the data table according to the time sequence, starting traversing by taking the (N + 1) th frame as a first frame when traversing to the data table, reading the vehicle information in the data table, and mapping each vehicle in the digital twin city;
and when traversing to the last frame of the data table, entering the next data table, starting traversing by taking the (N + 1) th frame of the data table as the first frame, continuously mapping the vehicle in the digital twin city, and generating a continuous real-time traffic simulation video.
Further, the receiving a real-time road monitoring video stream corresponding to the digital twin city in real time, and dividing a preset driving area in the digital twin city according to the monitoring area of the monitoring video stream includes:
determining roads needing to be monitored in a real city;
receiving a monitoring video stream shot by the road in real time;
and positioning the digital twin city to a road in the monitoring video stream, and setting the road shot in the monitoring video stream picture as a preset driving area in the digital twin city.
Further, the dividing the monitoring video stream chronologically into a plurality of videos of equal duration, with N frames of images overlapping between any two adjacent time periods, includes:
setting the duration of each time interval as T and the number of overlapped frames as N;
determining a start timestamp Ts and a termination timestamp Te for each time period, where N frames of images overlap between the termination timestamp of the previous time period and the start timestamp of the next time period;
dividing the monitoring video stream into a plurality of videos of equal duration according to the start timestamp Ts and the termination timestamp Te of each time interval.
Further, the obtaining of the vehicle information in the video of each time interval, storing the vehicle information of each vehicle into a separate array, each array recording at least the starting frame number and the ending frame number of the vehicle, and storing all the arrays of one time interval into one data table to obtain a plurality of data tables, includes:
utilizing an AI recognition algorithm to divide a video of a period of time into a plurality of frames of images, and recognizing vehicle information in each frame of image, wherein the vehicle information at least comprises the type, color and license plate number of a vehicle, pixel coordinates of the vehicle in the current frame of image and actual offset coordinates;
calculating the running speed of the vehicle in the current frame image by using the front and rear N frames of images of the current frame image, and tracking the running track of the vehicle;
storing vehicle information of a vehicle into an array, each array at least comprising 8 groups of data, respectively: the starting frame number when the vehicle appears, the ending frame number when the vehicle disappears, the vehicle type, the vehicle color, the license plate number, the X-axis offset coordinate, the Y-axis offset coordinate and the running speed; the X-axis offset coordinate, the Y-axis offset coordinate and the running speed all comprise data of all frame images from a starting frame number to an ending frame number;
and obtaining a plurality of arrays, storing the arrays into a data table, and recording the total frame number of the data table.
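The per-vehicle array and per-period data table described above can be modeled as small record types. A minimal sketch in Python; the field and class names (`start_frame`, `DataTable`, etc.) are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class VehicleArray:
    # The 8 groups of data recorded for one vehicle in one time period.
    start_frame: int               # frame number when the vehicle first appears
    end_frame: int                 # frame number when the vehicle disappears
    vehicle_type: str
    color: str
    plate: str
    # Per-frame data covering every frame from start_frame to end_frame:
    x_offsets: list = field(default_factory=list)  # X-axis offset coordinates
    y_offsets: list = field(default_factory=list)  # Y-axis offset coordinates
    speeds: list = field(default_factory=list)     # running speed per frame

@dataclass
class DataTable:
    # One data table holds all vehicle arrays of one time period,
    # together with the total frame count of that period.
    total_frames: int
    arrays: list = field(default_factory=list)

table = DataTable(total_frames=750)
table.arrays.append(VehicleArray(10, 120, "car", "white", "川A12345",
                                 x_offsets=[0.0], y_offsets=[0.0], speeds=[8.3]))
```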
Further, the upper left corner of the video picture is used as a coordinate origin;
selecting at least two characteristic points in a video picture as a reference point and a mark point, and calculating the actual distance of a unit pixel by using the reference point and the mark point;
determining a first pixel coordinate of the vehicle relative to a coordinate origin in a current frame image;
and calculating the offset coordinate of the vehicle, namely the actual distance of the vehicle relative to the mark point by using the first pixel coordinate and the actual distance of the unit pixel.
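The conversion above (first pixel coordinate plus actual distance of a unit pixel giving the offset relative to the mark point) can be sketched as follows; the helper name and the numeric values are illustrative, and a single per-axis unit-pixel distance is assumed:

```python
def pixel_to_offset(px, py, unit_dx, unit_dy, mark_px, mark_py):
    """Offset (in metres) of the vehicle relative to the mark point, computed
    from its pixel coordinate relative to the frame's top-left origin and the
    actual distance represented by one pixel on each axis."""
    return ((px - mark_px) * unit_dx, (py - mark_py) * unit_dy)

# A vehicle at pixel (400, 300), mark point at pixel (100, 100),
# with each pixel covering 0.05 m on both axes:
dx, dy = pixel_to_offset(400, 300, 0.05, 0.05, 100, 100)
```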
Further, the sequentially traversing the data tables in time order, starting the traversal from the (N+1)-th frame as the first frame when traversing to a data table Di, reading the vehicle information in the data table Di, and mapping the data of each vehicle in the digital twin city, includes:
traversing the data table Di of one time period from the (N+1)-th frame image, and reading the current traversal frame number;
when traversing to the starting frame number of an array, reading the vehicle information in the array, generating a corresponding simulated vehicle, and transmitting the array to the simulated vehicle;
determining the coordinate position at which the simulated vehicle is generated in the digital city scene according to the offset coordinate of the vehicle, and, when the coordinate position is outside the preset driving area, correcting the coordinate position of the simulated vehicle before generating it;
and when traversing to the ending frame number, destroying the simulated vehicle that disappears normally within the preset driving area.
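The traversal loop above can be sketched as follows; `spawn` and `destroy` stand in for the scene operations of the digital twin city, and all names are illustrative rather than taken from the patent:

```python
from types import SimpleNamespace

def traverse_table(table, n_overlap, spawn, destroy):
    """Traverse one data table starting at frame N+1 (frames 1..N were already
    covered by the previous period's table).  When a vehicle array's starting
    frame is reached it is spawned; when its ending frame is reached it is
    destroyed."""
    active = {}  # plate -> vehicle array currently simulated
    for frame in range(n_overlap + 1, table.total_frames + 1):
        for arr in table.arrays:
            if arr.start_frame == frame:
                spawn(arr)                      # generate the simulated vehicle
                active[arr.plate] = arr
            elif arr.end_frame == frame and arr.plate in active:
                destroy(active.pop(arr.plate))  # vehicle disappears normally

# One period with a single vehicle visible from frame 30 to frame 120:
car = SimpleNamespace(start_frame=30, end_frame=120, plate="川A12345")
table = SimpleNamespace(total_frames=750, arrays=[car])
events = []
traverse_table(table, 25,
               lambda a: events.append(("spawn", a.plate)),
               lambda a: events.append(("destroy", a.plate)))
```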
Further, when the coordinate position is outside the preset driving area, the simulated vehicle is generated after correcting the coordinate position of the simulated vehicle, including:
if the coordinate position in the current frame image is outside the preset driving area, recalculating the offset coordinate according to the driving speed set by the previous frame image, and continuously generating the simulated vehicle at the coordinate position corresponding to the offset coordinate.
Further, the destroying, when traversing to the termination frame number, of the simulated vehicle that disappears normally within the preset driving area includes:
when traversing to the termination frame number, judging whether the simulated vehicle disappears outside the preset driving area;
if yes, destroying the simulated vehicle;
if not, recalculating the offset coordinate from the running speed set in the previous frame image;
judging whether a new simulated vehicle, whose offset coordinate is within 1 m of that of the simulated vehicle, is generated in the current frame;
if so, assigning the data of the new simulated vehicle to the simulated vehicle, continuing to read the data in the array, continuing to generate the simulated vehicle according to the offset coordinates in the array, and destroying the new simulated vehicle;
and if not, continuing to generate the simulated vehicle at the offset coordinate.
Further, the traversing to the last frame of the data table Di, entering the next data table Di+1, starting the traversal from the (N+1)-th frame of the data table Di+1 as the first frame, continuously mapping the vehicles in the digital twin city, and generating a continuous real-time traffic simulation video, includes:
judging whether the last frame of the data table Di has been traversed;
if yes, jumping to the (N+1)-th frame of the data table Di+1 and continuing the traversal;
when traversing to the starting frame number of an array, reading the vehicle information in the array, and judging whether the license plate number of the vehicle is the same as that of any simulated vehicle from the data table Di;
if different, generating a corresponding simulated vehicle and transmitting the array to it;
if the same, assigning the array to that same simulated vehicle and continuing to generate the simulated vehicle in the digital twin city.
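The license-plate hand-off between two adjacent data tables can be sketched as follows; `carryover`, `spawn` and `handoff` are illustrative names for the set of still-live simulated vehicles and the scene operations:

```python
from types import SimpleNamespace

def continue_or_spawn(arr, carryover, spawn, handoff):
    """When an array's starting frame is reached in the next data table,
    check its license plate against the simulated vehicles still alive from
    the previous table: hand the new array to the existing vehicle if the
    plates match, otherwise spawn a new simulated vehicle."""
    if arr.plate in carryover:
        handoff(carryover[arr.plate], arr)  # same physical vehicle: keep it
    else:
        spawn(arr)                          # a genuinely new vehicle

alive = {"川A12345": "vehicle-from-previous-table"}
log = []
continue_or_spawn(SimpleNamespace(plate="川A12345"), alive,
                  lambda a: log.append("spawned"),
                  lambda v, a: log.append("handed-off"))
continue_or_spawn(SimpleNamespace(plate="川B00001"), alive,
                  lambda a: log.append("spawned"),
                  lambda v, a: log.append("handed-off"))
```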
A visual simulation system of real-time traffic data, the system comprising:
a modeling unit: building a digital twin city based on the real city;
a receiving unit: receiving real-time road monitoring video streams corresponding to the digital twin cities in real time, and dividing preset running areas in the digital twin cities according to the monitoring areas of the monitoring video streams;
video stream dividing unit: dividing the monitoring video stream chronologically into a plurality of videos of equal duration, with N frames of images overlapping between any two adjacent time periods;
a storage unit: the method comprises the steps of obtaining vehicle information in videos of each time period, storing the vehicle information of each vehicle into different arrays respectively, wherein at least a starting frame number and a stopping frame number of the vehicle are recorded in each array, and storing all the arrays of one time period into one data table to obtain a plurality of data tables;
a video generation unit: sequentially traversing the data tables in time order; when traversing to a data table Di, starting the traversal from the (N+1)-th frame as the first frame, reading the vehicle information in the data table Di, and mapping each vehicle in the digital twin city;
a video connection unit: when traversing to the last frame of the data table Di, entering the next data table Di+1, starting the traversal from its (N+1)-th frame as the first frame, continuing to map the vehicles in the digital twin city, and generating a continuous real-time traffic simulation video.
The invention has the beneficial effects that:
1. The method receives the road monitoring video stream in real time; whenever the duration of the received stream reaches a preset length, it is cut off and becomes one time period, i.e. the monitoring video stream received in real time is continuously divided into a plurality of time periods of the preset length. Meanwhile, as soon as the video of the first time period has been divided, the vehicle information in that video is extracted, the data of each vehicle is stored in its own array, and all arrays of the time period are stored in the UE4 in the form of a data table; once a data table is formed, it can be traversed, and the simulated vehicles are generated and mapped in the digital twin city. By the time the video of the first time period has been processed, the video of the second time period has also been divided; the second video, and the subsequent third, fourth and later videos, are processed in the same way until the transmission of the monitoring video stream stops.
2. According to the invention, the real-time monitoring video stream and the three-dimensional model of the monitored scene are accurately fused in real time, so that a plurality of monitoring video streams distributed at different positions and different angles can be brought into the full-space three-dimensional scene of the unified space reference, and the functions of checking, replaying, tracking a monitoring route, tracking a target and the like of the monitoring video stream at any position and at any angle can be realized. The method is applied to city management, can provide important basic and operation data for traffic planning, traffic management and road maintenance departments, and provides important technical means and evidences for quickly correcting traffic violation behaviors, quickly detecting traffic accident escape and motor vehicle robbery cases, such as: suspect vehicle deployment and control, hit-and-run vehicle tracking, incapability of identifying abnormal event accidents by human eyes, illegal vehicles and the like.
3. When the video is cut off, N frames of images overlap between the videos of two adjacent time periods; the inter-frame correlation between the preceding and following N frames and the current image is used to determine whether the same vehicle's running track continues, so that the vehicle can be tracked, its information is correctly recorded in an array, and errors in subsequent data retrieval, which would interrupt the visual simulation data and cause intermittent simulation, are prevented. The N overlapping frames between the videos of two adjacent time periods thus guarantee the quality of the simulated video.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic flow chart illustrating a method for visually simulating real-time traffic data according to an embodiment of the present invention;
FIG. 2 is a flow chart of a data table traversal method described in embodiments of the present invention;
FIG. 3 is a detailed flow chart of array traversal as described in the embodiments of the present invention;
fig. 4 is a detailed flow chart of the destruction vehicle in the embodiment of the invention;
FIG. 5 is a video frame marked with landmark points and reference points according to an embodiment of the present invention;
fig. 6 is a frame image on which a travel track is drawn according to the embodiment of the present invention;
fig. 7 is a schematic diagram of the visual expression effect of the surveillance video stream in the digital twin city scene according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers or letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example 1
As shown in fig. 1, the present embodiment provides a visual simulation method of real-time traffic data, which includes the following steps:
s1, building a digital twin city based on a real city;
specifically, a map of a city is obtained, and according to the map data and a real scale, a digital twin city is built in a development platform, where the development platform includes UE4, Unity, and OSG, and preferably, a UE4 development platform is adopted in this embodiment.
S2, receiving real-time road monitoring video streams corresponding to the digital twin city in real time, dividing a preset driving area in the digital twin city according to the monitoring area of the monitoring video streams, and automatically identifying whether a simulated vehicle is generated outside the preset driving area in the visualization process so as to correct error data;
based on the above embodiment, the S2 specifically includes:
s21, determining the roads that need to be monitored in the real city, and deploying monitoring equipment, together with real-time monitoring network storage and servers, above the roads, where the monitoring includes unmanned aerial vehicle (drone) aerial photography and road monitoring cameras; preferably, the cameras are arranged between the roads on the two sides, so that the vehicles on both the left and right roads are shot as clearly as possible, and the heads or tails of the vehicles must be captured so that the AI identification algorithm in the subsequent steps can identify the license plate numbers; the higher the definition, the more accurate the identification;
s22, receiving the monitoring video stream shot by the road in real time, specifically, receiving a transmission protocol adopted by the monitoring video stream in real time according to a client request, selecting a proper coding and decoding mode, and transmitting the real-time dynamic video stream of the camera to the client according to a frame request;
s23, positioning the digital twin city to a road in the monitoring video stream, and setting the road shot in the monitoring video stream picture as a preset driving area in the digital twin city, wherein the part of the road extending out of the monitoring video stream does not belong to the preset driving area.
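Deciding whether a simulated vehicle's position lies inside the preset driving area is a point-in-polygon test. A minimal ray-casting sketch; the polygon below is an illustrative rectangle, not data from the patent:

```python
def in_driving_area(x, y, polygon):
    """Ray-casting point-in-polygon test.  `polygon` is a list of (x, y)
    vertices of the preset driving area in scene coordinates; returns True
    when the point (x, y) falls inside it."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count crossings of a horizontal ray cast from (x, y) to the right.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

road = [(0, 0), (100, 0), (100, 10), (0, 10)]  # a 100 m x 10 m road strip
```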
S3, dividing the monitoring video stream chronologically into a plurality of videos of equal duration, with N frames of images overlapping between any two adjacent time intervals;
based on the above embodiment, the S3 specifically includes the following steps:
s31, setting the duration of each time interval as T and the number of overlapped frames as N;
s32, determining the start timestamp Ts and the termination timestamp Te of each time interval, where N frames of images overlap between the termination timestamp of the previous period and the start timestamp of the next period; it should be noted that the overlap here means that the same N frames are repeated in both periods;
specifically, for example: if the start timestamp of the first time period is Ts1, then its termination timestamp is Te1 = Ts1 + T; the duration of N frames of images is then calculated as Tn, so the start timestamp of the second period is Ts2 = Te1 − Tn and its termination timestamp is Te2 = Ts2 + T; and so on;
and S33, dividing the monitoring video stream into a plurality of videos of equal duration according to the start timestamp and the termination timestamp of each time interval.
In this embodiment, the time is counted from the time stamp of the received surveillance video stream, when the time length of the received video stream is T, the video in the first time period is divided, then the video in the first time period is processed by the subsequent steps S4-S6, at the same time, the video in the second time period is divided, after the video in the first time period is processed, the video in the second time period is processed, and so on, the video stream division process and the video processing process are performed synchronously, so as to improve the processing efficiency of the video.
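The period bookkeeping described above (each period of length T, overlapping its predecessor by N frames) can be sketched as follows; a frame rate `fps` is assumed, and all names are illustrative:

```python
def period_bounds(first_start, period_len, n_overlap, fps, count):
    """Start/end timestamps (seconds) of `count` consecutive periods of
    length `period_len`, each overlapping the previous period by
    `n_overlap` frames, i.e. by Tn = n_overlap / fps seconds."""
    t_n = n_overlap / fps
    bounds = []
    start = first_start
    for _ in range(count):
        end = start + period_len
        bounds.append((start, end))
        start = end - t_n  # next period starts Tn before this one ends
    return bounds

# T = 60 s periods, N = 25 overlapping frames at 25 fps (1 s of overlap):
b = period_bounds(0.0, 60.0, 25, 25.0, 3)
```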
S4, obtaining vehicle information in the video of each time period, respectively storing the vehicle information of each vehicle into different arrays, wherein at least the initial frame number and the final frame number of the vehicle are recorded in each array, and all the arrays of one time period are stored into one data table to obtain a plurality of data tables;
based on the above embodiment, the S4 specifically includes the following steps:
s41, segmenting a video in a period of time into a plurality of frame images by utilizing an AI (Artificial intelligence) recognition algorithm, and recognizing vehicle information in each frame image, wherein the vehicle information at least comprises the type, color and license plate number of a vehicle, pixel coordinates of the vehicle in the current frame image and actual offset coordinates;
specifically, the S41 includes the following steps:
s411, inputting a video in a time interval into an AI identification algorithm, and automatically segmenting the monitoring video stream into single-frame images by the AI identification algorithm and storing the single-frame images according to a time sequence; specifically, in the obtained single-frame image, each frame of image can be continuously obtained, or a plurality of frames of images can be obtained at intervals, and then the obtained frames of images are stored for the next identification operation;
s412, identifying all vehicles in each frame of image, and the types and colors of the vehicles, and identifying the identified vehicles;
specifically, a YOLO v3 algorithm is trained with an optimized CompCars data set, which is currently the largest and most richly categorized public data set for evaluating fine-grained vehicle recognition; the data set collects vehicle images from the web and from monitoring equipment, among which there are 136726 web images covering 1716 vehicle models of 163 automobile manufacturers, and 44481 monitoring images of vehicle fronts covering 281 vehicle models. The trained YOLO v3 algorithm performs full-frame detection on the frame images, frames all recognized vehicles with rectangular boxes, and recognizes the type and color of each vehicle; the vehicle types are user-defined, and in this embodiment the recognized types include: off-road vehicle, car, minibus, truck, bus and motorcycle; the kinds of vehicles recognized can be increased according to actual need.
S413, identifying the license plate number of the vehicle;
specifically, a sample set is made, a pre-trained target detection network based on a YOLO v3 algorithm is adopted based on the fusion characteristics of video images, after offline learning is carried out on the sample set, the detection of the license plate number of a vehicle in a frame image is realized, and the vehicle and the corresponding license plate number are subjected to associated marking;
s414, determining pixel coordinates of the vehicle in each frame of image, and calculating actual offset coordinates according to the pixel coordinates;
specifically, the step S414 includes the following steps:
s4141, taking the upper left corner of the video picture as the coordinate origin O (0, 0);
s4142, selecting at least two feature points in the video picture as a mark point A and a reference point B, and calculating the actual distance of a unit pixel using the mark point A and the reference point B;
specifically, since the shooting angle of view of the surveillance video stream is fixed, the video picture of every frame image is the same. It is only necessary to intercept any one frame of video image from the monitoring video stream and search for feature points whose positions are easy to determine, such as: a crossing point of two lane boundaries, a midpoint or end point of a lane boundary, a connection point of a road sign and the ground, or a connection point of a street lamp and the ground; at least one feature point is used as the mark point A, and the same position point is marked in the scene of the digital twin city, establishing a mapping relation between the video picture and the scene; the actual distance of a unit pixel is then calculated using the reference point and the mark point;
Specifically, the actual distance of a unit pixel on the x-axis is calculated as:

d_x = (S_x / p_x) · k    (1)

wherein d_x is the actual length of a unit pixel in the x-axis direction at any pixel point P, S_x is the actual distance between M and R along the x-axis, p_x is the pixel distance between M and R in the x-axis direction, and k is an adjustment parameter. Because the video is captured at an oblique angle, the actual distance covered by a unit pixel near the lens differs from that covered by a unit pixel far from the lens; the adjustment parameter k compensates for this difference;
Specifically, the actual distance S_x (and likewise S_y) between the mark point and the reference point can be obtained from the longitude and latitude of the two points; the calculation process is as follows:

a = sin²((φ₂ − φ₁)/2) + cos φ₁ · cos φ₂ · sin²(Δλ/2)    (2)

c = 2 · arcsin(√a)    (3)

S = R · c    (4)

Combining formulae (2) to (4):

S = 2R · arcsin√( sin²((φ₂ − φ₁)/2) + cos φ₁ · cos φ₂ · sin²(Δλ/2) )    (5)

wherein R is the radius of the earth, φ₁ and φ₂ are respectively the latitudes of the two points, and Δλ is the difference in longitude between the two points.
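The great-circle distance of formulae (2) to (5) can be sketched in Python as follows; this is a minimal haversine implementation, with variable names chosen for illustration rather than taken from the patent:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean earth radius R, in meters

def haversine_m(lat1_deg, lon1_deg, lat2_deg, lon2_deg):
    """Actual distance between two lat/lon points, per formulae (2)-(5)."""
    phi1, phi2 = math.radians(lat1_deg), math.radians(lat2_deg)
    dlam = math.radians(lon2_deg - lon1_deg)
    a = math.sin((phi2 - phi1) / 2) ** 2 \
        + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2   # (2)
    c = 2 * math.asin(math.sqrt(a))                                   # (3)
    return EARTH_RADIUS_M * c                                         # (4)

# One degree of longitude on the equator is roughly 111.19 km:
d = haversine_m(0.0, 0.0, 0.0, 1.0)
```

In practice the two inputs would be the surveyed longitude/latitude of the mark point and reference point, giving the real distances S_x and S_y used in formulae (1) and (7).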
In particular, the adjustment parameter k is calculated as:

k = H / y_P    (6)

wherein H is the total pixel length of the picture on the y-axis and y_P is the pixel coordinate of the pixel point P on the y-axis;
Similarly, the actual distance of a unit pixel on the y-axis is calculated as:

d_y = (S_y / p_y) · k    (7)

wherein d_y is the actual length of a unit pixel in the y-axis direction at any pixel point P, S_y is the actual distance between M and R along the y-axis, p_y is the pixel distance between M and R in the y-axis direction, and k is the adjustment parameter;
Referring to fig. 5, M and R are respectively the mark point and the reference point selected in this embodiment; their pixel coordinates relative to the coordinate origin O are M(1045, 475) and R(26, 653).
S4143, determining the first pixel coordinate (x₁, y₁) of the vehicle relative to the coordinate origin in the current frame image;
S4144, calculating the offset coordinate (X_off, Y_off) of the vehicle, i.e., the actual distance of the vehicle relative to the mark point M, using the first pixel coordinate and the actual distance of the unit pixel:

(X_off, Y_off) = ( (x₁ − x_M) · d_x , (y₁ − y_M) · d_y )    (8)

wherein (x_M, y_M) is the pixel coordinate of the mark point M, and the adjustment parameter k is contained in d_x and d_y as given by formulae (1), (6) and (7).
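Formulae (1) and (6)–(8) can be combined into a single pixel-to-meters conversion. The sketch below assumes the reconstruction k = H / y_P, consistent with the variables named for formula (6); the mark-point coordinates, measured real distances S_x, S_y and pixel distances p_x, p_y are calibration inputs, and the numeric values used are illustrative, not from the patent:

```python
def unit_pixel_length(S, p, k):
    """Actual length of one pixel, formulae (1)/(7): d = (S / p) * k."""
    return S / p * k

def adjustment(H, y_p):
    """Adjustment parameter k, formula (6): k = H / y_P (assumed form)."""
    return H / y_p

def offset_coords(x1, y1, mark_xy, S_x, p_x, S_y, p_y, H):
    """Offset coordinate of a vehicle relative to the mark point M, formula (8)."""
    x_m, y_m = mark_xy
    k = adjustment(H, y1)                # depends on how far P is from the lens
    dx = unit_pixel_length(S_x, p_x, k)  # meters per pixel along x at P
    dy = unit_pixel_length(S_y, p_y, k)  # meters per pixel along y at P
    return (x1 - x_m) * dx, (y1 - y_m) * dy

# Example with the embodiment's mark point M(1045, 475); other numbers illustrative:
X_off, Y_off = offset_coords(1145, 575, (1045, 475),
                             S_x=40.0, p_x=1000, S_y=25.0, p_y=500, H=1150)
```

The same function is applied to every detected vehicle in every stored frame, yielding the X/Y offset series stored in the vehicle's array.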
S42, calculating the running speed of the vehicle in the current frame image by using the front and back N frame images of the current frame image, and tracking the running track of the vehicle;
specifically, the S42 includes the following steps:
S421, respectively obtaining the offset coordinates of the vehicle in the frame images before and after the current frame;
In this embodiment, the offset coordinate of the 5th frame before (F−5) and the offset coordinate of the 5th frame after (F+5) the current frame F are obtained;
S422, calculating the driving speed of the vehicle in the current frame image with the displacement-velocity formula:

v = x / t

wherein v represents the driving speed of the vehicle in the current frame image, x represents the actual distance between the offset coordinate of the 5th frame before (F−5) and the offset coordinate of the 5th frame after (F+5), and t represents the time interval between those two frame images;
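The computation of S421–S422 amounts to dividing the real-world distance between the F−5 and F+5 offset coordinates by the time those 10 frames span. A minimal sketch, assuming the embodiment's 30 frames/s:

```python
import math

FPS = 30  # frame rate of the surveillance video stream (from the embodiment)

def speed_m_per_s(offset_before, offset_after, frame_gap=10):
    """v = x / t for offset coordinates of frames F-5 and F+5, 10 frames apart."""
    x = math.dist(offset_before, offset_after)  # real distance between the two offsets
    t = frame_gap / FPS                         # time interval between the two frames
    return x / t

v = speed_m_per_s((0.0, 0.0), (3.0, 4.0))  # 5 m traveled over 1/3 s
```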
S423, tracking the driving track of the vehicle with the STRCF algorithm over the preceding consecutive frame images, and drawing the driving track of each vehicle in the frame images. (It should be noted that if frames are stored at an interval of 5, the preceding consecutive N/5 stored frames are used for tracking; the same applies to the N frames before and after.) In this embodiment, the preceding 5 consecutive frame images are used. Referring to fig. 6, not only the driving track of each vehicle is drawn, but also the driving speed and the offset coordinates of the vehicle;
Specifically, the STRCF algorithm solves:

argmin_f  ½‖ Σ_{e=1..E} x_tᵉ ∗ fᵉ − y_t ‖² + ½ Σ_{e=1..E} ‖ w · fᵉ ‖² + (μ/2)‖ f − f_{t−1} ‖²    (9)

wherein E is the total number of feature maps (the feature maps of the surveillance video stream fuse convolutional neural features and HOG features); t = 1, …, T, with T the total number of frames; x_t is the t-th frame image and x_tᵉ is the e-th feature map of the t-th frame image; fᵉ is the learned correlation filter for the e-th feature map; y_t is the label of the t-th frame image; w is the spatial regularization weight function; f is the correlation filter learned at the t-th frame and f_{t−1} is the correlation filter learned at the (t−1)-th frame; μ denotes the temporal regularization factor; the operator · denotes the Hadamard product, ∗ denotes the convolution operation, and ‖·‖ denotes the modulus of a vector.
S43, storing the vehicle information of one vehicle into one array, each array containing at least 8 groups of data: the starting frame number when the vehicle appears, the ending frame number when the vehicle disappears, the vehicle type, the vehicle color, the license plate number, the X-axis offset coordinate (X_off), the Y-axis offset coordinate (Y_off), and the driving speed; wherein the X-axis offset coordinate, the Y-axis offset coordinate and the driving speed each contain the data of all frame images between the starting frame number and the ending frame number;
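The per-vehicle array of S43 can be modeled as a small record type; the field names below are illustrative, not from the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VehicleRecord:
    """One vehicle's array: the 8 groups of data described in S43."""
    start_frame: int          # frame number where the vehicle appears
    end_frame: int            # frame number where the vehicle disappears
    vehicle_type: str         # e.g. "car", "bus", "motorcycle"
    color: str
    plate_number: str
    x_offsets: List[float] = field(default_factory=list)  # one value per frame
    y_offsets: List[float] = field(default_factory=list)  # one value per frame
    speeds: List[float] = field(default_factory=list)     # one value per frame

    def frame_count(self) -> int:
        """Number of frames the vehicle is present in this period."""
        return self.end_frame - self.start_frame + 1

rec = VehicleRecord(10, 12, "car", "white", "ABC123",
                    [0.0, 0.5, 1.0], [0.0, 0.0, 0.0], [7.5, 7.5, 7.5])
```

A data table is then simply a list of such records plus the recorded total frame count, which is what the csv file of S44 serializes.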
S44, obtaining several arrays, storing the several arrays into one data table, and recording the total frame number of the data table. After the surveillance video stream is processed by the AI recognition algorithm, a csv file storing the vehicle information is obtained, and the csv file is imported into UE4 in the form of a data table;
S45, repeating the above steps and storing the vehicle information of each period into a data table to obtain several data tables D. The data tables are generated continuously: as soon as the first data table is generated, step S5 is started and its data are read to perform visual simulation in the digital twin city. Therefore, the time difference between the generated simulated video and the video of the real road is the processing time of one period's video plus T. Using step S4 of this embodiment, the rendering speed can be increased to 0.2 s per frame.
A specific embodiment is as follows: setting the duration of one period's video to T = 2 s and the frame rate of the surveillance video stream to 30 frames/s, one video contains 60 frames of images; storing one frame out of every 5 leaves 12 frames of images; the time required to process one video is calculated to be 2.4 s, and the time difference between the simulated video and the video of the real road is 4.4 s.
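The arithmetic of this embodiment can be checked directly: with T = 2 s at 30 frames/s there are 60 frames per period, storing every 5th frame keeps 12 of them, and the end-to-end lag is the processing time plus T:

```python
T = 2.0          # seconds per period
FPS = 30         # frames per second of the surveillance stream
STORE_EVERY = 5  # keep one frame out of every 5
PROCESS_S = 2.4  # time to process one period's video (from the embodiment)

frames_per_period = int(T * FPS)                  # 60 frames in one period's video
stored_frames = frames_per_period // STORE_EVERY  # 12 stored frames
lag_s = PROCESS_S + T                             # simulated video trails reality by 4.4 s
```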
S5, traversing the data tables in sequence according to the time order; when traversing data table D_i, starting the traversal with the (N+1)-th frame as the first frame, reading the vehicle information in D_i, and mapping each vehicle into the digital twin city;
referring to fig. 2, based on the above embodiment, the step S5 specifically includes the following steps:
S51, traversing the data table D_i of one period starting from the (N+1)-th frame image, and reading the current traversal frame number;
As stated in step S42, the driving speed of the vehicle in the current frame image is calculated using the N frame images before and after it, and the driving track is tracked using the preceding consecutive N frames, so any two adjacent videos need to overlap by N frame images. During visualization, to avoid generating overlapping videos, the data in one data table are traversed with the (N+1)-th frame image as the first frame (if the images are stored 5 frames apart, the traversal should start with the (N/5+1)-th stored frame image as the first frame).
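The no-overlap rule above (skip the first N frames of every table because they duplicate the tail of the previous period) can be sketched as a generator; a data table is represented here as a simple list of per-frame payloads:

```python
def traverse_tables(tables, n_overlap):
    """Yield (table_index, frame_number, payload), starting each table
    at the (N+1)-th frame so the N overlapping frames are never replayed."""
    for ti, table in enumerate(tables):
        for frame_no in range(n_overlap, len(table)):  # 0-based index N == (N+1)-th frame
            yield ti, frame_no, table[frame_no]

# Two 6-frame tables overlapping by 2 frames ("ef" appears in both):
tables = [list("abcdef"), list("efghij")]
frames = [p for _, _, p in traverse_tables(tables, 2)]  # no frame visited twice
```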
S52, when traversing to the initial frame number of an array, reading the vehicle information in the array, generating a corresponding simulated vehicle, and transmitting the array to the simulated vehicle; preferably, the method further comprises creating a vehicle identification control object for reading the associated vehicle information in the data table;
Specifically, a SetTimerByEvent node in UE4 is used to cycle through a data table, recording at each step the number of the current cycle, i.e., the number of frames traversed. When the traversed frame number equals the starting frame number of some array, the vehicle associated with that starting frame number begins to be processed: a corresponding simulated vehicle is generated for it, which calls the matching model and color from a preset library according to the model and color in the vehicle information. The preset library prestores models of an off-road vehicle, a car, a minibus, a truck, a bus and a motorcycle, together with a plurality of colors, so that the simulated vehicle generated from the data in the array resembles the real vehicle.
Specifically, after the simulated vehicle is generated, it receives the array data of the relevant vehicle; the array contains all the data required for the vehicle's whole movement process after generation.
S53, determining the coordinate position of the simulated vehicle generated in the digital city scene according to the offset coordinate of the vehicle, specifically, when the virtual vehicle (A)After the vehicle A) is generated, a method for executing a timing cycle is also stated in the array, data in the array is traversed, namely, the data is executed once every other frame time, and X-axis offset coordinates of the vehicle A in the current frame image are read (the vehicle A is executed in a mode of reading the X-axis offset coordinates of the vehicle A in the current frame image: (the vehicle A is executed in a mode of reading the X-
Figure 984922DEST_PATH_IMAGE056
) Y-axis offset coordinates of (A), (B), (C
Figure 191913DEST_PATH_IMAGE057
) Speed of travel, location to the offset coordinate in the scene of a digital twin city: (
Figure 774204DEST_PATH_IMAGE058
) With said offset coordinates (
Figure 292910DEST_PATH_IMAGE058
) Generating a simulated vehicle for the central point, namely generating the simulated vehicle to the corresponding position in the digital twin city scene at what position the current frame A vehicle is in the video image, and simultaneously generating the driving speed of the current frame
Figure 251638DEST_PATH_IMAGE059
Marked above the simulated vehicle. The current frame number is recorded during each execution, and the traversal is finished until the end frame number is reached. It should be noted that, a given license plate number is generated while a simulated vehicle is generated, and the license plate number moves along with the driving track of the vehicle, but the license plate number is not read in the subsequent array traversing process.
When the coordinate position is outside the preset driving area, correcting the coordinate position of the simulated vehicle to generate the simulated vehicle;
Specifically, the data obtained by the AI recognition algorithm may contain unreasonable driving trajectories, for example a trajectory that runs onto a lawn or collides with an obstacle, so the data need to be rationalized. When the vehicle is about to drive out of the preset driving area or collide with an obstacle, reading of the real-time data is cut off immediately and the simulated vehicle is moved using simulated data, i.e., driven toward a set lane.
Referring to fig. 3, based on the above embodiment, the step S53 specifically includes the following steps:
judging whether the offset coordinate of the current simulated vehicle is within the preset driving area;
if yes, generating the simulated vehicle according to the read data: specifically, locating in the digital twin city the scene corresponding to the surveillance video stream, finding at that scene the point corresponding to the mark point M of the video picture, and calculating the offset coordinate with the mark point M as the origin to generate the simulated vehicle;
if not, the offset coordinate needs to be corrected: the offset coordinate is recalculated according to the driving speed established in the previous frame image, and the simulated vehicle continues to be generated.
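The correction branch of S53 can be sketched as follows: when a read offset falls outside the preset driving area, the position is extrapolated from the previous frame at the previously established speed instead. The rectangular area test and the time step are illustrative assumptions, not from the patent:

```python
FRAME_DT = 1.0 / 30  # time between traversed frames, assuming 30 fps

def in_driving_area(pos, area):
    """area = (x_min, y_min, x_max, y_max): the preset driving area."""
    x, y = pos
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

def next_position(read_pos, prev_pos, prev_velocity, area):
    """Use the read offset if it lies inside the area; otherwise
    extrapolate from the previous position at the previous speed."""
    if in_driving_area(read_pos, area):
        return read_pos
    vx, vy = prev_velocity
    return (prev_pos[0] + vx * FRAME_DT, prev_pos[1] + vy * FRAME_DT)

area = (0.0, 0.0, 100.0, 50.0)
# Recognized offset (120, 10) is outside the area, so the position is corrected:
pos = next_position((120.0, 10.0), (99.0, 10.0), (30.0, 0.0), area)
```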
Preferably, the license plate number and the running speed are displayed above the generated simulated vehicle right above the simulated vehicle, please refer to fig. 7;
S54, when traversing to the ending frame number, destroying the simulated vehicles that disappear normally within the preset driving area. Specifically, because an AI recognition algorithm is used to recognize vehicles, recognition may be discontinuous (i.e., a vehicle is lost during recognition and then recognized again later), so the same vehicle may produce several arrays; and since one array generates one simulated vehicle, several simulated vehicles would otherwise be generated for the same real vehicle.
Referring to fig. 4, based on the above embodiment, the S54 specifically includes the following steps:
S541, in the ending frame image, judging whether the simulated vehicle disappears outside the preset driving area, i.e., whether the offset coordinate of the simulated vehicle in the ending frame image is outside the driving area;
if yes, destroying the simulated vehicle;
if not, recalculating the offset coordinate at the driving speed established in the previous frame image;
S542, judging whether a new simulated vehicle is generated in the current frame whose distance from the offset coordinate of the simulated vehicle is less than or equal to 1 m;
if so, giving the data of the new simulated vehicle to the simulated vehicle, continuously reading the data in the array, continuously generating the simulated vehicle according to the offset coordinates in the array, and destroying the new simulated vehicle;
if not, continuing to generate the simulated vehicle at the offset coordinate;
the above steps S541 to S542 are repeated.
Specifically, the method comprises the following steps: when the running track of one vehicle (the B vehicle) is interrupted, the B vehicle continues to move at the last running speed, a simulated vehicle is newly generated in a certain frame of image later, and the position of the new simulated vehicle is very close to the position of the B vehicle, the B vehicle is identified again in the following process, so that the new simulated vehicle is deleted after the array data of the new simulated vehicle is given to the B vehicle, and the B vehicle continues to move along the running track in the newly obtained data.
S6, when traversing to the last frame of data table D_i, entering the next data table D_{i+1}, starting the traversal with the (N+1)-th frame of D_{i+1} as the first frame, continuing to map the vehicles in the digital twin city, and generating a continuous real-time traffic simulation video;
based on the above embodiment, the S6 specifically includes the following steps:
when traversing to the starting frame number of an array in data table D_{i+1}, reading the vehicle information in the array, and judging whether the license plate number of the vehicle is the same as that of a simulated vehicle generated from data table D_i;
if different, generating a corresponding simulated vehicle and transmitting the array to the simulated vehicle;
if the same, assigning the array to the simulated vehicle with the same license plate number, which continues to be generated in the digital twin city.
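The license-plate handoff between consecutive data tables in S6 can be sketched with a dictionary of live simulated vehicles keyed by plate number; the structure is illustrative:

```python
def start_table(live_vehicles, table_arrays):
    """live_vehicles: {plate: vehicle_state} carried over from table D_i.
    For each array in D_{i+1}: reuse the existing simulated vehicle if the
    plate is already live, otherwise spawn a new one."""
    spawned, reused = [], []
    for arr in table_arrays:
        plate = arr["plate"]
        if plate in live_vehicles:
            live_vehicles[plate]["array"] = arr    # hand the new array over
            reused.append(plate)
        else:
            live_vehicles[plate] = {"array": arr}  # new simulated vehicle
            spawned.append(plate)
    return spawned, reused

# Vehicle AB123 already exists from the previous table; CD456 is new:
live = {"AB123": {"array": None}}
spawned, reused = start_table(live, [{"plate": "AB123"}, {"plate": "CD456"}])
```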
Through step S6, any two consecutive data tables can generate a continuous and smooth video in the digital twin city. While the client continuously receives the surveillance video stream, it continuously divides the stream, extracts vehicle information into data tables, and at the same time traverses the data in the data tables and generates virtual vehicles in the digital twin city, thereby realizing real-time simulation of the surveillance video stream.
Because the data volume is very large, after the vehicle data in the surveillance video stream are extracted, the surveillance video stream needs to be deleted to reduce the storage pressure on the client.
Example 2
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides a system for visualizing and simulating real-time traffic data; the system for visualizing and simulating real-time traffic data described below and the method for visualizing and simulating real-time traffic data described above may be referred to correspondingly.
The system comprises the following modules:
a modeling unit: building a digital twin city based on the real city;
a receiving unit: receiving real-time road monitoring video streams corresponding to the digital twin cities in real time, and dividing preset running areas in the digital twin cities according to the monitoring areas of the monitoring video streams;
video stream dividing unit: dividing the monitoring video stream into a plurality of videos in the same time period according to the time sequence, and overlapping N frames of images between any two adjacent time periods;
a storage unit: the method comprises the steps of obtaining vehicle information in videos of each time period, storing the vehicle information of each vehicle into different arrays respectively, wherein at least a starting frame number and a stopping frame number of the vehicle are recorded in each array, and storing all the arrays of one time period into one data table to obtain a plurality of data tables;
a video generation unit: traversing the data tables in sequence according to the time order; when traversing data table D_i, starting the traversal with the (N+1)-th frame as the first frame, reading the vehicle information in D_i, and mapping each vehicle into the digital twin city;
a video connection unit: when traversing to the last frame of data table D_i, entering the next data table D_{i+1}, starting the traversal with the (N+1)-th frame of D_{i+1} as the first frame, continuing to map the vehicles in the digital twin city, and generating a continuous real-time traffic simulation video.
It should be noted that, regarding the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated herein.
Example 3
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides a visual simulation device of real-time traffic data; the visual simulation device of real-time traffic data described below and the visual simulation method of real-time traffic data described above may be referred to correspondingly.
The electronic device may include: a processor, a memory. The electronic device may also include one or more of a multimedia component, an input/output (I/O) interface, and a communication component.
The processor is used for controlling the overall operation of the electronic device to complete all or part of the steps of the above visual simulation method of real-time traffic data. The memory is used to store various types of data to support operation on the electronic device; such data may include, for example, instructions for any application or method operating on the electronic device, as well as application-related data such as contact data, messages, pictures, audio and video. The memory may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disk. The multimedia components may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signal may further be stored in the memory or transmitted through the communication component. The audio component also includes at least one speaker for outputting audio signals. The I/O interface provides an interface between the processor and other interface modules, such as a keyboard, a mouse or buttons, where the buttons may be virtual or physical. The communication component is used for wired or wireless communication between the electronic device and other devices; the wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G or 4G, or a combination of one or more of them, so the corresponding communication component may include a Wi-Fi module, a Bluetooth module and an NFC module.
In an exemplary embodiment, the electronic Device may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-mentioned visual simulation method of real-time traffic data.
In another exemplary embodiment, a computer-readable storage medium is also provided comprising program instructions which, when executed by a processor, implement the steps of the above-described method for visual simulation of real-time traffic data. For example, the computer readable storage medium may be the above-mentioned memory including program instructions executable by a processor of an electronic device to perform the above-mentioned visual simulation method of real-time traffic data.
Example 4
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides a readable storage medium, and a readable storage medium described below and a visualization simulation method of real-time traffic data described above may be correspondingly referred to each other.
A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for visual simulation of real-time traffic data of the above-mentioned method embodiments.
The readable storage medium may be a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and various other readable storage media capable of storing program codes.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

Translated fromChinese
1.一种实时交通数据的可视化仿真方法,其特征在于,包括:1. a visual simulation method of real-time traffic data, is characterized in that, comprises:基于真实城市,搭建数字孪生城市;Build digital twin cities based on real cities;实时接收所述数字孪生城市对应的实时道路监控视频流,根据监控视频流的监控区域在数字孪生城市中划分预设行驶区域;Receive the real-time road monitoring video stream corresponding to the digital twin city in real time, and divide the preset driving area in the digital twin city according to the monitoring area of the monitoring video stream;将所述监控视频流按时间顺序划分为若干个相同时段的视频,任一两个相邻的时段之间重叠N帧图像;dividing the monitoring video stream into several videos of the same time period in time sequence, and overlapping N frames of images between any two adjacent time periods;获取每个时段视频中的车辆信息,将每个车辆的车辆信息分别存入不同的数组中,每个数组中至少记录有车辆的起始帧数和终止帧数,一个时段的所有数组存入一个数据表中,得到若干个数据表;Obtain the vehicle information in the video of each period, and store the vehicle information of each vehicle in different arrays. Each array records at least the number of starting frames and ending frames of the vehicle, and all arrays of a period are stored in In one data table, several data tables are obtained;按时间顺序依次遍历数据表,当遍历至数据表
Figure 164523DEST_PATH_IMAGE001
时,以第N+1帧作为第一帧开始遍历,读取数据表
Figure 663638DEST_PATH_IMAGE001
中的车辆信息,将每个车辆映射在数字孪生城市中;Traverse the data table in chronological order, when traversing to the data table
Figure 164523DEST_PATH_IMAGE001
, start the traversal with the N+1th frame as the first frame, and read the data table
Figure 663638DEST_PATH_IMAGE001
The vehicle information in the digital twin city is mapped to each vehicle;当遍历至数据表
Figure 25349DEST_PATH_IMAGE001
的最后一帧时,进入下一个数据表
Figure 392745DEST_PATH_IMAGE002
,以数据表
Figure 789092DEST_PATH_IMAGE002
的第N+1帧作为第一帧开始遍历,继续将车辆映射在数字孪生城市中,生成连续的实时交通模拟视频。
When traversing to the data table
Figure 25349DEST_PATH_IMAGE001
of the last frame, go to the next data table
Figure 392745DEST_PATH_IMAGE002
, with the data table
Figure 789092DEST_PATH_IMAGE002
The N+1th frame of 1 starts to traverse as the first frame, and continues to map the vehicle in the digital twin city to generate a continuous real-time traffic simulation video.
2.根据权利要求1所述的实时交通数据的可视化仿真方法,其特征在于,所述实时接收所述数字孪生城市对应的实时道路监控视频流,根据监控视频流的监控区域在数字孪生城市中划分预设行驶区域,包括:2. the visualization simulation method of real-time traffic data according to claim 1, is characterized in that, described in real-time receiving the corresponding real-time road monitoring video stream of described digital twin city, according to the monitoring area of monitoring video stream in digital twin city Divide preset driving areas, including:确定真实城市中需要监控的道路;Identify roads that need to be monitored in real cities;接收所述道路实时拍摄的监控视频流;receiving a surveillance video stream captured in real time by the road;将数字孪生城市定位至监控视频流中的道路,并在数字孪生城市中将监控视频流画面中拍摄到的道路设置为预设行驶区域。The digital twin city is positioned to the road in the surveillance video stream, and the road captured in the surveillance video stream is set as the preset driving area in the digital twin city.3.根据权利要求1所述的实时交通数据的可视化仿真方法,其特征在于,所述将所述监控视频流按时间顺序划分为若干个相同时段的视频,任一两个相邻的时段之间重叠N帧图像,包括:3. The visual simulation method of real-time traffic data according to claim 1, is characterized in that, the described monitoring video stream is divided into several videos of the same time period in chronological order, and any two adjacent time periods are the same. Overlap N frames of images, including:设定每个时段的时长为T,和重叠帧数为N;Set the duration of each period as T, and the number of overlapping frames as N;确定每个时段的起始时间戳
Figure 193528DEST_PATH_IMAGE003
和终止时间戳
Figure 917902DEST_PATH_IMAGE004
,上一时段的终止时间戳与下一时段的起始时间戳之间重叠有N帧图像;
Determine the start timestamp of each period
Figure 193528DEST_PATH_IMAGE003
and end timestamp
Figure 917902DEST_PATH_IMAGE004
, N frames of images are overlapped between the end timestamp of the previous period and the start timestamp of the next period;
根据所述每个时段的起始时间戳
Figure 698776DEST_PATH_IMAGE003
和终止时间戳
Figure 838377DEST_PATH_IMAGE004
将监控视频流划分成若干个相同时段的视频。
according to the starting timestamp of each period
Figure 698776DEST_PATH_IMAGE003
and end timestamp
Figure 838377DEST_PATH_IMAGE004
Divide the surveillance video stream into several videos of the same time period.
4. The visual simulation method for real-time traffic data according to claim 1, wherein acquiring the vehicle information in the video of each time period, storing the vehicle information of each vehicle in separate arrays, each array recording at least the starting frame number and the ending frame number of the vehicle, and storing all arrays of one time period in one data table, comprises:
splitting the video of one time period into a number of frame images using an AI recognition algorithm, and identifying vehicle information in each frame image, the vehicle information including at least the type, color and license plate number of the vehicle, the pixel coordinates of the vehicle in the current frame image, and the actual offset coordinates;
calculating the driving speed of the vehicle in the current frame image from the N-th frame image before and the N-th frame image after the current frame image, and tracking the driving trajectory of the vehicle;
storing the vehicle information of one vehicle in one array, each array containing at least eight groups of data: the starting frame number at which the vehicle appears, the ending frame number at which it disappears, the vehicle type, the vehicle color, the license plate number, the X-axis offset coordinates, the Y-axis offset coordinates, and the driving speed, wherein the X-axis offset coordinates, the Y-axis offset coordinates and the driving speed each cover all frame images from the starting frame number to the ending frame number;
obtaining a number of arrays in this way, storing them in one data table, and recording the total number of frames of the data table.
5. The visual simulation method for real-time traffic data according to claim 4, wherein the pixel coordinates of the vehicle in the current frame image and the actual offset coordinates are calculated as follows:
taking the upper-left corner of the video frame as the coordinate origin;
selecting at least two feature points in the video frame as a reference point and a marker point, and using them to calculate the actual distance represented by one pixel;
determining the first pixel coordinates of the vehicle in the current frame image relative to the coordinate origin;
calculating the offset coordinates of the vehicle, i.e. the actual distance of the vehicle relative to the marker point, from the first pixel coordinates and the actual distance per pixel.
6. The visual simulation method for real-time traffic data according to claim 1, wherein traversing the data tables in chronological order and, upon reaching a data table, starting the traversal with the (N+1)-th frame as the first frame, reading the vehicle information in that data table, and mapping the data of each vehicle into the digital twin city, comprises:
traversing the data table of one time period starting from the (N+1)-th frame image, and reading the frame number currently being traversed;
when the traversal reaches the starting frame number of an array, reading the vehicle information in the array, generating a corresponding simulated vehicle, and passing the array to the simulated vehicle;
determining, from the offset coordinates in the array, the coordinate position at which the simulated vehicle is generated in the digital city scene, and, when that coordinate position lies outside the preset driving area, correcting the coordinate position of the simulated vehicle before generating it;
when the traversal reaches the ending frame number, destroying the simulated vehicle if it has disappeared normally from the preset driving area.
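The calibration step of claim 5 reduces to one scale factor and a subtraction. The sketch below is an illustrative reconstruction, not the patented code; the point coordinates and the 50 m ground-truth distance are invented example values.

```python
def meters_per_pixel(ref_px, mark_px, real_dist_m):
    """Scale factor (meters represented by one pixel), derived from two
    feature points in the frame whose real-world separation is known."""
    dx = mark_px[0] - ref_px[0]
    dy = mark_px[1] - ref_px[1]
    return real_dist_m / ((dx * dx + dy * dy) ** 0.5)

def offset_coords(vehicle_px, mark_px, scale):
    """Claim 5: offset coordinates = the vehicle's real-world distance from
    the marker point, computed from its pixel coordinates (pixel origin at
    the top-left corner of the video frame)."""
    return ((vehicle_px[0] - mark_px[0]) * scale,
            (vehicle_px[1] - mark_px[1]) * scale)

# Assumed example: two landmarks 500 px apart in the image, 50 m apart on the road.
scale = meters_per_pixel((100, 200), (400, 600), real_dist_m=50.0)
x_off, y_off = offset_coords((250, 400), (400, 600), scale)  # roughly (-15 m, -20 m)
```

One scale factor assumes the camera views the road roughly fronto-parallel; a real deployment with perspective distortion would need a homography rather than a single ratio.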
7. The visual simulation method for real-time traffic data according to claim 6, wherein correcting the coordinate position of the simulated vehicle before generating it, when the coordinate position lies outside the preset driving area, comprises:
if the coordinate position in the current frame image lies outside the preset driving area, recalculating the offset coordinates according to the driving speed established in the previous frame image, and continuing to generate the simulated vehicle at the coordinate position corresponding to the recalculated offset coordinates.
8. The visual simulation method for real-time traffic data according to claim 6, wherein destroying, upon reaching the ending frame number, the simulated vehicle that has disappeared normally from the preset driving area comprises:
when the traversal reaches the ending frame number, judging whether the simulated vehicle is outside the preset driving area at the moment it disappears;
if so, destroying the simulated vehicle;
if not, recalculating the offset coordinates according to the driving speed established in the previous frame image;
judging whether a new simulated vehicle has been generated in the current frame whose distance from the offset coordinates of the simulated vehicle is no more than 1 m;
if so, assigning the data of the new simulated vehicle to the simulated vehicle, continuing to read the data in its array, continuing to generate the simulated vehicle according to the offset coordinates in that array, and destroying the new simulated vehicle;
if not, continuing to generate the simulated vehicle at the recalculated offset coordinates.
9. The visual simulation method for real-time traffic data according to claim 1, wherein, when the traversal reaches the last frame of a data table, entering the next data table, starting the traversal with the (N+1)-th frame of that data table as the first frame, and continuing to map vehicles into the digital twin city, comprises:
when the traversal reaches the starting frame number of an array in the next data table, reading the vehicle information in the array, and judging whether the license plate number of the vehicle is the same as that of a simulated vehicle from the previous data table;
if different, generating a corresponding simulated vehicle and passing the array to it;
if the same, treating them as the same vehicle, assigning the array of the vehicle to the existing simulated vehicle, and continuing to generate that simulated vehicle in the digital twin city.
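Claims 8 and 9 are both continuity checks: within a table, a vehicle whose array ends inside the driving area inherits the array of a new vehicle spawned within 1 m; across tables, an identical license plate means the same vehicle keeps running. A minimal sketch with an assumed dictionary layout (the keys `xy`, `plate`, `array` are illustrative names, not from the patent):

```python
def _dist(a, b):
    """Euclidean distance between two (x, y) offset coordinates in meters."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def handover_target(sim_xy, new_vehicles, max_gap_m=1.0):
    """Claim 8: find a newly generated vehicle within 1 m of the vehicle
    whose array just ended; its data is inherited and its own entity
    is destroyed by the caller."""
    for nv in new_vehicles:
        if _dist(nv["xy"], sim_xy) <= max_gap_m:
            return nv
    return None

def continue_or_spawn(active_by_plate, array):
    """Claim 9: across a table boundary, a matching license plate means the
    same physical vehicle, so the existing entity receives the new array
    instead of a duplicate being spawned."""
    plate = array["plate"]
    if plate in active_by_plate:
        active_by_plate[plate]["array"] = array  # hand over the new array
        return active_by_plate[plate], False     # reused existing entity
    vehicle = {"plate": plate, "array": array}
    active_by_plate[plate] = vehicle
    return vehicle, True                         # spawned a new entity
```

Both checks together prevent the same physical car from flickering out and back in at a segment boundary, which is what gives the stitched simulation its continuity.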
10. A visual simulation system for real-time traffic data, comprising:
a modeling unit, which builds a digital twin city based on a real city;
a receiving unit, which receives in real time the real-time road surveillance video stream corresponding to the digital twin city, and divides a preset driving area in the digital twin city according to the surveillance area of the video stream;
a video stream dividing unit, which divides the surveillance video stream in chronological order into a number of videos of equal time periods, any two adjacent time periods overlapping by N frames of images;
a storage unit, which acquires the vehicle information in the video of each time period and stores the vehicle information of each vehicle in separate arrays, each array recording at least the starting frame number and the ending frame number of the vehicle, and which stores all arrays of one time period in one data table, yielding a number of data tables;
a video generation unit, which traverses the data tables in chronological order and, upon reaching a data table, starts the traversal with the (N+1)-th frame as the first frame, reads the vehicle information in that data table, and maps each vehicle into the digital twin city;
a video connection unit, which, when the traversal reaches the last frame of a data table, enters the next data table, starts the traversal with the (N+1)-th frame of that data table as the first frame, and continues to map vehicles into the digital twin city, generating a continuous real-time traffic simulation video.
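Why the (N+1)-th frame is used as the first frame of every table: adjacent time periods overlap by N frames, so skipping the first N frames of each table drops exactly the duplicated frames while leaving no gap in the stitched sequence. A sketch under assumed segment parameters (frame indices only; trailing partial segments are ignored for simplicity):

```python
def segment_bounds(total_frames, seg_len, overlap_n):
    """Cut the stream into seg_len-frame periods where each period repeats
    the last overlap_n frames of its predecessor (the claimed N-frame
    overlap between adjacent time periods)."""
    bounds, start = [], 0
    while start + seg_len <= total_frames:
        bounds.append((start, start + seg_len))
        start += seg_len - overlap_n
    return bounds

def playback_order(bounds, overlap_n):
    """Traverse every table starting from frame N+1 (index overlap_n):
    the overlapped frames are never replayed, yet no frame is skipped."""
    out = []
    for s, e in bounds:
        out.extend(range(s + overlap_n, e))
    return out

bounds = segment_bounds(total_frames=11, seg_len=5, overlap_n=2)
# bounds == [(0, 5), (3, 8), (6, 11)]; playback covers frames 2..10 once each
```

The only frames never played are the first N of the very first table, a one-time startup transient, which matches the document's claim of reduced display delay without data interruption.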
CN202110427414.2A | 2021-04-21 | 2021-04-21 | A visualization simulation method and system for real-time traffic data | Active | CN112991742B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110427414.2A (CN112991742B) | 2021-04-21 | 2021-04-21 | A visualization simulation method and system for real-time traffic data

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110427414.2A (CN112991742B) | 2021-04-21 | 2021-04-21 | A visualization simulation method and system for real-time traffic data

Publications (2)

Publication Number | Publication Date
CN112991742A | 2021-06-18
CN112991742B | 2021-08-20

Family

ID=76341408

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110427414.2A (Active, granted as CN112991742B) | A visualization simulation method and system for real-time traffic data | 2021-04-21 | 2021-04-21

Country Status (1)

Country | Link
CN (1) | CN112991742B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114529875A (en)* | 2022-04-24 | 2022-05-24 | 浙江这里飞科技有限公司 | Method and device for detecting illegal parking vehicle, electronic equipment and storage medium
CN114780666A (en)* | 2022-06-23 | 2022-07-22 | 四川见山科技有限责任公司 | Road label optimization method and system in digital twin city
CN114782588A (en)* | 2022-06-23 | 2022-07-22 | 四川见山科技有限责任公司 | Real-time drawing method and system for road names in digital twin city
CN114943940A (en)* | 2022-07-26 | 2022-08-26 | 山东金宇信息科技集团有限公司 | Method, equipment and storage medium for visually monitoring vehicles in tunnel
CN114966695A (en)* | 2022-05-11 | 2022-08-30 | 南京慧尔视软件科技有限公司 | Digital twin image processing method, device, equipment and medium of radar
CN115841757A (en)* | 2022-11-09 | 2023-03-24 | 智道网联科技(北京)有限公司 | Data processing method and device, electronic equipment and storage medium
CN116543586A (en)* | 2023-07-03 | 2023-08-04 | 深圳市视想科技有限公司 | Intelligent public transportation information display method and display equipment based on digital twinning
CN117579761A (en)* | 2024-01-16 | 2024-02-20 | 苏州映赛智能科技有限公司 | Error display control method of road digital twin model
CN119416984A (en)* | 2025-01-06 | 2025-02-11 | 南京星火互联信息科技有限公司 | Visualization method and system based on digital twin

Citations (48)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2006295702A (en)* | 2005-04-13 | 2006-10-26 | Canon Inc | Image processing apparatus and method, and image processing program and storage medium
CN101025862A (en)* | 2007-02-12 | 2007-08-29 | 吉林大学 | Video based mixed traffic flow parameter detecting method
CN101136140A (en)* | 2006-08-29 | 2008-03-05 | 亿阳信通股份有限公司 | Roads traffic speed calculating and matching method and system
US20090295818A1 (en)* | 2008-05-28 | 2009-12-03 | Hailin Jin | Method and Apparatus for Rendering Images With and Without Radially Symmetric Distortions
CN101866429A (en)* | 2010-06-01 | 2010-10-20 | 中国科学院计算技术研究所 | Training method and recognition method for multi-moving target action behavior recognition
CN101924953A (en)* | 2010-09-03 | 2010-12-22 | 南京农业大学 | An easy matching method based on fiducial points
US20110024611A1 (en)* | 2009-07-29 | 2011-02-03 | Ut-Battelle, LLC | Calibration method for video and radiation imagers
CN102307320A (en)* | 2011-08-11 | 2012-01-04 | 江苏亿通高科技股份有限公司 | Piracy tracing watermarking method applicable to streaming media environment
CN102636378A (en)* | 2012-04-10 | 2012-08-15 | 无锡国盛精密模具有限公司 | Machine vision orientation-based biochip microarrayer and positioning method thereof
WO2013005244A1 (en)* | 2011-07-01 | 2013-01-10 | 株式会社ベイビッグ | Three-dimensional relative coordinate measuring device and method
CN103092924A (en)* | 2012-12-28 | 2013-05-08 | 深圳先进技术研究院 | Real-time data driven traffic data three-dimensional visualized analysis system and method
CN103577412A (en)* | 2012-07-20 | 2014-02-12 | 永泰软件有限公司 | High-definition video based traffic incident frame tagging method and system
CN105354548A (en)* | 2015-10-30 | 2016-02-24 | 武汉大学 | Surveillance video pedestrian re-recognition method based on ImageNet retrieval
CN106204560A (en)* | 2016-07-02 | 2016-12-07 | 上海大学 | Automatic Calibration Method of Colony Picker
CN106604059A (en)* | 2016-12-28 | 2017-04-26 | 深圳Tcl新技术有限公司 | Data delivering method and system
CN107526504A (en)* | 2017-08-10 | 2017-12-29 | 广州酷狗计算机科技有限公司 | Method and device, terminal and the storage medium that image is shown
US20180203959A1 (en)* | 2017-01-13 | 2018-07-19 | Fedem Technology AS | Virtual sensor for virtual asset
CN108694237A (en)* | 2018-05-11 | 2018-10-23 | 东峡大通(北京)管理咨询有限公司 | Handle method, equipment, visualization system and the user terminal of vehicle position data
CN108805073A (en)* | 2018-06-06 | 2018-11-13 | 合肥嘉仕诚能源科技有限公司 | A kind of safety monitoring dynamic object optimization track lock method and system
US20190382003A1 (en)* | 2018-06-13 | 2019-12-19 | Toyota Jidosha Kabushiki Kaisha | Collision avoidance for a connected vehicle based on a digital behavioral twin
CN109063587A (en)* | 2018-07-11 | 2018-12-21 | 北京大米科技有限公司 | Data processing method, storage medium and electronic equipment
CN109359507A (en)* | 2018-08-24 | 2019-02-19 | 南京理工大学 | A rapid construction method of workshop personnel digital twin model
CN109147341A (en)* | 2018-09-14 | 2019-01-04 | 杭州数梦工场科技有限公司 | Violation vehicle detection method and device
CN111275960A (en)* | 2018-12-05 | 2020-06-12 | 杭州海康威视系统技术有限公司 | Traffic road condition analysis method, system and camera
CN109615862A (en)* | 2018-12-29 | 2019-04-12 | 南京市城市与交通规划设计研究院股份有限公司 | Road vehicle movement of traffic state parameter dynamic acquisition method and device
CN109887027A (en)* | 2019-01-03 | 2019-06-14 | 杭州电子科技大学 | An Image-Based Mobile Robot Localization Method
JP2020113106A (en)* | 2019-01-15 | 2020-07-27 | 株式会社デンソー | Stolen vehicle tracking system
US20200234590A1 (en)* | 2019-01-18 | 2020-07-23 | Johnson Controls Technology Company | Smart parking lot system
US20200250944A1 (en)* | 2019-02-02 | 2020-08-06 | Delta Thermal, Inc. | System and Methods For Computerized Safety and Security
CN110033479A (en)* | 2019-04-15 | 2019-07-19 | 四川九洲视讯科技有限责任公司 | Traffic flow parameter real-time detection method based on Traffic Surveillance Video
CN110111565A (en)* | 2019-04-18 | 2019-08-09 | 中国电子科技网络信息安全有限公司 | A kind of people's vehicle flowrate System and method for flowed down based on real-time video
CN110060524A (en)* | 2019-04-30 | 2019-07-26 | 广东小天才科技有限公司 | Robot-assisted reading method and reading robot
CN112351190A (en)* | 2019-08-07 | 2021-02-09 | 福特全球技术公司 | Digital twin monitoring system and method
CN110753218A (en)* | 2019-08-21 | 2020-02-04 | 佳都新太科技股份有限公司 | Digital twinning system and method and computer equipment
CN110398233A (en)* | 2019-09-04 | 2019-11-01 | 浙江中光新能源科技有限公司 | A kind of heliostat field coordinate mapping system and method based on unmanned plane
CN110826415A (en)* | 2019-10-11 | 2020-02-21 | 上海眼控科技股份有限公司 | Method and device for re-identifying vehicles in scene image
CN110992425A (en)* | 2019-12-11 | 2020-04-10 | 北京大豪科技股份有限公司 | Image calibration method and device, electronic equipment and storage medium
CN111400405A (en)* | 2020-03-30 | 2020-07-10 | 兰州交通大学 | Monitoring video data parallel processing system and method based on distribution
CN111505916A (en)* | 2020-05-25 | 2020-08-07 | 江苏迪盛智能科技有限公司 | Laser direct imaging apparatus
CN111815710A (en)* | 2020-05-28 | 2020-10-23 | 北京易航远智科技有限公司 | Automatic calibration method for fisheye camera
CN111565303A (en)* | 2020-05-29 | 2020-08-21 | 深圳市易链信息技术有限公司 | Video monitoring method, system and readable storage medium based on fog calculation and deep learning
CN111724485A (en)* | 2020-06-11 | 2020-09-29 | 浙江商汤科技开发有限公司 | Method, device, electronic equipment and storage medium for realizing virtual-real fusion
CN111565286A (en)* | 2020-07-14 | 2020-08-21 | 之江实验室 | Video static background synthesis method and device, electronic equipment and storage medium
CN111737887A (en)* | 2020-08-21 | 2020-10-02 | 北京理工大学 | A Virtual Stage Based on Parallel Simulation
CN112163348A (en)* | 2020-10-24 | 2021-01-01 | 腾讯科技(深圳)有限公司 | Detection method, device, simulation method, vehicle and medium for abnormal road surface
KR102217870B1 (en)* | 2020-10-28 | 2021-02-19 | 주식회사 예향엔지니어링 | Traffic managing system using digital twin technology
CN112378953A (en)* | 2020-11-25 | 2021-02-19 | 清华大学 | Observation device for liquid drops colliding with metal bottom plate and application of observation device
CN112529856A (en)* | 2020-11-30 | 2021-03-19 | 华为技术有限公司 | Method for determining the position of an operating object, robot and automation system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIEVEN RAES et al.: "DUET: A Framework for Building Secure and Trusted Digital Twins of Smart Cities", IEEE Internet Computing (Early Access) *
WLADIMIR HOFMANN et al.: "Implementation of an IoT- and Cloud-based Digital Twin for Real-Time Decision Support in Port Operations", IFAC-PapersOnLine *
WU Kewei: "Traffic Digital Twin Applications Based on Intelligent Video Analysis Technology", Proceedings of the 15th China Intelligent Transportation Annual Conference (2) *
HU Danhua et al.: "Moving Target Extraction in Substations Based on Surveillance Video", Jiangxi Science *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114529875A (en)* | 2022-04-24 | 2022-05-24 | 浙江这里飞科技有限公司 | Method and device for detecting illegal parking vehicle, electronic equipment and storage medium
CN114966695B (en)* | 2022-05-11 | 2023-11-14 | 南京慧尔视软件科技有限公司 | Digital twin image processing method, device, equipment and medium for radar
CN114966695A (en)* | 2022-05-11 | 2022-08-30 | 南京慧尔视软件科技有限公司 | Digital twin image processing method, device, equipment and medium of radar
CN114780666A (en)* | 2022-06-23 | 2022-07-22 | 四川见山科技有限责任公司 | Road label optimization method and system in digital twin city
CN114782588A (en)* | 2022-06-23 | 2022-07-22 | 四川见山科技有限责任公司 | Real-time drawing method and system for road names in digital twin city
CN114782588B (en)* | 2022-06-23 | 2022-09-27 | 四川见山科技有限责任公司 | A real-time drawing method and system for road names in a digital twin city
CN114943940A (en)* | 2022-07-26 | 2022-08-26 | 山东金宇信息科技集团有限公司 | Method, equipment and storage medium for visually monitoring vehicles in tunnel
CN115841757A (en)* | 2022-11-09 | 2023-03-24 | 智道网联科技(北京)有限公司 | Data processing method and device, electronic equipment and storage medium
CN116543586A (en)* | 2023-07-03 | 2023-08-04 | 深圳市视想科技有限公司 | Intelligent public transportation information display method and display equipment based on digital twinning
CN116543586B (en)* | 2023-07-03 | 2023-09-08 | 深圳市视想科技有限公司 | Intelligent public transportation information display method and display equipment based on digital twinning
CN117579761A (en)* | 2024-01-16 | 2024-02-20 | 苏州映赛智能科技有限公司 | Error display control method of road digital twin model
CN117579761B (en)* | 2024-01-16 | 2024-03-26 | 苏州映赛智能科技有限公司 | Error display control method of road digital twin model
CN119416984A (en)* | 2025-01-06 | 2025-02-11 | 南京星火互联信息科技有限公司 | Visualization method and system based on digital twin

Also Published As

Publication numberPublication date
CN112991742B (en) | 2021-08-20

Similar Documents

Publication | Title
CN112991742B (en) | A visualization simulation method and system for real-time traffic data
CN112990114B (en) | Traffic data visualization simulation method and system based on AI identification
US10621495B1 (en) | Method for training and refining an artificial intelligence
US11941887B2 (en) | Scenario recreation through object detection and 3D visualization in a multi-sensor environment
CN109064755B (en) | Path identification method based on four-dimensional real-scene traffic simulation road condition perception management system
US11468646B1 (en) | Method for rendering 2D and 3D data within a 3D virtual environment
JP7278414B2 (en) | Digital restoration method, apparatus and system for traffic roads
CN113033030A (en) | Congestion simulation method and system based on real road scene
US12148219B2 (en) | Method, apparatus, and computing device for lane recognition
CN111967393A (en) | Helmet wearing detection method based on improved YOLOv4
CA3179005A1 (en) | Artificial intelligence and computer vision powered driving-performance assessment
CN104506804A (en) | Device and method for monitoring abnormal behavior of motor vehicle on expressway
KR20210052031A (en) | Deep Learning based Traffic Flow Analysis Method and System
CN108898839B (en) | Real-time dynamic traffic information data system and updating method thereof
CN105046948A (en) | System and method of monitoring illegal traffic parking in yellow grid line area
JP2019154027A (en) | Method and device for setting parameter for video monitoring system, and video monitoring system
CN118609356B (en) | Method for real-time monitoring and predicting traffic jam induced by specific sudden aggregation event
CN113177508B (en) | Method, device and equipment for processing driving information
CN114913470B (en) | Event detection method and device
CN204884166U (en) | Monitoring device for illegal parking in traffic no-stopping areas
JP6405606B2 (en) | Image processing apparatus, image processing method, and image processing program
CN115272909B (en) | An unmanned driving test monitoring system
WO2023022136A1 (en) | Method, apparatus and system for adaptively controlling traffic signals
CN119682739B (en) | Driving information processing method and device based on smart helmet
TW201322199A (en) | Vehicle-locating system

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
PE01 | Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Visual Simulation Method and System for Real Time Traffic Data

Effective date of registration: 20231019

Granted publication date: 20210820

Pledgee: Chengdu Rural Commercial Bank Co.,Ltd. Tianfu New Area Branch

Pledgor: SICHUAN JIANSHAN TECHNOLOGY CO.,LTD.

Registration number: Y2023510000232

PC01 | Cancellation of the registration of the contract for pledge of patent right

Granted publication date: 20210820

Pledgee: Chengdu Rural Commercial Bank Co.,Ltd. Tianfu New Area Branch

Pledgor: SICHUAN JIANSHAN TECHNOLOGY CO.,LTD.

Registration number: Y2023510000232

PE01 | Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A visualization simulation method and system for real-time traffic data

Granted publication date: 20210820

Pledgee: Chengdu Rural Commercial Bank Co.,Ltd. Tianfu New Area Branch

Pledgor: SICHUAN JIANSHAN TECHNOLOGY CO.,LTD.

Registration number: Y2024980059224

