Disclosure of Invention
In view of the foregoing, it is desirable to provide a vehicle behavior analysis method, apparatus, computer device, and storage medium, which solve at least one of the problems in the prior art.
In a first aspect, an embodiment of the present application provides a vehicle behavior analysis method, including the steps of:
acquiring a target scene image and acquiring marker image data in the target scene image;
constructing a mapping model from a pixel coordinate system to a world coordinate system based on the marker image data;
acquiring corner features of a plurality of parking spaces, and constructing a parking lot map based on the corner features and the mapping model;
acquiring video data to be detected, and when detecting that the video data to be detected contains at least one target vehicle, determining the position coordinates of the target vehicle in each video frame;
sequentially mapping the position coordinates into the corresponding parking lot map of each frame through the mapping model to generate a bird's eye view;
and determining position change state information of the target vehicle in the parking lot map based on the bird's eye view, and analyzing the behavior of the target vehicle based on the position change state information to obtain a behavior analysis result.
In an embodiment, the constructing a mapping model from a pixel coordinate system to a world coordinate system based on the marker image data includes:
calibrating the target shooting device based on the marker image data;
obtaining output parameters of the calibrated target shooting device, wherein the output parameters at least comprise an extrinsic parameter matrix, an intrinsic parameter matrix, and distortion parameters;
and constructing a mapping model from the pixel coordinate system to the world coordinate system based on the extrinsic parameter matrix, the intrinsic parameter matrix, and the distortion parameters.
In an embodiment, after the sequentially mapping the position coordinates into the corresponding parking lot map of each frame, the method further includes:
determining whether the target vehicle is in a stay state;
If not, acquiring a history matching result between the vehicle characteristics of the target vehicle and the vehicle characteristics of the tracked vehicle;
and when the history matching result meets a preset matching condition, acquiring vehicle information, and adding the vehicle information to the corresponding parking lot map of each frame.
In an embodiment, the determining whether the target vehicle is in a stay state includes:
acquiring vehicle features of the target vehicle and a pre-stored tracked vehicle record, wherein the tracked vehicle record comprises vehicle features of at least one tracked vehicle;
matching vehicle features of the target vehicle with vehicle features of the tracked vehicle;
when the matching is consistent, acquiring the latest vehicle tracking parameters of the target vehicle, wherein the latest vehicle tracking parameters comprise vehicle speed;
determining whether the vehicle speed is greater than a preset speed threshold;
if not, determining that the target vehicle is in a stay state.
In an embodiment, the history matching result includes a number of successful matches, and the acquiring vehicle information when the history matching result meets a preset matching condition, and adding the vehicle information to the parking lot map corresponding to each frame, includes:
and when the number of consecutive successful matches between the vehicle features of the target vehicle and the vehicle features of the tracked vehicle is greater than a first preset count threshold, determining that the target vehicle and the tracked vehicle are the same vehicle, and adding a vehicle information label to the corresponding parking lot map of each frame.
In an embodiment, after the vehicle information label is added to the corresponding parking lot map of each frame, the method further includes:
and when the number of successful matches between the vehicle features of the target vehicle and the vehicle features of the tracked vehicle is greater than a second preset count threshold, removing the vehicle information recorded before the most recent second preset count + 1 frames, wherein the second preset count threshold is greater than the first preset count threshold.
In an embodiment, the analyzing the behavior of the target vehicle based on the location change status information to obtain a behavior analysis result includes:
determining a running track and a current pose of the target vehicle based on the position change state information;
determining whether the target vehicle has a parking violation based on the driving track and the current pose of the target vehicle;
if yes, outputting alarm information.
In a second aspect, there is provided a vehicle behavior analysis apparatus including:
the marker image data acquisition unit is used for acquiring a target scene image and acquiring marker image data in the target scene image;
a mapping model construction unit for constructing a mapping model from a pixel coordinate system to a world coordinate system based on the marker image data;
the map construction unit is used for collecting corner features of a plurality of parking spaces and constructing a parking lot map based on the corner features and the mapping model;
the coordinate information determining unit is used for acquiring video data to be detected, and determining the position coordinates of the target vehicle in each video frame when at least one target vehicle exists in the video data to be detected;
the bird's eye view generating unit is used for sequentially mapping the position coordinates into the corresponding parking lot map of each frame through the mapping model so as to generate a bird's eye view;
and the behavior analysis unit is used for determining the position change state information of the target vehicle in the parking lot map based on the bird's eye view, and analyzing the behavior of the target vehicle based on the position change state information to obtain a behavior analysis result.
In a third aspect, there is provided a computer device comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, implements the steps of the vehicle behavior analysis method described above.
In a fourth aspect, a readable storage medium is provided, storing computer readable instructions that, when executed by a processor, implement the steps of the vehicle behavior analysis method described above.
According to the vehicle behavior analysis method, apparatus, computer device, and storage medium, the method includes the following steps: acquiring a target scene image and acquiring marker image data in the target scene image; constructing a mapping model from a pixel coordinate system to a world coordinate system based on the marker image data; acquiring corner features of a plurality of parking spaces, and constructing a parking lot map based on the corner features and the mapping model; acquiring video data to be detected, and when detecting that the video data to be detected contains at least one target vehicle, determining the position coordinates of the target vehicle in each video frame; sequentially mapping the position coordinates into the corresponding parking lot map of each frame through the mapping model to generate a bird's eye view; and determining position change state information of the target vehicle in the parking lot map based on the bird's eye view, and analyzing the behavior of the target vehicle based on the position change state information to obtain a behavior analysis result. According to the embodiment of the application, the shooting device is calibrated, the mapping model between the pixel coordinate system and the world coordinate system is constructed based on the calibrated shooting device, and the coordinates of the vehicle are mapped into the parking lot map based on the mapping model to obtain the bird's eye view. The position state changes of the vehicles and the idle state changes of the parking spaces can be obtained intuitively from the bird's eye view, so that the behavior of the vehicles can be analyzed, and the numbers of parking spaces and vehicles can be accurately counted. No linkage of multiple cameras is needed, so the product cost can be reduced and the accuracy of parking space management and control can be improved.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort are intended to be within the scope of the invention.
In one embodiment, as shown in fig. 1, a flow for implementing a vehicle behavior analysis method is provided, including the following steps:
in step S110, a target scene image is acquired, and marker image data located in the target scene image is acquired;
In the embodiment of the application, the marker may be a calibration plate, and the calibration plate may bear calibration patterns, such as two-dimensional codes, circular rings, and checkerboards, which serve as feature points, namely sampling points, for subsequent image calibration. It can be understood that the calibration plate may be laid on a lane, and the target lane scene image may be acquired through a preset shooting device.
The shooting device may be a device such as a monocular camera, a video camera, etc. that can be used to collect the lane scene image, where the lane scene image may include the marker image.
It will be appreciated that the shooting device may be positioned above the lane, for example above the centre of the marker, at a position where the lane scene image containing the marker can be captured.
In step S120, a mapping model from the pixel coordinate system to the world coordinate system is constructed based on the marker image data;
In the embodiment of the application, based on the marker image data, the output parameters of the shooting device are calibrated through the calib3d and aruco modules of the OpenCV library to realize distortion correction of the marker image. A mapping model from the pixel coordinate system to the world coordinate system can then be constructed based on the calibrated output parameters, wherein the mapping model includes, but is not limited to, a conversion equation, a lookup table, a matrix model, and the like. Optionally, a pixel coordinate system and a world coordinate system can be constructed respectively, first coordinate information of the distortion-corrected feature points of the marker in the pixel coordinate system and second coordinate information of the feature points in the world coordinate system can be determined, and a conversion relation between the first coordinate information and the second coordinate information can be determined to serve as the mapping model from the pixel coordinate system to the world coordinate system.
In step S130, collecting corner features of a plurality of parking spaces, and constructing a parking lot map based on the corner features and a mapping model;
in the embodiment of the application, the corner feature refers to an intersection point between any two boundaries of the parking space.
As an implementation manner of the present application, an image capturing device may be installed around each parking space, and the image capturing device may be a fisheye camera. For example, one fisheye camera may be installed at each of the front left, front right, rear left, and rear right of the parking space, and images may be captured by the four fisheye cameras respectively, so as to identify the corner features of the parking space.
As an implementation manner of the application, an image of the whole lane can be acquired by the shooting device, and the image can include a plurality of parking spaces; each parking space in the image is identified by a parking space recognition model trained in advance, so as to identify the corner features of each parking space. The parking space recognition model may be a deep learning model, such as a target detection model (Regions with CNN features, RCNN). Alternatively, a corner detection algorithm, such as the Harris corner detection algorithm, can be adopted for corner feature extraction.
In the embodiment of the application, after the corner features of the parking space are obtained, the map building and the parking space positioning can be performed based on the corner features and the mapping model to construct the parking lot map, wherein the parking lot map can comprise lane information, parking area information and the like.
In step S140, acquiring video data to be detected, and when at least one target vehicle is detected in the video data to be detected, determining the position coordinates of the target vehicle in each video frame;
In the embodiment of the application, the shooting device can acquire video data of a target area of the parking lot in real time to serve as the video data to be detected, and the video data to be detected can be analyzed with a pre-constructed 3D target detection algorithm to determine whether a target vehicle exists; if a target vehicle exists, a pre-trained coordinate prediction model can be adopted to predict the position coordinates of the target vehicle.
The 3D target detection algorithm may use a 3D detection algorithm such as Deep3DBox, PointRCNN, or PointPillars.
In the embodiment of the application, the coordinate prediction model may include a vehicle detection model and a 3D frame regression module. Specifically, the video data to be detected is segmented into multiple video frame images; a target vehicle image in the video frame images is input into the pre-trained vehicle detection model to obtain a 2D detection frame of the target vehicle; and the target vehicle image area in the 2D detection frame is then input into the pre-trained 3D frame regression model to obtain the coordinates of the 8 corner points of the 3D detection frame, from which the coordinate information of the chassis area is obtained.
In another embodiment, the coordinate values of the 3D detection frame may be predicted according to the Deep3DBox algorithm. The Deep3DBox algorithm may take the 2D detection frame of the target vehicle predicted by the pre-trained vehicle detection model as input, predict the size and orientation of the 3D frame, and then estimate the actual pixel coordinate values of the 8 corner points of the 3D frame by using the least squares method under a combination of geometric constraints, thereby obtaining the required coordinate values of the four corner points of the chassis. The real-world coordinate values of the 3D detection frame of the vehicle can be obtained according to the Deep3DBox algorithm. Specifically, the Deep3DBox algorithm predicts two attributes: the residual size of the length, width, and height of the 3D frame, and the orientation angle of the 3D frame. The real size of the 3D frame is then calculated from the residual size and the statistical size of the target category, and the 3D corner coordinates under the world coordinate system are converted directly from the length, width, and height of the target to obtain relative coordinates (with the origin of the coordinate system at the center point of the chassis); the representation of the pixel coordinates corresponding to the 3D corner coordinates is then obtained through the mapping model. To determine the position of the center point of the target, the Deep3DBox algorithm constructs equations from the assumption that the 2D detection frame tightly encloses the 3D detection frame: the representation of the 3D frame pixel coordinates is constrained by the assumed geometric conditions, an over-determined system of equations is thereby established, and finally the least squares method is adopted to solve the over-determined system and estimate the pixel coordinate value of the center point of the 3D frame.
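The final numerical step above is an ordinary least-squares solve of an over-determined system. The sketch below uses a toy constraint matrix purely to illustrate that step; the actual matrix would be built from the 2D/3D box geometric constraints:

```python
import numpy as np

# Toy over-determined system A x = b standing in for the geometric
# constraints (4 equations, 2 unknowns); the real system constrains the
# 3D-box center so the projected 3D box fits the 2D detection frame.
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
b = np.array([2.0, 3.0, 5.0, -1.0])

# Least-squares estimate of the unknowns (here the system is consistent,
# so the estimate recovers the exact solution)
x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
```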
In another embodiment, the coordinate values of the 3D detection frame may be relative coordinate values. The relative coordinate values of the 3D detection frame can be obtained from the actual coordinates of the upper left point of the 2D detection frame, the width and height of the 2D detection frame, and the actual coordinate values of the 3D detection frame, which are obtained by the vehicle detection recognition model. Specifically, for example, if the actual coordinates of the upper left point of the 2D detection frame are (a0, b0), the width of the 2D detection frame is W, the height of the 2D detection frame is H, the actual coordinates of the upper left point of the lower bottom surface of the 3D detection frame are (a1, b1), and the actual coordinates of the lower left point of the lower bottom surface of the 3D detection frame are (a2, b2), then the relative coordinates of the upper left point of the lower bottom surface of the 3D detection frame are ((a1-a0)/W, (b1-b0)/H), and the relative coordinates of the lower left point of the lower bottom surface of the 3D detection frame are ((a2-a0)/W, (b2-b0)/H).
Specifically, for example, if the relative coordinates of the upper left point of the lower bottom surface of the 3D detection frame, ((a1-a0)/W, (b1-b0)/H), and the relative coordinates of the lower left point, ((a2-a0)/W, (b2-b0)/H), are obtained by directly inputting the vehicle image region in the 2D detection frame output from the vehicle detection recognition model into the 3D frame regression model, then the actual coordinates of the other points of the lower bottom surface of the 3D detection frame can be calculated back from the actual coordinates of the upper left point of the 2D detection frame (a0, b0), the width W and height H of the 2D detection frame, the actual coordinates of the upper left point of the lower bottom surface of the 3D detection frame (a1, b1), the actual coordinates of the lower left point of the lower bottom surface of the 3D detection frame (a2, b2), and the like. The region of the lower bottom surface of the 3D detection frame, namely the chassis region, is then obtained from the actual coordinates of these points, and the coordinate information of each corner point of the chassis region can be obtained based on the actual coordinates of each point of the 3D detection frame.
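The relative-coordinate conversion described above amounts to a simple normalization by the 2D box, and it is invertible; a sketch (function names are illustrative):

```python
def to_relative(corner, top_left, w, h):
    """Absolute bottom-face corner -> coordinates relative to the 2D box.

    corner = (a, b) absolute coordinates; top_left = (a0, b0) of the 2D box;
    w, h = width and height of the 2D box.
    """
    a0, b0 = top_left
    a, b = corner
    return ((a - a0) / w, (b - b0) / h)

def to_absolute(rel, top_left, w, h):
    """Inverse mapping: recover the absolute corner from its relative form."""
    a0, b0 = top_left
    ra, rb = rel
    return (a0 + ra * w, b0 + rb * h)
```

The inverse mapping is what the "calculated back" step above performs for the remaining bottom-face points.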
In step S150, sequentially mapping the position coordinates into the corresponding parking lot map of each frame through the mapping model to generate a bird's eye view;
In the embodiment of the application, the parking lot map can include multiple frames of parking lot map images. After the position coordinates of the target vehicle in a video frame are acquired, they can be mapped into the parking lot map image of the corresponding frame based on the pre-built mapping model. This process can be dynamic: each time the position coordinates of the target vehicle in a video frame are acquired, they are mapped into one frame of the parking lot map, thus obtaining multiple mapped frames of the parking lot map. The world coordinates under the parking lot map are then converted onto a canvas of a user-defined size; since the world coordinates of the vehicle are considered to lie only on the ground plane, Z=0 and the Z component need not be considered. The world coordinates are converted into the canvas pixel coordinate system through the transform_point function of the Matplotlib library, and finally each frame of canvas image is stitched into a bird's eye view video. Alternatively, each mapped frame of the parking lot map can be stitched into a bird's eye view through an IPM (Inverse Perspective Mapping) algorithm.
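The world-to-canvas conversion can be sketched as a plain linear scaling; this is a simplification standing in for the Matplotlib transform mentioned above, and the extent and canvas-size values are illustrative:

```python
def world_to_canvas(points, extent, size):
    """Map ground-plane world coordinates (Z assumed 0) onto a canvas.

    extent = (x_min, y_min, x_max, y_max) of the parking lot map in meters;
    size = (width_px, height_px) of the canvas.
    """
    x_min, y_min, x_max, y_max = extent
    w, h = size
    sx = w / (x_max - x_min)
    sy = h / (y_max - y_min)
    # Flip the Y axis: the image origin sits at the top-left corner
    return [((x - x_min) * sx, h - (y - y_min) * sy) for x, y in points]
```

Drawing each frame's converted vehicle positions onto its canvas and concatenating the canvases yields the bird's eye view video.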
In step S160, position change state information of the target vehicle in the parking lot map is determined based on the bird's eye view, and the behavior of the target vehicle is analyzed based on the position change state information to obtain a behavior analysis result.
In the embodiment of the application, after the bird's eye view of the parking lot map is constructed, the position of the target vehicle in each frame of image in the bird's eye view can be determined, and by connecting these positions, the position change state of the target vehicle can be obtained, from which the driving track of the target vehicle can be derived. Based on the driving track, it can be determined whether the target vehicle is staying, entering a parking space, leaving a parking space, parking illegally, or the like, so as to execute corresponding processing. For example, the number of idle parking spaces can be updated when a vehicle enters or leaves a parking space; if illegal parking or staying occurs, on-site workers can be notified to handle it, so as to avoid road congestion or interference with parking in other parking spaces.
The vehicle behavior analysis method includes the following steps: acquiring a target scene image and acquiring marker image data in the target scene image; constructing a mapping model from a pixel coordinate system to a world coordinate system based on the marker image data; acquiring corner features of a plurality of parking spaces, and constructing a parking lot map based on the corner features and the mapping model; acquiring video data to be detected, and when at least one target vehicle exists in the video data to be detected, determining the position coordinates of the target vehicle in each video frame; sequentially mapping the position coordinates into the corresponding parking lot map of each frame through the mapping model to generate a bird's eye view; and determining the position change state information of the target vehicle in the parking lot map based on the bird's eye view, and analyzing the behavior of the target vehicle based on the position change state information to obtain a behavior analysis result. According to the embodiment of the application, the shooting device is calibrated, the mapping model between the pixel coordinate system and the world coordinate system is constructed based on the calibrated shooting device, and the coordinates of the vehicle are mapped into the parking lot map based on the mapping model to obtain the bird's eye view. The position state changes of the vehicles and the idle state changes of the parking spaces can be obtained intuitively from the bird's eye view, so that the behavior of the vehicles can be analyzed, and the numbers of parking spaces and vehicles can be accurately counted. No linkage of multiple cameras is needed, so the product cost can be reduced and the accuracy of parking space management and control can be improved.
In one embodiment of the present application, constructing a mapping model from a pixel coordinate system to a world coordinate system based on marker image data includes:
calibrating the target shooting device based on the marker image data;
obtaining output parameters of the calibrated target shooting device, wherein the output parameters at least comprise an extrinsic parameter matrix, an intrinsic parameter matrix, and distortion parameters;
and constructing a mapping model from the pixel coordinate system to the world coordinate system based on the extrinsic parameter matrix, the intrinsic parameter matrix, and the distortion parameters.
Optionally, after the marker image data is acquired, a plurality of feature points of the marker pattern in the marker image data can be extracted by calling the findCirclesGrid function in the OpenCV graphics library to obtain a feature point set. Based on the feature point set, the calibrateCamera function in the OpenCV graphics library can be called to optimize the feature points in the feature point set and obtain the calibrated camera output parameters. The output parameters at least include an extrinsic parameter matrix, an intrinsic parameter matrix, and distortion parameters. The marker image is subjected to distortion correction through the extrinsic parameter matrix, the intrinsic parameter matrix, and the distortion parameters; the coordinates of each feature point after distortion correction are obtained as the image coordinates of the feature point in the pixel coordinate system; the world coordinates of the feature points of the marker pattern in the world coordinate system can also be obtained; and a conversion equation between the image coordinates and the world coordinates can be constructed based on the two to serve as the mapping model.
The mapping model may be understood as follows: after image coordinates are obtained, they may be input into the mapping model for conversion, thereby obtaining the corresponding world coordinates in the world coordinate system.
In an embodiment of the present application, after mapping the position coordinates to each corresponding frame of parking lot map in turn, the method includes:
determining whether the target vehicle is in a stay state;
if not, acquiring a history matching result between the vehicle features of the target vehicle and the vehicle features of the tracked vehicle;
when the history matching result meets the preset matching condition, acquiring vehicle information, and adding the vehicle information to the corresponding parking lot map of each frame.
In another embodiment, a history matching result between the vehicle features of the target vehicle and the vehicle features of the tracked vehicle may first be acquired, the target vehicle may be added to the corresponding parking lot map of each frame, and whether the target vehicle stays, has a violation, or the like may then be determined from the movement of the target vehicle on the parking lot map.
Optionally, the target vehicle currently in the lane may be screened out, and if no position change occurs within a preset duration, the vehicle may be considered to be in a stay state. At this time, a vehicle feature of the vehicle, for example the license plate number, may be acquired and matched with the vehicle feature of the tracked vehicle. If the matching is consistent, the vehicle information of the vehicle, for example its shape, model, brand, color, and license plate number, may be acquired and added to the parking lot map, for example near the detection frame of the target vehicle, so that when the bird's eye view of the parking lot is viewed, the vehicle information of the target vehicle can be obtained intuitively.
If the matching is inconsistent, the vehicle is considered to be a new vehicle, and the vehicle information of the vehicle can be added into a preset tracker to serve as the vehicle to be tracked.
In one embodiment of the present application, determining whether the target vehicle is in a stay state includes:
Acquiring vehicle characteristics of a target vehicle and a pre-stored tracked vehicle record, wherein the tracked vehicle record comprises vehicle characteristics of at least one tracked vehicle;
Matching the vehicle characteristics of the target vehicle with the vehicle characteristics of the tracked vehicle;
when the matching is consistent, acquiring the latest vehicle tracking parameters of the target vehicle, wherein the latest vehicle tracking parameters include the vehicle speed;
determining whether the vehicle speed is greater than a preset speed threshold;
if not, determining that the target vehicle is in a stay state.
Optionally, the target vehicle in the video data to be detected may be tracked in real time, and a tracked vehicle record may be generated, where the tracked vehicle record includes the vehicle features of at least one tracked vehicle; the vehicle feature may be a license plate number. When the target vehicle currently on the lane is screened out, its vehicle feature may be matched with the vehicle features of the tracked vehicles. When the matching is consistent, the tracking parameters of the target vehicle may be updated to obtain the latest tracking parameters, which may include the vehicle center point, the vehicle detection frame, the vehicle speed, and the like. Based on the vehicle speed, if the vehicle speed is greater than a preset speed threshold, for example greater than 0, the vehicle may be considered to be in a running state; otherwise, the vehicle may be considered to be in a stay state.
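The stay-state decision above reduces to a feature match followed by a speed check. A sketch, using the license plate as the matched feature and the example threshold of 0 from the text (data-structure names are illustrative):

```python
SPEED_THRESHOLD = 0.0  # the text gives "greater than 0" as its example

def is_staying(plate, tracked, latest_speed):
    """Return True when the plate matches a tracked vehicle whose latest
    tracked speed does not exceed the preset threshold.

    `tracked` maps license plate -> tracking parameters (illustrative shape).
    """
    if plate not in tracked:
        return False  # no match: handled as a newly tracked vehicle instead
    # Matched: running if speed exceeds the threshold, staying otherwise
    return not (latest_speed > SPEED_THRESHOLD)
```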
In an embodiment of the present application, the history matching result includes a number of times of matching success, and when the history matching result meets a preset matching condition, acquiring vehicle information, and adding the vehicle information to each corresponding frame of parking lot map, including:
when the number of consecutive successful matches between the vehicle features of the target vehicle and the vehicle features of the tracked vehicle is greater than a first preset count threshold, determining that the target vehicle and the tracked vehicle are the same vehicle, and adding a vehicle information label to the corresponding parking lot map of each frame.
Optionally, if the target vehicle is not in a stay state, that is, the speed of the target vehicle is greater than the preset speed threshold, it may be determined whether the number of consecutive successful matches between the vehicle features of the target vehicle and the vehicle features of the tracked vehicle is greater than the first preset count threshold, for example 3. If so, it may be determined that the target vehicle and the tracked vehicle are the same vehicle, and a vehicle information label is added to the corresponding parking lot map of each frame, for example near the vehicle detection frame of the vehicle in the parking lot map. In this way, the information of vehicles parked in the parking spaces can be seen visually through the parking lot map, so as to determine whether a parking space is occupied by a private vehicle or whether a non-charging vehicle is parked in a charging space, and to help vehicle owners find where their vehicles are parked, and so on.
The vehicle information includes, but is not limited to, license plate number, vehicle brand, vehicle type, vehicle color, etc.
In an embodiment of the present application, after adding the vehicle information to the corresponding parking lot map of each frame, the method further includes:
when the number of successful matches between the vehicle features of the target vehicle and the vehicle features of the tracked vehicle is greater than a second preset count threshold, removing the vehicle information recorded before the most recent second preset count + 1 frames, wherein the second preset count threshold is greater than the first preset count threshold.
Optionally, if the number of successful matches between the vehicle features of the target vehicle and the vehicle features of the tracked vehicle is greater than the second preset count threshold, for example, 60, the vehicle information in the (second preset count + 1)-th most recent recorded frame is removed from the tracker. In this way, only video images within the second preset count need to be saved, rather than video images of the whole process, which reduces the storage resources occupied by the video images.
It will be appreciated that removing the vehicle information at the tracker specifically means removing the vehicle information that falls outside a preset time window and retaining only the vehicle information within that window of frames.
The second preset count threshold is greater than the first preset count threshold: the second preset count is the total number of successful matches within a period of time, while the first preset count is the number of consecutive successful matches. The acquired video frames must first satisfy the matching of the vehicle identity; only after the vehicle identity is successfully matched can video images of the vehicle be continuously acquired based on that identity.
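A minimal sketch of the frame-retention behavior described above, assuming (as an illustration, not as the claimed implementation) that the second preset count — 60 in the example — is used as the size of a sliding window, so that appending a new frame automatically drops the (count + 1)-th most recent one:

```python
from collections import deque


def make_frame_buffer(second_threshold: int = 60):
    """Keep only the most recent `second_threshold` frames of vehicle
    information; when a new frame is appended, the oldest entry (now the
    (threshold + 1)-th most recent) is dropped automatically."""
    return deque(maxlen=second_threshold)
```

For example, after appending 100 per-frame records to a buffer of size 60, only the records for the last 60 frames remain, which is the storage saving described above.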
In an embodiment of the present application, after the position coordinates are sequentially mapped to each corresponding frame of the parking lot map, the method further includes:
determining whether the target vehicle is parked to a target parking space;
if yes, acquiring the number of currently parked vehicles and the number of currently idle parking spaces, and adding both to each corresponding frame of the parking lot map.
Specifically, a counting area can be constructed in each frame of the parking lot map. The counting area can include the number of currently remaining idle parking spaces and the number of currently parked vehicles in the parking lot. If the target vehicle is detected to be parked in the target parking space, the two counts in each frame of the parking lot map can be updated and the boundary line color of the counting area can be changed, so that the parking situation in the current parking lot, such as whether idle parking spaces are available, can be determined visually through the parking lot map, and the parking spaces of the parking lot can be accurately managed and controlled.
Whether the target vehicle is parked in the target parking space can be determined by computing the overlap between the parking frame of the target parking space and the vehicle detection frame of the target vehicle. For example, the intersection over union between the parking frame and the vehicle detection frame can be calculated, or it can be determined whether the line connecting the center points of the vehicle detection frames in the previous frame and the current frame continuously intersects the parking frame; if so, the target vehicle is determined to be parked in the target parking space.
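The intersection-over-union test mentioned above can be sketched as follows; the decision threshold of 0.5 is a hypothetical value, not one specified by the method:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def is_parked(parking_box, detection_box, threshold=0.5):
    """Hypothetical decision rule: the vehicle is treated as parked in the
    space when the IoU of the parking frame and the vehicle detection frame
    reaches the threshold."""
    return iou(parking_box, detection_box) >= threshold
```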
It can be understood that a parking lot map is constructed based on the corner features and the mapping model, and the number of idle parking spaces in the parking lot can be counted and updated while the map is constructed. After the target vehicle finishes parking, whether it is parked in the target parking space can be judged according to its position coordinates, and the number of currently parked vehicles and the number of currently idle parking spaces are added to each corresponding frame of the parking lot map.
In an embodiment of the present application, analyzing the behavior of the target vehicle based on the position change state information to obtain a behavior analysis result includes:
Determining a running track and a current pose of the target vehicle based on the position change state information;
Determining, based on the running track and the current pose of the target vehicle, whether the target vehicle has a parking violation;
If yes, outputting alarm information.
Specifically, the position change state information may include the running track of the target vehicle and its current pose. Based on these, it may be determined whether the target vehicle has a parking violation, for example, whether the target vehicle is detained, whether its parking position occupies an adjacent parking space, or whether a non-charging vehicle is parked in a charging parking space. If so, alarm information is output, and on-site management personnel may be notified to execute corresponding processing, so as to avoid problems such as congestion or the obstruction of other vehicles.
The alarm information may be an audible and visual alarm, or on-site management personnel may be notified by means of short messages, emergency calls, and the like, so that illegally parked vehicles are handled in a timely manner.
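As an illustration of one of the violation checks mentioned above (detention), a stationarity test over the running track might look like the following sketch; the frame limit and pixel tolerance are hypothetical parameters, not values specified by the method:

```python
def is_detained(trajectory, max_stationary_frames=900, eps=1.0):
    """Flag detention when the vehicle's center point has moved less than
    `eps` in both axes for `max_stationary_frames` consecutive frame pairs.
    `trajectory` is a list of (x, y) center points, one per frame."""
    stationary = 0
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        if abs(x1 - x0) < eps and abs(y1 - y0) < eps:
            stationary += 1
            if stationary >= max_stationary_frames:
                return True
        else:
            stationary = 0  # any real movement resets the count
    return False
```

When this check returns true, the alarm information described above would be output.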
According to the embodiment of the application, the shooting device is calibrated, a mapping model between the pixel coordinate system and the world coordinate system is constructed based on the calibrated shooting device, and the coordinates of the vehicle are mapped into the parking lot map based on the mapping model to obtain the bird's eye view. The position state change of the vehicle and the idle state change of the parking spaces can be obtained intuitively from the bird's eye view, so that the behavior of the vehicle can be analyzed and the numbers of parking spaces and vehicles can be accurately counted without linkage of a plurality of cameras, which reduces the product cost and improves the accuracy of parking space management and control.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not limit the implementation process of the embodiments of the present invention.
In one embodiment, a vehicle behavior analysis apparatus is provided, which corresponds one-to-one to the vehicle behavior analysis method in the above embodiment. As shown in fig. 3, the vehicle behavior analysis apparatus includes a marker image data acquisition unit 10, a mapping model construction unit 20, a map construction unit 30, a coordinate information determination unit 40, a bird's-eye view generation unit 50, and a behavior analysis unit 60. The functional modules are described in detail as follows:
a marker image data acquisition unit 10 for acquiring a target scene image and acquiring marker image data located in the target scene image;
A mapping model construction unit 20 for constructing a mapping model from the pixel coordinate system to the world coordinate system based on the marker image data;
the map construction unit 30 is configured to collect corner features of a plurality of parking spaces, and construct a parking lot map based on the corner features and the mapping model;
A coordinate information determining unit 40 for acquiring video data to be detected, and determining a position coordinate of the target vehicle in each video frame when at least one target vehicle is detected to exist in the video data to be detected;
A bird's-eye view generating unit 50, configured to map the position coordinates to each corresponding frame of parking lot map in sequence through the mapping model, so as to generate a bird's-eye view;
The behavior analysis unit 60 is configured to determine position change state information of the target vehicle in the parking lot map based on the bird's eye view, and analyze behavior of the target vehicle based on the position change state information to obtain a behavior analysis result.
In an embodiment of the present application, the mapping model construction unit 20 is further configured to:
Calibrating the target shooting device based on the marker image data;
Obtaining output parameters of the calibrated target shooting device, wherein the output parameters at least comprise an external reference matrix, an internal reference matrix and distortion parameters;
constructing a mapping model from the pixel coordinate system to the world coordinate system based on the external reference matrix, the internal reference matrix, and the distortion parameters.
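As a simplified sketch of such a mapping model — assuming, for illustration only, that the ground plane is z = 0 and that lens distortion has already been corrected with the distortion parameters — the composition of the internal and external reference matrices reduces to a 3×3 planar homography H from pixel coordinates to world ground-plane coordinates:

```python
def pixel_to_world(u, v, H):
    """Map an (undistorted) pixel (u, v) to ground-plane world coordinates
    via a 3x3 homography H, expressed as nested lists. H would be derived
    from the internal and external reference matrices for the plane z = 0."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # dehomogenize
```

A full implementation would also apply the distortion parameters to undistort (u, v) first; that step is omitted here.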
In an embodiment of the present application, the apparatus further includes a vehicle information adding unit configured to:
determining whether the target vehicle is in a stay state;
if not, acquiring a history matching result between the vehicle characteristics of the target vehicle and the vehicle characteristics of the tracked vehicle;
When the history matching result meets the preset matching condition, acquiring vehicle information, and adding the vehicle information to each corresponding frame of parking lot map.
In an embodiment of the present application, the vehicle information adding unit is further configured to:
Acquiring vehicle characteristics of a target vehicle and a pre-stored tracked vehicle record, wherein the tracked vehicle record comprises vehicle characteristics of at least one tracked vehicle;
Matching the vehicle characteristics of the target vehicle with the vehicle characteristics of the tracked vehicle;
When the matching is consistent, acquiring the latest vehicle tracking parameters of the target vehicle, wherein the latest vehicle tracking parameters comprise the vehicle speed;
Determining whether the vehicle speed is greater than a preset speed threshold;
if not, determining that the target vehicle is in a stay state.
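The stay-state decision in the last two steps reduces to a simple threshold comparison on the tracked vehicle speed; the sketch below is illustrative only, and the default speed value is hypothetical:

```python
def is_in_stay_state(vehicle_speed: float, speed_threshold: float = 0.5) -> bool:
    """The target vehicle is in a stay state when its latest tracked speed
    does not exceed the preset speed threshold (the 0.5 default is a
    hypothetical value, not one given by the method)."""
    return vehicle_speed <= speed_threshold
```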
In an embodiment of the present application, the history matching result includes a number of matching successes, and the vehicle information adding unit is further configured to:
when the number of consecutive successful matches between the vehicle features of the target vehicle and the vehicle features of the tracked vehicle is greater than the first preset count threshold, determining that the target vehicle and the tracked vehicle are the same vehicle, and adding the vehicle information tag to each corresponding frame of the parking lot map.
In an embodiment of the present application, the history matching result includes a number of matching successes, and the vehicle information adding unit is further configured to:
when the number of successful matches between the vehicle features of the target vehicle and the vehicle features of the tracked vehicle is greater than a second preset count threshold, removing the vehicle information in the (second preset count + 1)-th most recent recorded frame, wherein the second preset count threshold is greater than the first preset count threshold.
In an embodiment of the application, the device further comprises a counting unit for:
determining whether the target vehicle is parked to a target parking space;
if yes, acquiring the number of currently parked vehicles and the number of currently idle parking spaces, and adding both to each corresponding frame of the parking lot map.
In an embodiment of the present application, the behavior analysis unit 60 is further configured to:
Determining a running track and a current pose of the target vehicle based on the position change state information;
Determining, based on the running track and the current pose of the target vehicle, whether the target vehicle has a parking violation;
If yes, outputting alarm information.
According to the embodiment of the application, the shooting device is calibrated, a mapping model between the pixel coordinate system and the world coordinate system is constructed based on the calibrated shooting device, and the coordinates of the vehicle are mapped into the parking lot map based on the mapping model to obtain the bird's eye view. The position state change of the vehicle and the idle state change of the parking spaces can be obtained intuitively from the bird's eye view, so that the behavior of the vehicle can be analyzed and the numbers of parking spaces and vehicles can be accurately counted without linkage of a plurality of cameras, which reduces the product cost and improves the accuracy of parking space management and control.
For the specific definition of the vehicle behavior analysis apparatus, reference may be made to the definition of the vehicle behavior analysis method above, which will not be described in detail here. The respective modules in the above-described vehicle behavior analysis apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor may call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal device, and the internal structure thereof may be as shown in fig. 3. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a readable storage medium. The readable storage medium stores computer readable instructions. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer readable instructions when executed by a processor implement a vehicle behavior analysis method. The readable storage medium provided by the present embodiment includes a nonvolatile readable storage medium and a volatile readable storage medium.
In an embodiment of the present application, a computer device is provided that includes a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, the processor executing the computer readable instructions to perform the steps of the vehicle behavior analysis method as described above.
In an embodiment of the application, a readable storage medium is provided, the readable storage medium storing computer readable instructions that, when executed by a processor, implement the steps of a vehicle behavior analysis method as described above.
Those skilled in the art will appreciate that all or part of the methods of the above embodiments may be implemented by computer readable instructions instructing the associated hardware; the instructions may be stored on a non-volatile or volatile readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated. In practical application, the above functions may be distributed among different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.