Detailed Description
The embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of the present application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "plurality" generally means at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" covers three cases: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be understood that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of additional identical elements in a process, method, article, or apparatus that comprises the element.
Referring to fig. 1, fig. 1 is a flowchart of a first embodiment of a vehicle parking-violation evidence obtaining method according to the present application. In this embodiment, the vehicle parking-violation evidence obtaining method includes the following steps.
S11: Detect the target vehicle to obtain first detection information and second detection information, where both the first detection information and the second detection information include ID information generated based on the target vehicle.
In this embodiment, the target vehicle is a vehicle suspected of illegal parking.
Specifically, an illegally parked vehicle is a vehicle parked in an illegal-parking detection zone. The illegal-parking detection zones are set by users or by the traffic police; if a vehicle remains parked in such a zone for a long time, parking-violation evidence collection is carried out.
In this embodiment, the first detection information and the second detection information are acquired by a monitoring camera provided on the traffic road network. The monitoring camera may be a dome camera or another type of camera, which is not limited in this application.
Specifically, the first detection information is acquired based on long-range detection and includes the ID information generated based on the target vehicle, a detection frame of the target vehicle in the picture, a first control parameter, and a detection time. The second detection information is acquired based on close-range detection and includes the ID information generated based on the target vehicle, a detection frame of the target license plate in the picture, a second control parameter, and a detection time.
The acquisition order of the first detection information and the second detection information can be set based on user requirements. In one specific implementation scenario, the first detection information is acquired based on long-range detection first, followed by the second detection information based on close-range detection. In another specific implementation scenario, the second detection information is acquired based on close-range detection first, followed by the first detection information based on long-range detection. The application is not limited in this regard.
In this embodiment, the first detection information and the second detection information are only structured data, not captured images; the camera may subsequently take snapshots based on the first control parameter and the second control parameter contained in the structured data.
Structured data, also called row data, is data logically expressed and realized by a two-dimensional table structure; it strictly follows data format and length specifications and is mainly stored and managed by a relational database.
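As an illustration only, the structured detection record described above might be modeled as in the following minimal sketch; every field name here is an assumption for the sketch, not the application's actual schema.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DetectionInfo:
    """One structured detection record (no image data), per S11."""
    vehicle_id: str                             # ID information generated based on the target vehicle
    bbox: Tuple[int, int, int, int]             # detection frame (x, y, w, h) in the picture
    control_param: Tuple[float, float, float]   # pan-tilt control parameter (P, T, Z)
    detect_time: float                          # detection timestamp, in seconds
```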
It can be appreciated that, in the present embodiment, by performing long-range detection and close-range detection on the target vehicle and associating the target vehicle with the ID information generated by the two detections, the detection stability for the same target vehicle can be enhanced, and detection errors caused by long-range detection alone can be avoided.
S12: Match the ID information generated based on the target vehicle and included in the first detection information and the second detection information against the ID information stored in a database.
In this embodiment, the database stores a large number of captured images of offending vehicles and the corresponding ID information. By matching the ID information generated based on the target vehicle and included in the first detection information and the second detection information against the ID information stored in the database, it is possible to determine whether the present detection of the target vehicle is the first detection.
S13: In response to the ID information generated based on the target vehicle not existing in the database, save the first detection information and the second detection information as first initial detection information and second initial detection information, where the first initial detection information includes a first control parameter, and the second initial detection information includes license plate information of the target vehicle and a second control parameter.
In the present embodiment, if no matching ID information exists in the database, this indicates that the present detection of the target vehicle is the first detection.
It can be understood that, by saving the first detection information and the second detection information obtained by the first detection as the first initial detection information and the second initial detection information, a later detection of the target vehicle can be recognized through matching as a non-first detection. This prevents the algorithm from treating the vehicle as a newly appeared target, avoids wasting captured images, and improves capture efficiency.
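A minimal sketch of the S12-S13 logic follows, assuming a plain dictionary stands in for the relational database and that detection records are simple dicts keyed as shown; these names are illustrative, not prescribed by the application.

```python
def is_first_detection(db: dict, first_info: dict, second_info: dict) -> bool:
    """Match the generated ID against the database; on a first detection,
    save both records as the initial detection information (S13)."""
    vid = first_info["vehicle_id"]        # ID generated based on the target vehicle
    if vid in db:
        return False                      # ID found: a non-first detection
    db[vid] = {"first_initial": first_info,    # contains the first control parameter
               "second_initial": second_info}  # contains plate info and the second control parameter
    return True
```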
S14: Capture the target vehicle based on the first control parameter and/or the second control parameter to obtain a plurality of captured images, where an image captured based on the first control parameter is a long-range image including vehicle-body information of the target vehicle, and an image captured based on the second control parameter is a close-range image including the license plate information.
In this embodiment, the first control parameter and the second control parameter are the control parameters corresponding to the long-range detection preset position and the close-range detection preset position, respectively, and the camera may invoke the control parameters of the corresponding preset position to take a snapshot according to the type of image to be captured.
In this embodiment, whether the first control parameter or the second control parameter is invoked for a snapshot is determined based on the preset image type set by the user for each image to be captured. The preset image types include a long-range image, a close-range image, a medium-range image, and a close-up image.
In one specific implementation scenario, if the user sets the image type of the first image to be captured to the long-range image, the first control parameter is invoked after detection is completed to take a snapshot under long-range detection, the corresponding captured image is obtained, and its capture sequence number is marked as the initial number. In another specific implementation scenario, if the user sets the image type of the first image to be captured to a close-range image or a close-up image, the second control parameter is invoked after detection is completed to take a snapshot under close-range detection, the corresponding captured image is obtained, and its capture sequence number is marked as the initial number. In yet another specific implementation scenario, if the user sets the image type of the first image to be captured to a medium-range image, neither long-range detection nor close-range detection takes a picture.
In this embodiment, after the first image has been captured, subsequent snapshots are taken based on the preset time interval and the image type of the next image to be captured, until the required plurality of captured images is obtained.
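The dispatch from preset image type to control parameter might look like the following sketch; the type labels are assumed strings, and the medium-range case is left to the embodiment of fig. 2 (see S25/S52), where a third parameter is derived.

```python
def select_control_param(image_type: str, first_param, second_param):
    """Pick the control parameter for one snapshot from the preset image type (S14)."""
    if image_type == "long_range":
        return first_param                    # long-range detection preset position
    if image_type in ("close_range", "close_up"):
        return second_param                   # close-range detection preset position
    return None                               # medium-range: no snapshot in this embodiment
```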
S15: In response to the last captured image being a close-range image of the target vehicle, compare the license plate information extracted from the last captured image with the license plate information in the second initial detection information, and generate a parking-violation evidence obtaining result for the target vehicle based on a first comparison result.
In this embodiment, the first comparison result indicates that the license plate information extracted from the last captured image is the same as the license plate information in the second initial detection information, and that the target vehicle has not moved during the entire evidence obtaining period between the first detection and the acquisition of the last captured image.
It can be understood that, by extracting the license plate information in the last close-range image and comparing it with the license plate information in the second initial detection information, more effective information can be obtained for matching, thereby avoiding algorithm detection errors.
Further, the evidence obtaining period for a parking violation is generally tens of minutes. In this embodiment, a plurality of captured images is obtained within this period; when the license plate information extracted from the last captured image matches the license plate information in the second initial detection information, the first comparison result is generated, the obtained plurality of captured images is composited into a parking-violation evidence chain image based on the first comparison result, and the parking-violation evidence obtaining result is generated based on the evidence chain image.
In other embodiments, in response to the last captured image being a long-range image or a medium-range image of the target vehicle, the license plate of the target vehicle further needs to be detected by invoking the second control parameter after the capture is completed, so as to obtain third detection information.
It can be appreciated that, if the last captured image is a long-range image or a medium-range image, effective license plate information may not be extractable from it and thus cannot be compared with the license plate information in the second initial detection information obtained by the first detection. The license plate of the target vehicle therefore needs to be detected separately by invoking the second control parameter; the resulting third detection information is stored only as structured data, and no additional photograph is taken.
Compared with the prior art, the present application matches the ID information of the target vehicle included in the detected first detection information and second detection information against the ID information stored in the database, and, after determining that the first detection information and the second detection information were obtained by a first detection, saves them as the first initial detection information and the second initial detection information. In this way, whether a detection of the target vehicle is the first detection can be determined, which prevents the algorithm from treating the vehicle as a newly appeared target, avoids wasting captured images, and improves capture efficiency. Further, by extracting the license plate information in the last close-range image and comparing it with the license plate information in the second initial detection information, more effective information can be obtained for matching, thereby avoiding algorithm detection errors. The obtained plurality of captured images can then be used to efficiently generate the parking-violation evidence obtaining result, which improves evidence obtaining efficiency and solves the problems of a low capture rate and a low evidence obtaining rate when the same target vehicle is detected over a long period.
Referring to fig. 2, fig. 2 is a flowchart of a second embodiment of the vehicle parking-violation evidence obtaining method according to the present application. In this embodiment, the vehicle parking-violation evidence obtaining method includes the following steps.
S21: Acquire the preset capture number of images to be captured, the preset image type of each image to be captured, and the preset time interval between two adjacent images to be captured, where the preset image types include a long-range image, a close-range image, a medium-range image, and a close-up image.
In this embodiment, the preset capture number, the preset image type of each image to be captured, and the preset time interval between two adjacent images to be captured are all set by the user as needed, which is not limited in the present application.
For example, the preset capture number may be 4, with the first image to be captured being a close-range image, the second a long-range image, the third a medium-range image, and the fourth a close-up image; the time interval between the first and second images may be 5 min, between the second and third images 20 min, and between the third and fourth images 35 min.
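For illustration, the capture plan of this example could be encoded as the following configuration; the dictionary keys are assumptions made for the sketch.

```python
# Capture plan from the example above: 4 images, their preset types,
# and the preset time intervals between adjacent images (in minutes).
capture_plan = {
    "count": 4,
    "types": ["close_range", "long_range", "medium_range", "close_up"],
    "intervals_min": [5, 20, 35],
}
```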
In this embodiment, the long-range image and the medium-range image are video-picture images obtained by the dome camera at a small magnification and at the next-smallest magnification, respectively. Both contain a vehicle-body detection frame, with the vehicle-body detection frame in the medium-range image covering a larger area than that in the long-range image, so the relative position of the target vehicle and the surrounding scenery can be extracted from the long-range image and the medium-range image. The close-range image is a video-picture image obtained by the dome camera at a high magnification; it contains a license plate detection frame, and the license plate information, some detail features of the vehicle body, and part of the background environment of the target vehicle can be extracted from it. The close-up image is obtained by software processing of the close-range image and presents the target vehicle at a proportion set by the user.
S22: Detect the target vehicle to obtain first detection information and second detection information, where both the first detection information and the second detection information include ID information generated based on the target vehicle.
The specific process is described in S11, and will not be described here again.
S23: Match the ID information generated based on the target vehicle and included in the first detection information and the second detection information against the ID information stored in the database.
The specific process is described in S12, and will not be described here again.
S24: In response to the ID information generated based on the target vehicle not existing in the database, save the first detection information and the second detection information as first initial detection information and second initial detection information, where the first initial detection information includes a first pan-tilt control parameter, and the second initial detection information includes license plate information of the target vehicle and a second pan-tilt control parameter.
In the present embodiment, if no matching ID information exists in the database, this indicates that the present detection of the target vehicle is the first detection.
In this embodiment, the first initial detection information further includes a first initial detection frame for marking the target vehicle, and the second initial detection information further includes a second initial detection frame for marking the license plate of the target vehicle. The first control parameter is a first pan-tilt control parameter calibrated by the first initial detection frame, and the second control parameter is a second pan-tilt control parameter calibrated by the second initial detection frame.
Specifically, a pan-tilt control parameter includes a horizontal parameter, a vertical parameter, and a zoom parameter. The horizontal parameter is the current horizontal tilt angle of the dome camera, the vertical parameter is its current vertical tilt angle, and the zoom parameter is its current zoom value.
In this embodiment, the first pan-tilt control parameter is the dome camera parameter at the optimal capture position under long-range detection, and the second pan-tilt control parameter is the dome camera parameter at the optimal capture position under close-range detection. After the first and second pan-tilt control parameters have been calibrated, the corresponding pan-tilt control parameter can be invoked directly in each subsequent snapshot based on the preset image type of the image to be captured, so the dome camera can be quickly adjusted to the preset position of the corresponding detection scene, improving capture efficiency.
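A sketch of calibrating the two preset positions once and recalling them per snapshot follows, assuming a hypothetical `camera.move_to` call and illustrative (P, T, Z) values; the real dome-camera interface is not specified by the application.

```python
# Calibrated once per scene: (P, T, Z) = (pan angle, tilt angle, zoom value).
presets = {
    "long_range":  (30.0, -10.0, 2.0),    # optimal capture position, long-range detection
    "close_range": (31.5, -12.0, 18.0),   # optimal capture position, close-range detection
}

def recall_preset(camera, image_type: str) -> None:
    """Drive the dome camera to the calibrated preset for the given image type."""
    key = "close_range" if image_type in ("close_range", "close_up") else "long_range"
    p, t, z = presets[key]
    camera.move_to(pan=p, tilt=t, zoom=z)  # hypothetical camera API call
```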
Further, the target vehicle is captured based on the first pan-tilt control parameter and/or the second pan-tilt control parameter to obtain a plurality of captured images.
S25: Acquire the preset image type of the first image to be captured, invoke the corresponding pan-tilt control parameter based on this preset image type to capture the target vehicle, obtain the first captured image, mark its capture sequence number as the initial number, and record the capture time.
In one specific implementation scenario, in response to the first image to be captured being a long-range image, the first pan-tilt control parameter is invoked to capture the target vehicle, the first captured image is obtained, its capture sequence number is marked as the initial number, and the capture time is recorded.
In another specific implementation scenario, in response to the first image to be captured being a close-range image or a close-up image, the second pan-tilt control parameter is invoked to capture the target vehicle, the first captured image is obtained, its capture sequence number is marked as the initial number, and the capture time is recorded.
When the acquired first captured image is a close-range image, the license plate information and the second detection frame information extracted from the close-range image need to be recorded, where the second detection frame is the detection frame added to the license plate.
In yet another specific implementation scenario, in response to the first image to be captured being a medium-range image, a third pan-tilt control parameter is obtained based on the first pan-tilt control parameter and the second pan-tilt control parameter, the third pan-tilt control parameter is invoked to capture the target vehicle, the first captured image is obtained, its capture sequence number is marked as the initial number, and the capture time is recorded.
Obtaining the third pan-tilt control parameter based on the first and second pan-tilt control parameters specifically includes: taking the second horizontal parameter and the second vertical parameter of the second pan-tilt control parameter as the third horizontal parameter and the third vertical parameter of the third pan-tilt control parameter; acquiring the first zoom parameter of the first pan-tilt control parameter and the second zoom parameter of the second pan-tilt control parameter; calculating the difference between the second zoom parameter and the first zoom parameter; and multiplying the difference by a preset multiplying factor and adding the result to the first zoom parameter to obtain the third zoom parameter of the third pan-tilt control parameter.
For example, if the first pan-tilt control parameter is (Pf, Tf, Zf) and the second pan-tilt control parameter is (Pn, Tn, Zn), the third pan-tilt control parameter obtained by the above method is (Pn, Tn, Zf + (Zn − Zf) × Φ), where Pf is the first horizontal parameter, Tf is the first vertical parameter, Zf is the first zoom parameter, Pn is the second horizontal parameter, Tn is the second vertical parameter, Zn is the second zoom parameter, and Φ is the multiplying factor. The multiplying factor can be set by the user or selected automatically by the program.
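The computation of the third pan-tilt control parameter can be written directly from the formula above; a minimal sketch, with illustrative values in the usage comment.

```python
def third_pan_tilt(first, second, phi):
    """Derive the medium-range (third) pan-tilt control parameter per S25.

    first  = (Pf, Tf, Zf): calibrated long-range preset
    second = (Pn, Tn, Zn): calibrated close-range preset
    phi    = preset multiplying factor (user-set or program-selected)
    """
    pf, tf, zf = first
    pn, tn, zn = second
    # P and T follow the close-range preset; the zoom interpolates between
    # the two calibrated zoom values: Z3 = Zf + (Zn - Zf) * phi.
    return (pn, tn, zf + (zn - zf) * phi)

# Example: third_pan_tilt((30.0, -10.0, 2.0), (31.5, -12.0, 18.0), 0.5)
# returns (31.5, -12.0, 10.0).
```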
In this embodiment, the initial number is set to 1.
S26: In response to the difference between the current time and the capture time recorded for the first captured image being equal to or greater than the preset time interval between the first image to be captured and the second image to be captured, acquire the preset image type of the second image to be captured.
S27: Invoke the corresponding pan-tilt control parameter based on the preset image type of the second image to be captured, capture the target vehicle or its license plate to obtain the second captured image, increment the capture sequence number by 1, and record the capture time.
In this embodiment, for the specific invocation and capture modes, refer to the description in S25, which is not repeated here.
S28: Repeat the above steps until a number of captured images equal to the preset capture number is obtained.
For example, if the preset capture number is 4, capturing of the target vehicle stops after 4 images have been captured based on the preset capture number, the preset image type of each image to be captured, and the preset time interval between two adjacent images to be captured.
S29: In response to the last captured image being a close-range image of the target vehicle, compare the license plate information extracted from the last captured image with the license plate information in the second initial detection information, and generate a parking-violation evidence obtaining result for the target vehicle based on a first comparison result.
Specifically, referring to fig. 3, fig. 3 is a flowchart of an embodiment of S29 in fig. 2. In this embodiment, the step of comparing, in response to the last captured image being a close-range image of the target vehicle, the license plate information extracted from the last captured image with the license plate information in the second initial detection information and generating the parking-violation evidence obtaining result of the target vehicle based on the first comparison result specifically includes:
S291: In response to the last captured image being a close-range image of the target vehicle and the license plate information extracted from it being identical to the license plate information included in the second initial detection information, acquire the center point coordinates of the second final detection frame in the last captured image.
It can be appreciated that being able to extract license plate information from the last close-range image indicates that the license plate is not blocked, and the extracted license plate information being the same as that of the first detection indicates that the snapshots are still of the same target vehicle.
S292: Perform a calculation based on the center point coordinates of the second final detection frame and the center point coordinates of the second initial detection frame.
In this embodiment, it is calculated whether the difference between the center point coordinates of the second final detection frame and those of the second initial detection frame is smaller than a preset deviation value.
In a specific implementation scenario, assuming that the center point coordinates of the second initial detection frame are (xF, yF) and those of the second final detection frame are (xL, yL), it is determined whether (xF, yF) and (xL, yL) satisfy the following formulas: |xF − xL| ≤ δ and |yF − yL| ≤ δ,
where xF is the abscissa of the center point of the second initial detection frame, yF is its ordinate, xL is the abscissa of the center point of the second final detection frame, yL is its ordinate, and δ is the pixel deviation value. The pixel deviation value can be set by the user or selected automatically by the program.
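The deviation check of S292 reduces to a few lines; a sketch assuming the per-axis form of the formulas above.

```python
def vehicle_stationary(initial_center, final_center, delta: float) -> bool:
    """Per-axis pixel-deviation test between the license-plate detection-frame
    centers: |xF - xL| <= delta and |yF - yL| <= delta (S292)."""
    x_f, y_f = initial_center   # center of the second initial detection frame
    x_l, y_l = final_center     # center of the second final detection frame
    return abs(x_f - x_l) <= delta and abs(y_f - y_l) <= delta
```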
S293: In response to the calculated result being smaller than or equal to the preset deviation value, generate a first comparison result indicating that the target vehicle has not moved during the capture process.
It can be appreciated that a calculated result smaller than or equal to the preset deviation value indicates that the detection frame added to the license plate has not moved, that is, the target vehicle has not moved during the entire capture process.
S294: Composite the plurality of captured images based on the first comparison result to generate the parking-violation evidence obtaining result based on the synthesized image.
In this embodiment, after the first comparison result is obtained, whose content is that the license plate information is consistent and the vehicle has not moved during the entire evidence obtaining process, the obtained plurality of captured images is composited to generate a parking-violation evidence chain image, and the parking-violation evidence obtaining result is generated based on the evidence chain image.
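The compositing step is not specified in detail; as one possible sketch, Pillow can paste the captured images onto a single evidence-chain sheet in a grid layout (the grid and tile size are assumptions).

```python
from PIL import Image  # Pillow

def compose_evidence_chain(image_paths, cols=2, rows=2, tile=(640, 360)):
    """Paste up to cols*rows captured images into one evidence-chain image."""
    sheet = Image.new("RGB", (cols * tile[0], rows * tile[1]), "white")
    for i, path in enumerate(image_paths[: cols * rows]):
        img = Image.open(path).resize(tile)
        sheet.paste(img, ((i % cols) * tile[0], (i // cols) * tile[1]))
    return sheet
```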
In another embodiment, in response to the last captured image being a close-range image, the license plate information extracted from the last captured image is compared with the license plate information included in the second initial detection information, and the plurality of captured images is discarded based on a second comparison result.
In one specific implementation scenario, the second comparison result is generated in response to the license plate information extracted from the last captured image being different from the license plate information included in the second initial detection information.
In another specific scenario, the second comparison result is generated when the license plate information extracted from the last captured image is the same as that included in the second initial detection information but the calculated result is larger than the preset deviation value.
It can be understood that different license plate information indicates that the evidence was not collected for the same target vehicle, so the evidence is invalid; a calculated result larger than the preset deviation value indicates that the target vehicle moved within the evidence obtaining period, so the evidence is likewise invalid.
In still another embodiment, in response to the last captured image being a long-range image or a medium-range image of the target vehicle, the second pan-tilt control parameter is invoked to detect the license plate of the target vehicle, and third detection information is obtained, which includes license plate information and a second final detection frame. In response to the license plate information contained in the third detection information being the same as that contained in the second initial detection information, the center point coordinates of the second final detection frame are acquired, and a calculation is performed based on the center point coordinates of the second final detection frame and those of the second initial detection frame. In response to the calculated result being smaller than or equal to the preset deviation value, a first comparison result indicating that the target vehicle has not moved during the capture process is generated, and the plurality of captured images is composited based on the first comparison result to generate the parking-violation evidence obtaining result based on the synthesized image.
In response to the license plate information contained in the third detection information being different from that contained in the second initial detection information, or being the same but with a calculated result larger than the preset deviation value, the second comparison result is generated and the plurality of captured images acquired during the evidence obtaining process is discarded.
It can be understood that, if the last captured image is a long-range image or a medium-range image of the target vehicle, license plate information cannot be effectively obtained from it, so the license plate of the target vehicle needs to be detected again to improve detection accuracy.
Referring to fig. 4, fig. 4 is a workflow diagram of an application scenario of capturing a non-first image to be captured according to the present application. In this embodiment, after the current time is acquired, it is determined whether a captured image of the target vehicle exists in the database. If not, this indicates that the target vehicle has not yet been captured and still needs to be detected, and the current capture flow ends. If so, this indicates that capturing of the target vehicle has started, i.e., the related preset information (including the preset image type of the next image to be captured and the preset time interval between two adjacent images to be captured) has already been acquired, and the capture time of the last captured image with the same ID information is acquired from the database. The difference between the current time and the capture time of the last captured image is then obtained, and it is determined whether this difference is equal to or greater than the preset time interval. If the difference is smaller than the preset time interval, the capture time has not yet been reached, and the current capture flow ends. If the difference is equal to or greater than the preset time interval, the capture time has been reached; the corresponding pan-tilt control parameter is invoked based on the preset image type of the next image to be captured, the body or license plate of the target vehicle is captured, the capture sequence number is incremented by 1, the capture time is recorded, and the capture flow ends.
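The fig. 4 flow can be summarized in code as follows; a sketch under the assumptions that per-vehicle records carry a `snapshots` list, that `capture_plan` matches the configuration sketch given earlier, and that `camera.snapshot` is a hypothetical call.

```python
import time

def try_next_snapshot(record: dict, capture_plan: dict, camera, now=None):
    """Fig. 4 sketch: capture the next image only once the preset interval
    since the previous snapshot has elapsed."""
    snaps = record.get("snapshots", [])
    if not snaps:
        return None                 # no prior captured image: detect the vehicle first
    now = time.time() if now is None else now
    seq = snaps[-1]["seq"]          # capture sequence numbers start at 1
    if seq >= capture_plan["count"]:
        return None                 # all planned images already captured
    if now - snaps[-1]["time"] < capture_plan["intervals_min"][seq - 1] * 60:
        return None                 # capture time not yet reached; end this round
    image = camera.snapshot(capture_plan["types"][seq])  # hypothetical camera API
    snaps.append({"seq": seq + 1, "time": now, "image": image})
    record["snapshots"] = snaps
    return image
```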
Referring to fig. 5, fig. 5 is a flowchart of a third embodiment of the vehicle parking-violation evidence obtaining method according to the present application. In this embodiment, the vehicle parking-violation evidence obtaining method includes the following steps.
S51: Detect the target vehicle to obtain first detection information and second detection information, where both the first detection information and the second detection information include ID information generated based on the target vehicle.
The specific process is described in S11, and will not be described here again.
S52: Match the ID information generated based on the target vehicle and included in the first detection information and the second detection information against the ID information stored in the database.
The specific process is described in S12, and will not be described here again.
S53: In response to the ID information generated based on the target vehicle existing in the database, acquire the preset image type of the next image to be captured.
In this embodiment, the database containing the ID information generated based on the target vehicle indicates that the current detection is not the first detection, and that the preset capture number of the images to be captured, the preset image type of each image to be captured, and the preset time interval between two adjacent images to be captured have already been acquired.
S54: In response to the preset image type of the next image to be captured being a close-range image or a close-up image and the capture sequence number of the next image to be captured being equal to the preset capture number, invoke the second pan-tilt control parameter to capture the target vehicle and obtain the last captured image, with the capture sequence number incremented by 1.
In this embodiment, the capture sequence number of the next image to be captured being equal to the preset capture number indicates that this snapshot is the last one, a plurality of captured images having already been acquired. The specific capture process is described in S25 to S27 and is not repeated here.
In other embodiments, in response to the preset image type of the next image to be captured being a long-range image or a medium-range image and the capture sequence number of the next image to be captured being equal to the preset capture number, the first pan-tilt control parameter is invoked to capture the target vehicle and obtain the last captured image, with the capture sequence number incremented by 1. After the last image has been captured, the second pan-tilt control parameter is invoked to detect the target vehicle, and third detection information is obtained, which includes license plate information and a second final detection frame.
S55: In response to the last captured image being a close-range image or a close-up image of the target vehicle, compare the license plate information extracted from the last captured image with the license plate information in the second initial detection information, and generate a parking-violation evidence obtaining result for the target vehicle based on a first comparison result.
The specific process is described in S29 and S291 to S294 and is not repeated here.
Referring to fig. 6, fig. 6 is a flowchart of an application scenario of the vehicle parking-violation evidence obtaining method according to the present application, and fig. 7 is a schematic diagram of the specific non-first-detection flow in fig. 6. In this embodiment, the target vehicle is detected, the first detection information and the second detection information are acquired, and the ID information generated based on the target vehicle and included in the first detection information and the second detection information is matched against the ID information stored in the database. In response to the ID information generated based on the target vehicle not existing in the database, the first detection information is saved as the first initial detection information, and it is determined whether the preset image type of the first image to be captured is a long-range image. In response to the first image to be captured being a long-range image, the first control parameter is invoked to capture the target vehicle, the first captured image is obtained, its capture sequence number is marked as the initial number, and the capture time is recorded. Otherwise, the second detection information is saved as the second initial detection information, and it is determined whether the preset image type of the first image to be captured is a close-range image or a close-up image. In response to the first image to be captured being a close-range image or a close-up image, the second control parameter is invoked to capture the target vehicle, the first captured image is obtained, its capture sequence number is marked as the initial number, and the capture time is recorded.
The non-first-detection flow is entered in response to the ID information generated based on the target vehicle existing in the database. It is determined whether the preset image type of the image to be captured is a close-range image or a close-up image. In response to the preset image type being a close-range image or a close-up image, the captured image is acquired, and the license plate information and the second detection frame information in it are extracted. It is then determined whether the capture sequence number of the current captured image equals the preset capture number. If not, the non-first-detection flow ends. If so, the license plate information and second detection frame information extracted from the current captured image are compared with the license plate information and second initial detection frame information from the second initial detection information, and it is determined whether a first comparison result is generated. If no first comparison result is generated, the plurality of cached captured images is discarded, and the non-first-detection flow ends. If a first comparison result is generated, the acquired plurality of captured images is composited into a parking-violation evidence chain image, and the non-first-detection flow ends.
In response to the preset image type of the image to be captured being neither a close-range image nor a close-up image, the captured image is acquired, and it is determined whether its capture sequence number equals the preset capture number. If not, the non-first-detection flow ends. If so, the second pan-tilt control parameter is invoked to detect the target vehicle and obtain third detection information; the license plate information and second detection frame information extracted from the third detection information are compared with the license plate information and second initial detection frame information from the second initial detection information, and it is determined whether a first comparison result is generated. If no first comparison result is generated, the plurality of cached captured images is discarded, and the non-first-detection flow ends. If a first comparison result is generated, the acquired plurality of captured images is composited into a parking-violation evidence chain image, and the non-first-detection flow ends.
Correspondingly, the present application provides a vehicle parking-violation evidence obtaining device.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of a vehicle parking-violation evidence obtaining device according to the present application. As shown in fig. 8, the vehicle parking-violation evidence obtaining device 80 includes a detection module 81, a matching module 82, a saving module 83, a capture module 84, and a generating module 85.
The detection module 81 is configured to detect a target vehicle and obtain first detection information and second detection information, where both the first detection information and the second detection information include ID information generated based on the target vehicle.
The matching module 82 is configured to match the ID information generated based on the target vehicle and included in the first detection information and the second detection information against the ID information stored in the database.
The saving module 83 is configured to save, in response to the ID information generated based on the target vehicle not existing in the database, the first detection information and the second detection information as first initial detection information and second initial detection information, where the first initial detection information includes a first control parameter, and the second initial detection information includes license plate information of the target vehicle and a second control parameter.
The capture module 84 is configured to capture the target vehicle based on the first control parameter and/or the second control parameter to obtain a plurality of captured images, where an image captured based on the first control parameter is a long-range image including vehicle-body information of the target vehicle, and an image captured based on the second control parameter is a close-range image including the license plate information.
The generating module 85 is configured to compare, in response to the last captured image being a close-range image of the target vehicle, the license plate information extracted from the last captured image with the license plate information in the second initial detection information, and to generate a parking-violation evidence obtaining result for the target vehicle based on a first comparison result.
The specific process is described in the related descriptions of S11 to S15, S21 to S29, S291 to S294, and S51 to S55, and is not repeated here.
Compared with the prior art, in this embodiment the matching module 82 matches the ID information of the target vehicle included in the detected first detection information and second detection information against the ID information stored in the database, and, after it is determined that the first detection information and the second detection information were obtained by a first detection, the saving module 83 saves them as the first initial detection information and the second initial detection information. In this way, whether a detection of the target vehicle is the first detection can be determined, which prevents the algorithm from treating the vehicle as a newly appeared target, avoids wasting captured images, and improves capture efficiency. Further, snapshots are taken by the capture module 84, and the generating module 85 extracts the license plate information in the last close-range image and compares it with the license plate information in the second initial detection information, so more effective information can be obtained for matching and algorithm detection errors are avoided. The plurality of captured images can then be used to efficiently generate the parking-violation evidence obtaining result, which improves evidence obtaining efficiency and solves the problems of a low capture rate and a low evidence obtaining rate when the same target vehicle is detected over a long period.
Correspondingly, the present application provides an electronic device.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the application. As shown in fig. 9, the electronic device 90 includes a memory 91 and a processor 92.
In the present embodiment, the memory 91 is used for storing program data which, when executed, implement the steps of the vehicle parking-violation evidence obtaining method described above, and the processor 92 is used for executing the program instructions stored in the memory 91 to implement the steps of that method.
Specifically, the processor 92 is configured to control itself and the memory 91 to implement the steps of the vehicle parking-violation evidence obtaining method described above. The processor 92 may also be referred to as a CPU (Central Processing Unit). The processor 92 may be an integrated circuit chip with signal processing capabilities. The processor 92 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 92 may be implemented jointly by a plurality of integrated circuit chips.
Compared with the prior art, in this embodiment the processor 92 matches the ID information of the target vehicle included in the detected first detection information and second detection information against the ID information stored in the database, and, after determining that they were obtained by a first detection, saves them as the first initial detection information and the second initial detection information. In this way, whether a detection of the target vehicle is the first detection can be determined, which prevents the algorithm from treating the vehicle as a newly appeared target, avoids wasting captured images, and improves capture efficiency. Further, by extracting the license plate information in the last close-range image and comparing it with the license plate information in the second initial detection information, more effective information can be obtained for matching, thereby avoiding algorithm detection errors. The plurality of captured images can then be used to efficiently generate the parking-violation evidence obtaining result, which improves evidence obtaining efficiency and solves the problems of a low capture rate and a low evidence obtaining rate when the same target vehicle is detected over a long period.
Accordingly, the present application provides a computer-readable storage medium.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
The computer-readable storage medium 100 stores a computer program 1001 which, when executed by the processor described above, implements the steps of the vehicle parking-violation evidence obtaining method described above. In particular, the integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in the computer-readable storage medium 100. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in the computer-readable storage medium 100 and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The computer-readable storage medium 100 includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present application or directly or indirectly applied to other related technical fields are included in the scope of the present application.
If the technical solution of the present application involves personal information, a product applying the technical solution clearly informs users of the personal-information processing rules and obtains their individual consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying it obtains separate consent before processing the sensitive personal information and also satisfies the requirement of "explicit consent". For example, a clear and conspicuous sign is set at a personal-information collection device such as a camera to inform people that they are entering a personal-information collection range and that personal information will be collected; if an individual voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, a personal-information processing device uses obvious signs or notices to announce the personal-information processing rules, and personal authorization is obtained through a pop-up window or by the individual's own act of uploading his or her personal information. The personal-information processing rules may include information such as the personal-information processor, the purpose of processing, the processing method, and the types of personal information processed.