CN113298059A - Pantograph foreign matter intrusion detection method, device, computer equipment, system and storage medium - Google Patents


Info

Publication number
CN113298059A
Authority
CN (China)
Prior art keywords
image, pantograph, gray, video, gaussian
Legal status
Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number
CN202110848718.6A
Other languages
Chinese (zh)
Inventors
熊仕勇, 左超华, 董彬, 蒋华强
Current and Original Assignee
Kunshan High New Track Traffic Intelligent Equipment Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by
Kunshan High New Track Traffic Intelligent Equipment Co ltd
Priority to
CN202110848718.6A
Publication of
CN113298059A

Abstract

The invention relates to the technical field of rail transit, and discloses a pantograph foreign matter intrusion detection method, a device, computer equipment, a system and a storage medium. A video image acquired in real time at the top of the vehicle is first identified and positioned based on region of interest (ROI) positioning technology to extract the pantograph region as a pantograph gray image. Then, on one hand, a Gaussian mixture model segments the pantograph gray image into foreground and background to obtain a foreground object binary image; on the other hand, a three-frame difference method performs moving object detection on the pantograph gray image to obtain a moving object binary image. Finally, the two binary images are combined by a logical AND operation, and a contour area threshold determines whether the result contains an intruding foreign object. The pantograph state can thus be monitored online in real time while the locomotive is running, foreign matter intruding into the pantograph working area can be found automatically and accurately, and the safe and stable operation of the electric locomotive is safeguarded.

Description

Pantograph foreign matter intrusion detection method, device, computer equipment, system and storage medium
Technical Field
The invention belongs to the technical field of rail transit, and particularly relates to a pantograph foreign matter intrusion detection method, together with a corresponding device, computer equipment, system and storage medium.
Background
In recent years, inter-city electric locomotives have developed rapidly in China owing to their speed, safety, stability, punctuality, comfort and energy efficiency. An electric locomotive collects electric energy through the sliding contact between the pantograph device mounted on its roof and the catenary installed along the railway, providing the basic kinetic energy for locomotive operation. As operating loads, mileage and speeds increase, the pantograph becomes prone to fault defects such as abnormal impact and abrasion. According to statistics, pantograph-catenary accidents account for about 80 percent of the power-failure and outage accidents of the electrified railway system in China. Therefore, for the driving safety of the electric locomotive, pantograph faults must be detected in time to ensure the normal operation of the pantograph-catenary system.
At present, pantograph fault detection relies mainly on manual inspection and image-based inspection. The former requires driving the electric locomotive into a service depot; after the locomotive is stopped, the pantograph lowered and the power cut off, a field engineer climbs onto the roof to inspect for faults, relying on extensive practical experience to judge whether the pantograph can continue in service. In summary, the manual inspection method involves a heavy workload, low efficiency and potential safety hazards, and cannot detect the real working state of the pantograph while the locomotive is running.
Image-based inspection, as a non-contact measurement technique, is fast, stable, safe and effective, yields intuitive measurement results, and can independently and objectively monitor and record the pantograph and its interaction with the catenary. However, most current vehicle-mounted pantograph image monitoring devices are installed on an operating electric locomotive, use a remote wireless fixed-point monitoring system to record the pantograph's condition, and then transmit the collected data back remotely, so that technicians judge the pantograph's condition by observing live pictures and playing back recordings. This judgment is still based on the technicians' experience and is prone to misjudgment or missed detection. In addition, because related personnel analyze the captured images offline, the analysis results cannot be delivered to the on-board operators in time to guide decisions, which buries hidden dangers in the safe and stable operation of the electric locomotive.
Moreover, most image detection systems applied to electric locomotives monitor only the pantograph slide plate; foreign matter intrusion across the whole pantograph-catenary contact area has not been studied, even though such intrusion into the pantograph-catenary action area has serious consequences and can threaten driving safety. Therefore, how to monitor the pantograph state online in real time during operation and automatically and accurately find whether foreign matter has intruded into the pantograph working area is a subject that urgently needs research by those skilled in the art, so that alarm information can be prompted in time and unexpected driving-safety accidents effectively avoided.
Disclosure of Invention
In order to solve the problems that conventional pantograph fault detection methods lack automation and real-time performance and cannot perform foreign matter intrusion detection over the whole pantograph-catenary contact area, the invention aims to provide a pantograph foreign matter intrusion detection method, device, computer equipment, system and storage medium that can be applied to an electric locomotive, monitor the pantograph state online in real time while the locomotive is running, automatically and accurately find whether foreign matter has intruded into the pantograph working area, facilitate all-round online monitoring of the pantograph working area, and safeguard the safe and stable operation of the electric locomotive.
In a first aspect, the present invention provides a pantograph foreign object intrusion detection method, including:
acquiring a video image acquired by a monitoring camera in real time, wherein the monitoring camera is mounted on the roof of the vehicle and enables the view field of a lens to cover the area where the pantograph is located;
carrying out ROI (region of interest) positioning processing on the video image, and extracting a pantograph gray image;
segmenting the foreground and the background of the pantograph gray image by using a Gaussian mixture model to obtain a foreground object binary image, wherein the gray values of foreground pixel points in the foreground object binary image are uniform non-zero values, and the gray values of background pixel points in the foreground object binary image are zero values;
carrying out moving target detection processing on three frames of pantograph gray images with continuous collection time sequence by using a three-frame difference method to obtain a moving target binary image, wherein the three frames of pantograph gray images comprise the pantograph gray images and two frames of pantograph gray images with the collection time sequences positioned in front of the pantograph gray images, the gray values of moving target pixel points in the moving target binary image are uniform non-zero values, and the gray values of background pixel points in the moving target binary image are zero values;
carrying out image logic and operation processing on the foreground object binary image and the moving target binary image to obtain a new binary image;
and carrying out contour detection processing on the new binary image, and determining a target contour which is obtained by detection and has an enclosing area exceeding a preset area threshold value in the contour as an invading foreign body contour, wherein the target contour refers to a closed contour enclosed by a plurality of adjacent edge pixel points in the new binary image, and the edge pixel points refer to pixel points which have non-zero gray values and at least one adjacent pixel point in eight adjacent pixel points around and have a gray value of zero.
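As a rough illustration of the last two steps, the sketch below fuses a foreground object binary image and a moving object binary image with a logical AND, then keeps connected blobs whose pixel count exceeds an area threshold. This is our own simplified stand-in (a flood fill in place of contour detection; every name and parameter is illustrative, not from the patent):

```python
import numpy as np
from collections import deque

def intrusion_blobs(fg_mask, motion_mask, min_area=20):
    """AND two binary masks, then keep 8-connected blobs above min_area."""
    fused = (fg_mask > 0) & (motion_mask > 0)   # image logical AND of the two masks
    h, w = fused.shape
    seen = np.zeros((h, w), dtype=bool)
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if not fused[sy, sx] or seen[sy, sx]:
                continue
            q, pixels = deque([(sy, sx)]), []
            seen[sy, sx] = True
            while q:                            # 8-connected flood fill
                y, x = q.popleft()
                pixels.append((y, x))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and fused[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
            if len(pixels) > min_area:          # area threshold on each blob
                blobs.append(pixels)
    return blobs
```

A blob surviving this filter corresponds to the "intruding foreign body contour" of the method; a production system would use a proper contour tracer and the enclosed polygon area instead of raw pixel counts.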
Based on the above, a video image acquired in real time on the roof can be identified and positioned using region of interest (ROI) positioning technology to extract the pantograph region. A Gaussian mixture model then segments the pantograph gray image into foreground and background to obtain a foreground object binary image, while a three-frame difference method performs moving target detection on the pantograph gray image to obtain a moving target binary image. Finally, a logical AND operation combines the two binary images, and a contour area threshold determines whether an intruding foreign object exists in the result. The method can thus be applied to an electric locomotive to monitor the pantograph state online in real time during operation, automatically and accurately find foreign matter intruding into the pantograph working area, facilitate all-round online monitoring of the pantograph region, issue alarm prompts in time, and safeguard the safe and stable operation of the electric locomotive.
In one possible design, performing ROI positioning on the video image to extract a pantograph gray image includes:
sliding a screenshot window in the transverse and longitudinal directions of the video image with preset step lengths, and intercepting a plurality of video sub-images of the sample standard size;
extracting, for each of the plurality of video sub-images, the corresponding histogram of oriented gradients (HOG) feature;
importing, for each of the plurality of video sub-images, the corresponding HOG feature into a support vector machine (SVM) classification model trained on the HOG features of positive and negative samples, judging whether the video sub-image contains a pantograph graphic, and recording the region position of the video sub-image within the video image when it is judged to contain a pantograph graphic, where a positive sample refers to a sample image of the sample standard size that contains a pantograph graphic, and a negative sample refers to a sample image of the sample standard size that does not contain a pantograph graphic;
according to the position of the area, intercepting a pantograph image with a sample standard size from the video image;
and carrying out graying processing on the pantograph image to obtain the pantograph grayscale image.
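The ROI positioning flow above can be sketched as a plain sliding-window search. The classifier here is a stand-in callable (the patent trains an SVM on HOG features, which is omitted), and every name, window size and step length is an assumption for illustration:

```python
import numpy as np

def locate_pantograph(frame, classify, win=(64, 64), step=16):
    """Slide a window over the frame; return the grayscale crop that the
    classifier accepts, plus its (x, y) region position in the frame."""
    H, W = frame.shape[:2]
    wh, ww = win
    for y in range(0, H - wh + 1, step):        # longitudinal sliding
        for x in range(0, W - ww + 1, step):    # transverse sliding
            sub = frame[y:y+wh, x:x+ww]
            if classify(sub):                   # stand-in for the trained SVM
                gray = sub if sub.ndim == 2 else sub.mean(axis=2).astype(np.uint8)
                return gray, (x, y)             # crop plus its region position
    return None, None
```

In the patented method the crop of sample standard size is then grayed and passed on to the segmentation and difference stages.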
In one possible design, for each of the plurality of video sub-images, extracting a corresponding histogram of oriented gradients HOG feature respectively includes:
carrying out gamma correction processing on the video sub-image to obtain a new video sub-image;
respectively calculating to obtain corresponding transverse gradient components and longitudinal gradient components aiming at each pixel point in the new video subimage;
respectively calculating corresponding gradient amplitude and gradient direction angle according to corresponding transverse gradient component and longitudinal gradient component aiming at each pixel point in the new video subimage;
dividing the new video subimage into a plurality of cell units;
performing, for each cell unit in the plurality of cell units, histogram statistical processing on all corresponding pixel points over each of q angle subintervals to obtain a corresponding q-dimensional feature vector, where q represents a positive integer greater than 5, the q angle subintervals are obtained by dividing the angle interval [-90°, 90°] into q equal parts, and the histogram statistics are accumulated by adding a pixel point's gradient magnitude to the angle subinterval to which its gradient direction angle belongs;
splicing p adjacent cell units among the cell units into a block, and concatenating the q-dimensional feature vectors of the p cell units to form the p×q-dimensional feature vector of the block;
scanning in the horizontal and vertical directions of the new video sub-image with the cell-unit size as the step length to obtain a plurality of blocks and the p×q-dimensional feature vector corresponding to each block;
and concatenating the p×q-dimensional feature vectors of the blocks to form the HOG feature corresponding to the video sub-image.
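A minimal hand-rolled version of these HOG steps, assuming 8×8 cells, 2×2-cell blocks (so p = 4) and q = 9 bins. This is our own sketch: it follows the patent's plain concatenation and omits the per-block normalization that some HOG variants add:

```python
import numpy as np

def hog_features(img, cell=8, block=2, q=9):
    """HOG of a grayscale image: gamma correction, central gradients,
    per-cell magnitude-weighted histograms over [-90, 90), block concat."""
    img = np.sqrt(img.astype(float) / 255.0)          # gamma correction
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]            # transverse gradient component
    gy[1:-1, :] = img[2:, :] - img[:-2, :]            # longitudinal gradient component
    mag = np.hypot(gx, gy)                            # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx))
    ang = ((ang + 90.0) % 180.0) - 90.0               # fold angle into [-90, 90)
    bins = np.clip(((ang + 90.0) * q / 180.0).astype(int), 0, q - 1)
    cy, cx = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((cy, cx, q))
    for i in range(cy):                               # q-dimensional vector per cell,
        for j in range(cx):                           # weighted by gradient magnitude
            b = bins[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            for k in range(q):
                hist[i, j, k] = m[b == k].sum()
    feats = []
    for i in range(cy - block + 1):                   # blocks of 2x2 cells,
        for j in range(cx - block + 1):               # scanned with one-cell step
            feats.append(hist[i:i+block, j:j+block].ravel())
    return np.concatenate(feats)                      # p*q dims per block, concatenated
```

For a 32×32 sub-image this yields 3×3 blocks of 36 dimensions each, i.e. a 324-dimensional feature vector.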
In one possible design, performing foreground and background segmentation processing on the pantograph gray image by using a Gaussian mixture model to obtain a foreground object binary image includes:
acquiring $T$ frames of pantograph gray images whose acquisition timing is continuous and located before the pantograph gray image, where $T$ represents a positive integer greater than 10;
extracting, from the $T$ frames of pantograph gray images, the gray value $x_{j,t_i}$ of the pixel point at each pixel position, where $i$ represents a positive integer between $1$ and $T$, $t_i$ represents the acquisition time of the $i$-th frame among the $T$ frames arranged from earliest to latest, $x_{j,t_i}$ represents the gray value of the pixel point at the $j$-th pixel position in the $i$-th frame, $j$ represents a positive integer between $1$ and $J$, and $J$ represents the total number of pixel positions in a frame of pantograph gray image;
updating, for each pixel position in the $T$ frames of pantograph gray images, the corresponding Gaussian mixture model according to the following steps S331 to S338:
S331. initializing $i$, and initializing the weight, mean and variance of each Gaussian distribution in the Gaussian mixture model corresponding to the $j$-th pixel position, and then executing step S332, where $i$ is initialized to $1$, the weight is initialized to $1/K$, and $K$ represents the total number of Gaussian distributions in the Gaussian mixture model corresponding to the $j$-th pixel position;
S332. judging whether a first matching condition $\lvert x_{j,t_i}-\mu_k\rvert \le \lambda\sigma_k$ holds; if so, executing step S333, otherwise executing step S334, where $\mu_k$ represents the current mean of the $k$-th Gaussian distribution corresponding to the $j$-th pixel position, $\sigma_k^2$ represents the current variance of the $k$-th Gaussian distribution, $k$ represents a positive integer between $1$ and $K$, and $\lambda$ represents a positive number greater than or equal to $2.5$;
S333. updating the weight, mean and variance of the $k$-th Gaussian distribution according to the following formulas:
$\omega_k \leftarrow (1-\alpha)\omega_k+\alpha$, $\quad \mu_k \leftarrow (1-\rho)\mu_k+\rho\,x_{j,t_i}$, $\quad \sigma_k^2 \leftarrow (1-\rho)\sigma_k^2+\rho\,(x_{j,t_i}-\mu_k)^{\mathrm{T}}(x_{j,t_i}-\mu_k)$
where the left-hand sides represent the updated weight, mean and variance of the $k$-th Gaussian distribution and the right-hand sides use their current values, $\alpha$ represents a preset learning update rate of the weight, $\rho$ represents the learning update rate of the mean and variance and satisfies $\rho=\alpha\,\eta(x_{j,t_i};\mu_k,\sigma_k^2)$ with $\eta(\cdot)$ the Gaussian probability density, and $\mathrm{T}$ denotes the matrix transpose symbol; then executing step S337;
S334. updating the weight of the $k$-th Gaussian distribution according to the following formula:
$\omega_k \leftarrow (1-\alpha)\omega_k$
where $\omega_k$ on the left represents the updated weight of the $k$-th Gaussian distribution and $\alpha$ represents the preset learning update rate of the weight, and then executing step S335;
S335. judging whether the first matching condition fails for all $K$ Gaussian distributions in the Gaussian mixture model corresponding to the $j$-th pixel position; if none of the $K$ Gaussian distributions satisfies it, executing step S336, otherwise executing step S337;
S336. rejecting, among the $K$ Gaussian distributions, the Gaussian distribution with the minimum current weight and adding a new Gaussian distribution so that $K$ Gaussian distributions are retained, and then executing step S337, where the mean of the new Gaussian distribution is initialized to $x_{j,t_i}$;
S337. normalizing the weights of the Gaussian distributions in the Gaussian mixture model corresponding to the $j$-th pixel position so that the sum of the new weights of all Gaussian distributions equals $1$, and then executing step S338;
S338. judging whether $i$ equals $T$; if so, ending the update of the Gaussian mixture model, otherwise incrementing $i$ by $1$ and returning to execute step S332;
arranging, for each pixel position, all Gaussian distributions in the corresponding Gaussian mixture model in order of the ratio $\sigma/\omega$ from small to large to obtain a corresponding Gaussian distribution queue, where $\omega$ represents the current weight of a Gaussian distribution and $\sigma^2$ represents its current variance;
determining, for each pixel position, the corresponding first $B$ Gaussian distributions in the corresponding Gaussian distribution queue, where $B$ takes a value according to the following formula:
$B=\arg\min_b\left(\sum_{l=1}^{b}\omega_l > Q\right)$
where $\arg\min_b(\cdot)$ represents the smallest value of the positive integer variable $b$ at which the sum in parentheses first exceeds the preset weight threshold $Q$, $l$ represents a positive integer between $1$ and $b$, and $\omega_l$ represents the current weight of the Gaussian distribution whose queue number in the corresponding Gaussian distribution queue is $l$;
judging, for each pixel position, whether a second matching condition $\lvert x_j-\mu\rvert \le \lambda\sigma$ holds for any one of the corresponding first $B$ Gaussian distributions; if so, judging the corresponding pixel point in the pantograph gray image to be a background pixel point, otherwise judging it to be a foreground pixel point, where $x_j$ represents the gray value of the corresponding pixel point in the pantograph gray image, and $\mu$ and $\sigma^2$ represent the current mean and current variance of a Gaussian distribution among the corresponding first $B$ Gaussian distributions;
and performing image binarization processing on the pantograph gray image according to the judgment result of the pixel point type to obtain the foreground object binary image.
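The per-pixel update loop S331 to S338 and the subsequent background test can be sketched for a single pixel position as follows. This is our simplification of the classic Stauffer-Grimson scheme: the constants, the $\rho=\alpha/\omega_k$ shortcut and the initial variance are assumptions, not the patent's exact values:

```python
import numpy as np

K, ALPHA, LAM = 3, 0.05, 2.5   # Gaussians per pixel, weight rate, match threshold

def init_model():
    # one pixel position: K Gaussians with equal weights and a wide variance
    return {"w": np.full(K, 1.0 / K), "mu": np.zeros(K), "var": np.full(K, 225.0)}

def update(model, x):
    w, mu, var = model["w"], model["mu"], model["var"]
    match = np.abs(x - mu) <= LAM * np.sqrt(var)    # first matching condition (S332)
    if match.any():
        k = int(np.argmax(match))                   # first Gaussian that matches
        rho = min(1.0, ALPHA / max(w[k], 1e-6))     # simplified mean/variance rate
        w[k] = (1 - ALPHA) * w[k] + ALPHA           # S333: strengthen matched Gaussian
        mu[k] = (1 - rho) * mu[k] + rho * x
        var[k] = (1 - rho) * var[k] + rho * (x - mu[k]) ** 2
        w[np.arange(K) != k] *= (1 - ALPHA)         # S334: decay the others
    else:
        w *= (1 - ALPHA)
        k = int(np.argmin(w))                       # S336: replace weakest Gaussian
        mu[k], var[k], w[k] = x, 225.0, ALPHA
    w /= w.sum()                                    # S337: renormalize weights

def is_background(model, x, Q=0.7):
    w, mu, var = model["w"], model["mu"], model["var"]
    order = np.argsort(np.sqrt(var) / w)            # queue: ratio sigma/omega ascending
    B = int(np.searchsorted(np.cumsum(w[order]), Q)) + 1
    return any(abs(x - mu[k]) <= LAM * np.sqrt(var[k]) for k in order[:B])
```

After feeding the model a run of stable background samples, a matching gray value is classified as background and an outlier as foreground; the full method runs one such model per pixel position.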
In one possible design, performing moving object detection processing on three frames of pantograph gray images with continuous acquisition timing by using the three-frame difference method to obtain a moving object binary image includes:
respectively calculating to obtain a first difference image of a first two-frame pantograph gray image in the three-frame pantograph gray image and a second difference image of a second two-frame pantograph gray image in the three-frame pantograph gray image, wherein the gray value of each pixel point in the first difference image is the absolute value of the gray value difference of the corresponding pixel point in the first two-frame pantograph gray image, and the gray value of each pixel point in the second difference image is the absolute value of the gray value difference of the corresponding pixel point in the second two-frame pantograph gray image;
respectively carrying out image binarization processing on the first differential image and the second differential image according to the comparison result of the gray value of the pixel point and a preset threshold value to obtain corresponding differential binarization images;
and performing image logic and operation processing on the two differential binarization images obtained through the image binarization processing to obtain the moving target binarization image.
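These three steps can be sketched directly with NumPy (the binarization threshold value is an assumption):

```python
import numpy as np

def three_frame_diff(f1, f2, f3, thresh=25):
    """Binary moving-target mask from three consecutive gray frames."""
    d1 = np.abs(f2.astype(int) - f1.astype(int))    # first difference image
    d2 = np.abs(f3.astype(int) - f2.astype(int))    # second difference image
    b1 = np.where(d1 > thresh, 255, 0)              # binarize against the threshold
    b2 = np.where(d2 > thresh, 255, 0)
    return np.where((b1 > 0) & (b2 > 0), 255, 0).astype(np.uint8)  # logical AND
```

The AND of the two differential binary images keeps only pixels that changed across both frame pairs, which suppresses the "ghost" regions a two-frame difference leaves behind.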
In one possible design, before performing image logical and operation processing on the foreground object binary image and the moving target binary image, the method further includes:
and performing morphological opening processing on the foreground object binary image and/or the moving target binary image to obtain a new foreground object binary image and/or a new moving target binary image for the logical AND operation processing.
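A morphological opening (erosion followed by dilation) removes isolated noise pixels while preserving larger shapes. A self-contained sketch with a 3×3 structuring element, assuming binary uint8 masks:

```python
import numpy as np

def _shift_stack(img):
    # the nine 3x3-neighbourhood views of a zero-padded image
    p = np.pad(img, 1, constant_values=0)
    h, w = img.shape
    return [p[i:i+h, j:j+w] for i in range(3) for j in range(3)]

def erode(img):
    return np.minimum.reduce(_shift_stack(img))     # min over 3x3 neighbourhood

def dilate(img):
    return np.maximum.reduce(_shift_stack(img))     # max over 3x3 neighbourhood

def open_binary(img):
    return dilate(erode(img))                       # opening = erosion then dilation
```

Single-pixel speckle vanishes under erosion and is never restored, while a solid blob keeps an interior pixel and is rebuilt by the dilation.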
In a second aspect, the invention provides a pantograph foreign matter intrusion detection device, which comprises an image acquisition module, an image positioning module, a foreground segmentation module, a motion detection module, an operation processing module and a contour detection module;
the image acquisition module is used for acquiring a video image acquired by a monitoring camera in real time, wherein the monitoring camera is arranged on the roof of the vehicle and enables the view field of the lens to cover the area where the pantograph is located;
the image positioning module is in communication connection with the image acquisition module and is used for performing ROI (region of interest) positioning processing on the video image and extracting a pantograph gray image;
the foreground segmentation module is in communication connection with the image positioning module and is used for performing foreground and background segmentation processing on the pantograph gray level image by using a Gaussian mixture model to obtain a foreground object binary image, wherein gray values of foreground pixel points in the foreground object binary image are uniform non-zero values, and gray values of background pixel points in the foreground object binary image are zero values;
the motion detection module is in communication connection with the image positioning module and is used for performing motion target detection processing on three frames of pantograph gray level images with continuous collection time sequence by using a three-frame difference method to obtain a motion target binary image, wherein the three frames of pantograph gray level images comprise the pantograph gray level images and two frames of pantograph gray level images with collection time sequences positioned in front of the pantograph gray level images, the gray level values of motion target pixel points in the motion target binary image are uniform non-zero values, and the gray level value of background pixel points in the motion target binary image is zero;
the operation processing module is respectively in communication connection with the foreground segmentation module and the motion detection module and is used for carrying out image logic and operation processing on the foreground object binary image and the motion target binary image to obtain a new binary image;
the contour detection module is in communication connection with the operation processing module and is used for performing contour detection processing on the new binary image and determining a target contour which is obtained through detection and has an enclosing area exceeding a preset area threshold value in the contour as an invading foreign body contour, wherein the target contour refers to a closed contour which is enclosed by a plurality of adjacent edge pixel points in the new binary image, and the edge pixel points refer to pixel points which have a gray value of non-zero value and at least one adjacent pixel point in eight adjacent pixel points around and have a gray value of zero value.
In a third aspect, the present invention provides a computer device, comprising a memory, a processor and a transceiver, which are sequentially connected in communication, wherein the memory is used for storing a computer program, the transceiver is used for transceiving data, and the processor is used for reading the computer program and executing the method according to the first aspect or any one of the possible designs of the first aspect.
In a fourth aspect, the invention provides a pantograph foreign matter intrusion detection system, which comprises a monitoring camera, a processing device, a vehicle-mounted host, a display and an alarm prompting device, wherein the monitoring camera is in communication connection with the processing device, the processing device is in communication connection with the vehicle-mounted host, and the vehicle-mounted host is in communication connection with the display and the alarm prompting device respectively;
the monitoring camera is used for being installed on the roof of the vehicle, enabling the lens view to cover the area where the pantograph is located, and transmitting the video image acquired in real time to the processing equipment;
the processing device is configured to execute the method according to any one of the first aspect or possible designs of the first aspect, obtain a foreign object intrusion detection result, and transmit the video image and the foreign object intrusion detection result to the vehicle-mounted host;
the vehicle-mounted host is used for storing the video image and the foreign object intrusion detection result, distributing and transmitting the video image and the foreign object intrusion detection result to the display, and sending an alarm trigger instruction to the alarm prompting device when judging that a foreign object intrusion condition is met according to the foreign object intrusion detection result;
the display is used for displaying the video image and the foreign object intrusion detection result;
and the alarm prompting device is used for starting an alarm prompting action when the alarm triggering instruction is received.
In a fifth aspect, the present invention provides a storage medium having stored thereon instructions for performing the method as described above in the first aspect or any one of the possible designs of the first aspect when the instructions are run on a computer.
In a sixth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method as described above in the first aspect or any one of the possible designs of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a pantograph foreign matter intrusion detection method provided by the present invention.
Fig. 2 is an exemplary diagram of a video image captured by a monitoring camera provided by the present invention.
Fig. 3 is a schematic flow chart illustrating a foreground and a background segmentation process performed in the pantograph foreign object intrusion detection method according to the present invention.
Fig. 4 is a schematic structural diagram of a pantograph foreign matter intrusion detection device provided by the present invention.
Fig. 5 is a schematic structural diagram of a computer device provided by the present invention.
Fig. 6 is a schematic structural diagram of a pantograph foreign object intrusion detection system provided by the present invention.
In the above drawings: 1-a monitoring camera; 2-a processing device; 3-vehicle host computer; 4-a display; 5-an alarm prompting device; 61-a roof power supply; 62-an in-vehicle power supply; 7-a light supplement lamp; 100-pantograph.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely representative of exemplary embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various objects, these objects should not be limited by these terms. These terms are only used to distinguish one object from another. For example, a first object may be referred to as a second object, and similarly, a second object may be referred to as a first object, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or", as it may appear herein, merely describes an association between objects and covers three cases: "A and/or B" may mean that A exists alone, B exists alone, or A and B exist at the same time. The term "/and", which describes another association, covers two cases: "A /and B" may mean that A exists alone, or A and B exist at the same time. In addition, the character "/", as it may appear herein, generally means that the former and latter associated objects are in an "or" relationship.
As shown in fig. 1 to 3, the pantograph foreign object intrusion detection method provided in the first aspect of the present embodiment may be, but is not limited to being, executed by a computer device disposed on an electric locomotive and communicatively connected to a roof monitoring camera. The method for detecting intrusion of a pantograph foreign object may include, but is not limited to, the following steps S1 to S6.
S1, acquiring a video image acquired by a monitoring camera in real time, wherein the monitoring camera is installed on the roof of the vehicle and enables the view field of a lens to cover the area where the pantograph is located.
In step S1, since the lens view covers the area where the pantograph is located, a complete pantograph image can be acquired in the obtained video image, as shown in fig. 2. In addition, the computer equipment is in communication connection with the monitoring camera, so that the video images can be transmitted in real time after being collected.
And S2, carrying out region of interest (ROI) positioning processing on the video image (in machine vision and image processing, a region delineated on the image to be processed by a box, circle, ellipse, irregular polygon or the like is called a region of interest), and extracting a pantograph gray image.
In step S2, the ROI positioning process is to identify a pantograph pattern in the video image, outline a region of the pantograph pattern in a square frame, a circle, an ellipse, or an irregular polygon, and finally intercept the pantograph grayscale image from the video image according to a positioning result of the region. Preferably, the ROI positioning process is performed on the video image to extract the pantograph gray scale image, which includes, but is not limited to, the following steps S21 to S25.
And S21, sliding screenshot windows in the horizontal direction and the longitudinal direction of the video image respectively by preset step lengths, and intercepting to obtain a plurality of video sub-images with the standard size of the sample.
In step S21, the horizontal direction refers to an X-axis direction on an XOY coordinate plane of the video image, and the vertical direction refers to a Y-axis direction on the XOY coordinate plane of the video image, and the sliding steps of the screenshot window in the horizontal direction and the vertical direction may be the same or different. The screenshot window can be square or rectangular, the size of the screenshot window is equal to the standard size of the sample, so that a video subimage with the standard size of the sample can be obtained by means of interception, the size of the video subimage is consistent with the size of positive and negative samples of a Support Vector Machine (SVM) classification model after subsequent training is finished, and accuracy of a classification result is guaranteed.
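The sliding-screenshot step S21 can be sketched in numpy as follows. The function name, the 64-pixel window (standing in for the sample standard size) and the 16-pixel steps are illustrative choices, not values from the patent:

```python
import numpy as np

def sliding_crops(image, win, step_x, step_y):
    """Slide a win x win screenshot window over the frame and collect the
    video sub-images together with their region positions (step S21)."""
    h, w = image.shape[:2]
    crops = []
    for y in range(0, h - win + 1, step_y):
        for x in range(0, w - win + 1, step_x):
            # record the crop together with its position in the frame
            crops.append(((x, y), image[y:y + win, x:x + win]))
    return crops

frame = np.zeros((120, 160), dtype=np.uint8)   # placeholder roof-camera frame
subimages = sliding_crops(frame, win=64, step_x=16, step_y=16)
```

Every crop has exactly the window size, so it matches the input size expected by the subsequently trained classifier.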
And S22, extracting the corresponding Histogram of Oriented Gradient (HOG) features for each video sub-image in the plurality of video sub-images.
In step S22, the Histogram of Oriented Gradient (HOG) is a feature descriptor used for object detection in computer vision and image processing. HOG features are formed by calculating and counting gradient direction histograms of local areas of an image; they are commonly used features in the computer vision and recognition field and mainly describe the distribution of gradient intensity and direction over the image. Further preferably, extracting the corresponding HOG features for each of the plurality of video sub-images includes, but is not limited to, the following steps S221 to S228.
S221, carrying out gamma correction processing on the video sub-image to obtain a new video sub-image.
In step S221, gamma correction, also called gamma nonlinearity or gamma encoding, performs a nonlinear operation (or its inverse) on the luminance or tristimulus values in a film or image system, so that the gray values of the pixels keep a small storage range (0-255) with a more balanced ratio of bright and dark portions; the new video sub-image obtained through this conventional gamma correction processing benefits the subsequent HOG feature extraction. Specifically, the gamma correction processing may be performed according to the following formula:

G'(x, y) = G(x, y)^γ

where G(x, y) denotes the gray value of the pixel at position (x, y) in the video sub-image, G'(x, y) denotes the gray value of the pixel at position (x, y) in the new video sub-image, x denotes the lateral coordinate value of the pixel position, y denotes the longitudinal coordinate value of the pixel position, and γ denotes the gamma correction parameter, whose value is chosen empirically (γ = 0.5 is the common choice in HOG pipelines).
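The gamma correction of step S221 can be sketched as follows, normalizing the 8-bit gray values before exponentiation; the γ = 0.5 default is an illustrative choice:

```python
import numpy as np

def gamma_correct(gray, gamma=0.5):
    """Gamma-correct an 8-bit gray image: normalize to [0, 1], raise to
    gamma, and rescale back to the 0-255 range (step S221)."""
    norm = gray.astype(np.float64) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

patch = np.array([[0, 64], [128, 255]], dtype=np.uint8)
corrected = gamma_correct(patch)
```

With γ < 1 the dark gray levels are lifted while the extremes 0 and 255 are preserved, which balances bright and dark portions as described above.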
And S222, respectively calculating to obtain corresponding transverse gradient components and longitudinal gradient components aiming at each pixel point in the new video subimage.
In step S222, the transverse gradient component and the longitudinal gradient component are calculated in the conventional way, i.e., for the pixel at position (x, y), the corresponding transverse gradient component Gx(x, y) and longitudinal gradient component Gy(x, y) are calculated according to the following formulas:

Gx(x, y) = G(x+1, y) − G(x−1, y)
Gy(x, y) = G(x, y+1) − G(x, y−1)

where G(x+1, y), G(x−1, y), G(x, y+1) and G(x, y−1) denote the gray values of the pixels at positions (x+1, y), (x−1, y), (x, y+1) and (x, y−1) in the new video sub-image, respectively.
And S223, aiming at each pixel point in the new video subimage, respectively calculating to obtain a corresponding gradient amplitude and a corresponding gradient direction angle according to the corresponding transverse gradient component and the corresponding longitudinal gradient component.
In step S223, the gradient amplitude and gradient direction angle are calculated in the conventional way, i.e., for the pixel at position (x, y), the corresponding gradient amplitude M(x, y) and gradient direction angle θ(x, y) are calculated according to the following formulas:

M(x, y) = sqrt( Gx(x, y)² + Gy(x, y)² )
θ(x, y) = arctan( Gy(x, y) / Gx(x, y) )

where arctan denotes the arctangent function.
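Steps S222 and S223 together can be sketched with numpy slicing; the angle folding into (−90°, 90°] matches the sub-interval range used in step S225:

```python
import numpy as np

def gradients(gray):
    """Central-difference gradients, then magnitude and direction angle
    (steps S222-S223): Gx = G(x+1,y) - G(x-1,y), Gy = G(x,y+1) - G(x,y-1),
    M = sqrt(Gx^2 + Gy^2), theta = arctan(Gy / Gx) in degrees."""
    g = gray.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # transverse gradient component
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # longitudinal gradient component
    mag = np.sqrt(gx ** 2 + gy ** 2)
    ang = np.degrees(np.arctan2(gy, gx + 1e-12))
    # fold the angle into (-90, 90] to match the q angle sub-intervals
    ang = np.where(ang > 90, ang - 180, ang)
    ang = np.where(ang <= -90, ang + 180, ang)
    return mag, ang

ramp = np.tile(np.arange(5, dtype=np.uint8) * 10, (5, 1))
mag, ang = gradients(ramp)
```

On a horizontal gray ramp the interior gradient is purely transverse, so the direction angle is zero everywhere away from the border.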
S224, dividing the new video subimage into a plurality of cell units.
In step S224, the cell unit is a unit image, and the shape of the cell unit may be a square or a rectangle.
S225, for each cell unit in the plurality of cell units, performing histogram statistical processing on all corresponding pixel points over each of q angle sub-intervals to obtain a corresponding q-dimensional feature vector, wherein q represents a positive integer greater than 5, the q angle sub-intervals are obtained by dividing the angle interval [−90°, 90°] into q equal parts, and the histogram statistics accumulate the gradient amplitude of a pixel point into the angle sub-interval to which its gradient direction angle belongs.
In step S225, q may preferably take a value of 9, that is, the angle range of each angle sub-interval is 20°, and the q angle sub-intervals are [−90°, −70°], [−70°, −50°], [−50°, −30°], [−30°, −10°], [−10°, 10°], [10°, 30°], [30°, 50°], [50°, 70°] and [70°, 90°]; thus, each cell unit yields a corresponding 9-dimensional feature vector. The histogram statistical processing may be exemplified as follows: for the 6th-dimension component of the 9-dimensional feature vector, the corresponding angle sub-interval is [10°, 30°]; if the gradient direction angles of 356 pixel points belong to [10°, 30°], the accumulated gradient amplitudes of those 356 pixel points are taken as the value of the 6th-dimension component.
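The per-cell histogram statistics of step S225 can be sketched as follows, with q = 9 bins of 20° each over [−90°, 90°]:

```python
import numpy as np

def cell_histogram(mag, ang, q=9):
    """q-bin orientation histogram for one cell unit (step S225): each
    pixel votes its gradient magnitude into the angle sub-interval of
    [-90, 90] degrees that contains its gradient direction angle."""
    width = 180.0 / q                           # 20 degrees when q = 9
    hist = np.zeros(q)
    bins = np.clip(((ang + 90.0) // width).astype(int), 0, q - 1)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                            # accumulate magnitudes
    return hist

mag = np.ones((4, 4))                           # unit magnitudes
ang = np.full((4, 4), 15.0)                     # all angles in [10, 30)
hist = cell_histogram(mag, ang)
```

All sixteen pixels fall into the sixth bin [10°, 30°], mirroring the 356-pixel example in the text.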
S226, splicing p adjacent cell units in the cell units into a block, and connecting the q-dimensional feature vectors of the cell units in the p cell units in series to form the p × q-dimensional feature vector of the block.
In step S226, the value of p may also preferably be 9, that is, a block is formed by splicing a 3 × 3 neighborhood of cell units (the central cell unit together with its upper-left, upper, upper-right, left, right, lower-left, lower and lower-right neighbors), and the 9-dimensional feature vectors of the nine cell units are connected in series in top-to-bottom, left-to-right order to form the 9 × 9 = 81-dimensional feature vector of the block.
And S227, scanning in the horizontal direction and the vertical direction of the new video sub-image by taking the size of the cell unit as a step length, so as to obtain a plurality of blocks and the p × q-dimensional feature vector corresponding to each of the plurality of blocks.
In step S227, a specific manner of obtaining the p × q-dimensional feature vector of each of the blocks may refer to step S226, which is not described herein again.
And S228, connecting p × q dimensional feature vectors of the blocks in series to form HOG features corresponding to the video sub-image.
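The block assembly of steps S226 to S228 can be sketched as follows; the (rows, cols, 9) layout of per-cell histograms and the function name are illustrative:

```python
import numpy as np

def block_vector(cell_hists, top, left):
    """Concatenate the 9-dim histograms of a 3 x 3 group of cell units
    (step S226, p = 9) into the block's 81-dim feature vector.
    cell_hists is a (rows, cols, 9) array of per-cell histograms."""
    block = cell_hists[top:top + 3, left:left + 3, :]
    return block.reshape(-1)        # 9 cells x 9 bins = 81 values

cells = np.arange(5 * 5 * 9, dtype=float).reshape(5, 5, 9)
vec = block_vector(cells, 0, 0)
```

Sliding `block_vector` over the cell grid with a one-cell step (step S227) and concatenating all the resulting 81-dim vectors (step S228) yields the sub-image's full HOG feature.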
And S23, aiming at each video sub-image in the plurality of video sub-images, introducing the corresponding HOG feature of the direction gradient histogram into a SVM classification model trained based on the HOG feature of a positive sample and a negative sample, judging whether the corresponding video sub-image contains a pantograph graph or not, and recording the position of the area of the video sub-image in the video image when the video sub-image contains the pantograph graph, wherein the positive sample in the positive sample and the negative sample refers to a sample image with a sample standard size and containing the pantograph graph, and the negative sample in the positive sample and the negative sample refers to a sample image with a sample standard size and containing no pantograph graph.
In step S23, the HOG features of each sample image in the positive and negative samples may be acquired in the manner of the aforementioned steps S221 to S228, which is not repeated here. The more positive and negative samples there are, the better, and their numbers should be kept as balanced as possible. A Support Vector Machine (SVM) is a generalized linear classifier that performs binary classification of data in a supervised learning manner; its decision boundary is the maximum-margin hyperplane solved for the learning samples, and it is a general machine learning method under limited training samples. Furthermore, the sample images of the positive and negative samples preferably come from video segments captured by the monitoring camera.
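Once trained, a linear SVM reduces at inference time to a dot product plus a bias. A minimal sketch of the classification step S23, where the weight vector `w` and bias `b` are stand-ins for parameters obtained by offline training on the positive/negative pantograph samples:

```python
import numpy as np

def svm_predict(hog_features, w, b):
    """Linear SVM decision for a batch of HOG vectors (step S23).
    Returns True where the sub-image is classified as containing a
    pantograph graph, i.e. where w . x + b > 0."""
    scores = hog_features @ w + b
    return scores > 0

# toy 2-dim "HOG features" and trained parameters, purely illustrative
w = np.array([1.0, -1.0])
b = 0.0
feats = np.array([[2.0, 1.0], [0.5, 3.0]])
pred = svm_predict(feats, w, b)
```

The region position of every sub-image whose score is positive would then be recorded for the pantograph-image interception of step S24.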
And S24, intercepting a pantograph image with a sample standard size from the video image according to the position of the area.
In the step S24, if it is determined in the step S23 that there are a plurality of location area positions, a pantograph image having a sample standard size may be cut out from the video image with a center position of the plurality of location area positions being a cut-out center point of the pantograph image.
And S25, carrying out gray processing on the pantograph image to obtain the pantograph gray image.
In step S25, if the video image is a grayscale image, the graying process may be skipped, and the pantograph grayscale image may be directly extracted from the video image.
And S3, segmenting the foreground and the background of the pantograph gray image by using a Gaussian mixture model to obtain a foreground object binary image, wherein the gray value of a foreground pixel point in the foreground object binary image is a uniform non-zero value, and the gray value of a background pixel point in the foreground object binary image is a zero value.
In step S3, the Gaussian Mixed Model (GMM) refers to a linear combination of multiple Gaussian distribution functions, and thus includes multiple Gaussian distributions. The segmentation processing idea of the foreground and the background is to use a Gaussian mixture model GMM to perform background modeling on the pantograph gray level image so as to realize the segmentation of the foreground and the background of the image, namely, a background representation method based on pixel sample statistical information is used, the background is represented by using statistical information such as probability density of a large number of sample values of pixels in a long time, and a statistical difference criterion is used for judging a pixel target so as to separate the foreground from the background. The Gaussian mixture model is a classical description of statistical information of pixel samples, and in the Gaussian mixture model, color information among pixels is considered to be irrelevant, and processing of each pixel point is independent. Therefore, for each pixel point in the video image, the change of the value in the sequence image (i.e. the multi-frame video image with continuous acquisition time sequence) can be regarded as a random process which continuously generates the pixel value, i.e. the color rendering rule of each pixel point is described by gaussian distribution. For each pixel point in the image, the gaussian mixture model can represent the characteristics of each pixel point in the image by superposition of a plurality of gaussian distributions with different weights, wherein the probability formula of each pixel point can be expressed as follows:
P(x_j) = Σ_{i=1}^{K} ω_{i,j} · η(x_j, μ_i, Σ_i)

where j denotes a positive integer between 1 and N, N denotes the total number of pixel positions in the image, x_j denotes the gray value of the pixel point at the j-th pixel position in the image, P(x_j) denotes the probability that the gray value of the pixel point at the j-th pixel position is x_j, K denotes the total number of Gaussian distributions in the Gaussian mixture model, i denotes a positive integer between 1 and K, ω_{i,j} denotes the weight of the i-th Gaussian distribution in the Gaussian mixture model corresponding to the pixel point at the j-th pixel position, μ_i denotes the mean of the i-th Gaussian distribution, Σ_i denotes the covariance matrix of the i-th Gaussian distribution with Σ_i = σ_i²·I, σ_i² denotes the variance of the i-th Gaussian distribution, I denotes the identity matrix, and η(x_j, μ_i, Σ_i) denotes the probability density function of the i-th Gaussian distribution, with

η(x_j, μ_i, Σ_i) = 1 / ( (2π)^{d/2} · |Σ_i|^{1/2} ) · exp( −(1/2)·(x_j − μ_i)^T · Σ_i^{−1} · (x_j − μ_i) )

where d denotes the dimension of x_j, exp denotes the exponential function with base e (the base of the natural logarithm), and T denotes the matrix transpose symbol. Therefore, in the Gaussian mixture model, each pixel point in the image is modeled as a superposition of multiple Gaussian distributions of different weights, each Gaussian distribution corresponding to a state that may produce the color presented by the pixel point, and the weight and distribution parameters of each Gaussian distribution can be updated over time so as to gradually model the background.
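For gray-level images the mixture is one-dimensional (d = 1), and the probability formula above reduces to a weighted sum of scalar Gaussian densities, as this sketch shows:

```python
import numpy as np

def gmm_pixel_probability(x, weights, means, variances):
    """Probability of gray value x under a K-component 1-D Gaussian
    mixture: P(x) = sum_i w_i * eta(x; mu_i, sigma_i^2), the scalar
    (gray-level) case of the mixture formula above."""
    weights = np.asarray(weights, dtype=float)
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    eta = (np.exp(-0.5 * (x - means) ** 2 / variances)
           / np.sqrt(2.0 * np.pi * variances))
    return float(np.sum(weights * eta))

# two equally weighted components; x sits exactly on the first mean
p = gmm_pixel_probability(100.0, [0.5, 0.5], [100.0, 200.0], [25.0, 25.0])
```

At x = 100 the second component contributes a negligible amount, so P(x) is essentially half the peak density of the first component.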
In the step S3, it is preferable that the segmentation process of the foreground and the background is performed on the pantograph gray scale image by using a gaussian mixture model to obtain a binary image of the foreground object, including, but not limited to, the following steps S31 to S37.
S31, acquiring the T frames of pantograph gray images whose acquisition times are continuous and located before the pantograph gray image, wherein T represents a positive integer greater than 10.
In step S31, the manner of acquiring each of the T frames of pantograph gray images can refer to the foregoing steps S1 to S2; they all come from the real-time video collected by the monitoring camera.
S32, according to the T frames of pantograph gray images, extracting the gray value x_j^t of the pixel point at each pixel position, wherein t denotes a positive integer between 1 and T and indexes the t-th frame of the T frames arranged from earliest to latest acquisition time, x_j^t denotes the gray value of the pixel point at the j-th pixel position in the t-th frame of pantograph gray image, j denotes a positive integer between 1 and N, and N denotes the total number of pixel positions in each of the T frames of pantograph gray images.
S33, for each pixel position in the T frames of pantograph gray images, updating the corresponding Gaussian mixture model according to the following steps S331 to S338, as shown in fig. 3.
S331, initializing t and the weight, mean and variance of each Gaussian distribution in the Gaussian mixture model corresponding to the j-th pixel position, and then performing step S332, wherein t is initialized to 1, each weight is initialized to 1/K, and K denotes the total number of Gaussian distributions in the Gaussian mixture model corresponding to the j-th pixel position.
In step S331, the mean of each Gaussian distribution may be initialized to the gray value of any pixel point in the T frames of pantograph gray images, and the variance of each Gaussian distribution may be initialized to a larger value, such as the difference between the maximum gray value and the minimum gray value in the T frames of pantograph gray images.
S332, judging whether a first matching condition |x_j^t − μ_i| ≤ λ·σ_i holds; if yes, performing step S333, otherwise performing step S334, wherein μ_i denotes the current mean of the i-th Gaussian distribution in the Gaussian mixture model corresponding to the j-th pixel position, σ_i denotes the current standard deviation of the i-th Gaussian distribution (σ_i² being its current variance), i denotes a positive integer between 1 and K, and λ denotes a positive number greater than or equal to 2.5.
In step S332, the first matching condition |x_j^t − μ_i| ≤ λ·σ_i is used for judging whether x_j^t matches the i-th Gaussian distribution, wherein λ may specifically take a value between 2.5 and 3.5, for example 2.5.
S333, updating the weight, mean and variance of the i-th Gaussian distribution according to the following formulas:

ω_i' = (1 − α)·ω_i + α
μ_i' = (1 − ρ)·μ_i + ρ·x_j^t
σ_i'² = (1 − ρ)·σ_i² + ρ·(x_j^t − μ_i)^T·(x_j^t − μ_i)

where ω_i' denotes the updated weight of the i-th Gaussian distribution, ω_i denotes its current weight, μ_i' denotes its updated mean, σ_i'² denotes its updated variance, α denotes a preset learning update rate of the weight, ρ denotes the learning update rate of the mean with ρ = α·η(x_j^t, μ_i, Σ_i), and T denotes the matrix transpose symbol; then step S337 is performed.
S334, updating the weight of the i-th Gaussian distribution according to the following formula:

ω_i' = (1 − α)·ω_i

where ω_i' denotes the updated weight of the i-th Gaussian distribution and α denotes the preset learning update rate of the weight; then step S335 is performed.
S335, judging whether the first matching condition fails for all K Gaussian distributions in the Gaussian mixture model corresponding to the j-th pixel position; if yes, performing step S336, otherwise performing step S337.
S336, rejecting the Gaussian distribution with the minimum current weight among the K Gaussian distributions and adding a new Gaussian distribution to restore K Gaussian distributions, and then performing step S337, wherein the mean of the new Gaussian distribution is initialized to x_j^t.
In step S336, the weight of the new Gaussian distribution is initialized to a smaller value, for example 1/K, and the variance of the new Gaussian distribution may be initialized to a larger value, such as the difference between the maximum gray value and the minimum gray value in the T frames of pantograph gray images.
S337, normalizing the weights of the Gaussian distributions in the Gaussian mixture model corresponding to the j-th pixel position so that the sum of the new weights of all the Gaussian distributions equals 1, and then performing step S338.
S338, judging whether t equals T; if yes, ending the updating of the Gaussian mixture model, otherwise incrementing t by 1 and then returning to step S332.
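Steps S331 to S338 for a single pixel position can be sketched as the following sequential update. The component count K, the learning rate α, the match threshold λ, the simplified constant ρ = α, and the variance initializer 30² are all illustrative choices, not values fixed by the patent:

```python
import numpy as np

def update_pixel_gmm(samples, K=3, alpha=0.05, lam=2.5):
    """Sequentially update a per-pixel Gaussian mixture over the T history
    frames (steps S331-S338). samples is the gray-value sequence of one
    pixel position; returns the final weights, means and variances."""
    w = np.full(K, 1.0 / K)                 # S331: weights init to 1/K
    mu = np.full(K, float(samples[0]))      # means init from a sample value
    var = np.full(K, 30.0 ** 2)             # variances init to a large value
    for x in samples:
        matched = np.abs(x - mu) <= lam * np.sqrt(var)   # S332
        if matched.any():
            i = int(np.argmax(matched))     # first matching distribution
            rho = alpha                     # simplified mean/variance rate
            w = (1 - alpha) * w             # S333/S334: decay all weights
            w[i] += alpha                   # ... and boost the matched one
            mu[i] = (1 - rho) * mu[i] + rho * x
            var[i] = (1 - rho) * var[i] + rho * (x - mu[i]) ** 2
        else:                               # S335/S336: no match at all
            i = int(np.argmin(w))           # replace lowest-weight Gaussian
            mu[i], var[i], w[i] = float(x), 30.0 ** 2, 0.05
        w /= w.sum()                        # S337: renormalize weights
    return w, mu, var

# a pixel that is stably near gray level 100-101 across 100 frames
w, mu, var = update_pixel_gmm([100] * 50 + [101] * 50)
```

After the history is consumed, the dominant component has absorbed most of the weight and its mean sits at the pixel's stable gray level, which is exactly how the background is gradually modeled.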
S34, for each pixel position, arranging all the Gaussian distributions in the corresponding Gaussian mixture model in descending order of the ratio ω/σ to obtain a corresponding Gaussian distribution queue, wherein ω denotes the current weight of a Gaussian distribution and σ denotes its current standard deviation, so that the distributions most likely to represent the background rank first.
S35, for each pixel position, determining the corresponding first B Gaussian distributions in the corresponding Gaussian distribution queue, wherein B takes a value according to the following formula:

B = argmin_b ( Σ_{l=1}^{b} ω_l > TH )

where argmin_b(·) denotes the smallest value of the positive integer variable b for which the cumulative weight in parentheses exceeds the preset weight threshold TH, l denotes a positive integer between 1 and b, and ω_l denotes the current weight of the Gaussian distribution whose queue number in the corresponding Gaussian distribution queue is l.
S36, for each pixel position, judging whether a second matching condition |x_j − μ_l| ≤ λ·σ_l holds for any of the corresponding first B Gaussian distributions; if yes, judging the corresponding pixel point in the pantograph gray image to be a background pixel point, otherwise judging it to be a foreground pixel point, wherein x_j denotes the gray value of the corresponding pixel point in the pantograph gray image, μ_l denotes the current mean of a Gaussian distribution among the corresponding first B Gaussian distributions, and σ_l denotes the current standard deviation of that Gaussian distribution.
In step S36, the first B Gaussian distributions constitute the background model that corresponds to the pixel position and is gradually refined over a long time, and the second matching condition |x_j − μ_l| ≤ λ·σ_l is used for judging whether x_j matches any Gaussian distribution in the background model; if so, the pixel point corresponding to x_j can be judged to be a background pixel point, otherwise it can be judged to be a foreground pixel point.
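Steps S34 to S36 for one pixel can be sketched as follows; the weight threshold 0.7 and λ = 2.5 are illustrative parameter choices:

```python
import numpy as np

def classify_pixel(x, w, mu, var, weight_threshold=0.7, lam=2.5):
    """Steps S34-S36: rank the Gaussians by w / sigma (descending), keep
    the first B whose cumulative weight exceeds the threshold as the
    background model, then test the pixel's gray value against them.
    Returns True for a background pixel, False for a foreground pixel."""
    order = np.argsort(-(w / np.sqrt(var)))          # S34: descending w/sigma
    cum = np.cumsum(w[order])
    B = int(np.searchsorted(cum, weight_threshold) + 1)   # S35
    for i in order[:B]:
        if abs(x - mu[i]) <= lam * np.sqrt(var[i]):  # S36: second condition
            return True
    return False

# a dominant background component at gray 100, two minor components
w = np.array([0.7, 0.2, 0.1])
mu = np.array([100.0, 180.0, 30.0])
var = np.array([25.0, 25.0, 400.0])
is_bg = classify_pixel(102.0, w, mu, var)   # near the background mean
is_fg = classify_pixel(180.0, w, mu, var)   # matches only a non-background mode
```

The binarization of step S37 then simply writes 255 for every pixel classified as foreground and 0 for every pixel classified as background.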
And S37, carrying out image binarization processing on the pantograph gray level image according to the judgment result of the pixel point type to obtain a foreground object binarization image.
In the step S37, the specific manner of the image binarization processing may be, but is not limited to, assigning the gray value of the foreground pixel to be 255 (i.e. a uniform non-zero value), and assigning the gray value of the background pixel to be 0 (i.e. a zero value).
And S4, carrying out moving target detection processing on three frames of pantograph gray images with a continuous acquisition time sequence by using a three-frame difference method to obtain a moving target binary image, wherein the three frames comprise the pantograph gray image and the two frames of pantograph gray images whose acquisition times precede it; the gray values of moving target pixel points in the moving target binary image are a uniform non-zero value, and the gray values of background pixel points in the moving target binary image are zero.
In step S4, the three-frame difference method is an improvement on the adjacent two-frame difference algorithm: it selects three consecutive frames of video images for difference operations, which suppresses the background region uncovered by motion (the "ghost" left behind by a two-frame difference) and thereby extracts more accurate contour information of the moving object. Its basic principle is as follows: select three consecutive frames from the video image sequence, calculate the difference image of each pair of adjacent frames, binarize each difference image with a suitable threshold to obtain two binarized images, and finally perform a logical AND operation on the binarized images at each pixel point to obtain their common part, which yields the contour information of the moving object. Therefore, preferably, the moving target detection processing is performed on three frames of pantograph gray-scale images with continuous acquisition time sequence by using the three-frame difference method to obtain a moving target binary image, including, but not limited to, the following steps S41 to S43.
And S41, respectively calculating to obtain a first difference image of the previous two frames of pantograph gray images in the three frames of pantograph gray images and a second difference image of the next two frames of pantograph gray images in the three frames of pantograph gray images, wherein the gray value of each pixel point in the first difference image is the absolute value of the gray value difference of the corresponding pixel point in the previous two frames of pantograph gray images, and the gray value of each pixel point in the second difference image is the absolute value of the gray value difference of the corresponding pixel point in the next two frames of pantograph gray images.
And S42, respectively carrying out image binarization processing on the first differential image and the second differential image according to the comparison result of the gray value of the pixel point and a preset threshold value to obtain corresponding differential binarization images.
In the step S42, the specific manner of the image binarization processing may be, but is not limited to: pixels whose gray value is greater than or equal to the preset threshold are determined to be moving target pixels (namely foreground pixels) and their gray values are updated to 255 (namely a uniform non-zero value), while pixels whose gray value is less than the preset threshold are determined to be non-moving-target pixels (namely background pixels) and their gray values are updated to 0 (namely a zero value).
And S43, carrying out image logic and operation processing on the two differential binarization images obtained through image binarization processing to obtain the moving target binarization image.
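Steps S41 to S43 can be sketched as follows (a minimal NumPy illustration; the function name and the example threshold `thresh=25` are assumptions, not values from the patent):

```python
import numpy as np

def three_frame_diff(f1, f2, f3, thresh=25):
    """f1, f2, f3: three consecutive grayscale frames (uint8), f3 the most recent.
    Returns the moving-target binary image (255 = moving target, 0 = background)."""
    # S41: absolute difference of the first two frames and of the last two frames.
    d1 = np.abs(f2.astype(np.int16) - f1.astype(np.int16))
    d2 = np.abs(f3.astype(np.int16) - f2.astype(np.int16))
    # S42: binarize each difference image with the preset threshold.
    b1 = np.where(d1 >= thresh, 255, 0).astype(np.uint8)
    b2 = np.where(d2 >= thresh, 255, 0).astype(np.uint8)
    # S43: logical AND keeps only the region common to both difference images.
    return np.bitwise_and(b1, b2)
```

Casting to `int16` before subtracting avoids uint8 wraparound in the difference images.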
And S5, carrying out image logic and operation processing on the foreground object binary image and the moving target binary image to obtain a new binary image.
Before the step S5, in order to remove smaller isolated interference regions and noise points in advance, preferably, the method further includes: performing morphological opening processing on the foreground object binary image and/or the moving target binary image to obtain a new foreground object binary image and/or a new moving target binary image for the logical AND operation processing.
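A minimal sketch of the morphological opening mentioned above, written in plain NumPy so the erosion-then-dilation structure is visible (the 3×3 square structuring element and the helper names are assumptions; in practice a library routine such as OpenCV's `cv2.morphologyEx` with `cv2.MORPH_OPEN` would normally be used):

```python
import numpy as np

def _erode(mask, k=3):
    """Binary erosion with a k x k square structuring element (zero-padded)."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def _dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def morphological_open(binary, k=3):
    """Opening = erosion then dilation: removes isolated specks smaller than
    the structuring element while preserving larger regions."""
    mask = (binary > 0).astype(np.uint8)
    return (_dilate(_erode(mask, k), k) * 255).astype(np.uint8)
```

Isolated noise pixels are erased by the erosion and never come back, while regions at least as large as the structuring element survive the round trip.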
S6, performing contour detection processing on the new binary image, and determining any detected target contour whose enclosing area exceeds a preset area threshold as an intruding foreign object contour, wherein the target contour refers to a closed contour enclosed by a plurality of adjacent edge pixel points in the new binary image, and an edge pixel point refers to a pixel point whose gray value is a non-zero value and at least one of whose eight surrounding adjacent pixel points (namely the upper-left, upper, upper-right, right, lower-right, lower, lower-left, and left pixel points) has a gray value of zero.
In step S6, the enclosing area of a contour may be, but is not limited to, the total number of pixels it encloses: the larger that number, the larger the area, the more salient the contour, and the more reliably it indicates an intruding foreign object; small-target interference can therefore be filtered out by combining the detection with the preset area threshold, ensuring the correctness of the intrusion judgment. In addition, if no contour is detected, or no detected target contour has an enclosing area exceeding the preset area threshold, it can be determined that no foreign object intrusion event is currently present.
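The area-threshold filtering of S6 can be illustrated with a connected-component sketch (an assumption: it counts the pixels of each 8-connected foreground region as a stand-in for a contour's enclosing area, whereas an implementation would typically use contour extraction such as OpenCV's `cv2.findContours` plus `cv2.contourArea`):

```python
import numpy as np
from collections import deque

def filter_small_regions(binary, area_thresh):
    """Keep only 8-connected foreground regions whose pixel count exceeds
    `area_thresh`; regions at or below the threshold are discarded."""
    mask = binary > 0
    visited = np.zeros_like(mask, dtype=bool)
    kept = np.zeros_like(binary, dtype=np.uint8)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # BFS flood fill over one 8-connected component.
                comp, q = [], deque([(sy, sx)])
                visited[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and mask[ny, nx] and not visited[ny, nx]:
                                visited[ny, nx] = True
                                q.append((ny, nx))
                if len(comp) > area_thresh:
                    for y, x in comp:
                        kept[y, x] = 255
    return kept
```

Anything that survives the filter would then be reported as an intruding foreign object region.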
Therefore, with the pantograph foreign matter intrusion detection scheme described in detail in the foregoing steps S1 to S6, the video image acquired in real time on the roof of the vehicle is first identified and positioned based on the ROI positioning technique, and a pantograph gray-scale image is extracted from the pantograph region. Then, on one hand, foreground and background segmentation is performed on the pantograph gray-scale image by using the Gaussian mixture model to obtain a foreground object binary image; on the other hand, moving object detection is performed on the pantograph gray-scale image by using the three-frame difference method to obtain a moving target binary image. Finally, a logical AND operation is performed on the two binary images, and the presence or absence of an intruding foreign object in the operation result is determined in combination with the contour area threshold. The scheme can thus be applied to an electric locomotive to monitor the pantograph state online in real time during locomotive operation, automatically and accurately discover whether a foreign object has intruded into the pantograph working region, monitor the pantograph region online in an all-around manner, issue an alarm prompt in time, and guarantee the safe and stable operation of the electric locomotive.
As shown in fig. 4, a second aspect of this embodiment provides a virtual device for implementing the method for detecting intrusion of a pantograph foreign object according to any one of the first aspect or the first aspect, including an image acquisition module, an image positioning module, a foreground segmentation module, a motion detection module, an operation processing module, and a contour detection module;
the image acquisition module is used for acquiring a video image acquired by a monitoring camera in real time, wherein the monitoring camera is arranged on the roof of the vehicle and enables the view field of the lens to cover the area where the pantograph is located;
the image positioning module is in communication connection with the image acquisition module and is used for performing ROI (region of interest) positioning processing on the video image and extracting a pantograph gray image;
the foreground segmentation module is in communication connection with the image positioning module and is used for performing foreground and background segmentation processing on the pantograph gray level image by using a Gaussian mixture model to obtain a foreground object binary image, wherein gray values of foreground pixel points in the foreground object binary image are uniform non-zero values, and gray values of background pixel points in the foreground object binary image are zero values;
the motion detection module is in communication connection with the image positioning module and is used for performing motion target detection processing on three frames of pantograph gray level images with continuous collection time sequence by using a three-frame difference method to obtain a motion target binary image, wherein the three frames of pantograph gray level images comprise the pantograph gray level images and two frames of pantograph gray level images with collection time sequences positioned in front of the pantograph gray level images, the gray level values of motion target pixel points in the motion target binary image are uniform non-zero values, and the gray level value of background pixel points in the motion target binary image is zero;
the operation processing module is respectively in communication connection with the foreground segmentation module and the motion detection module and is used for carrying out image logic and operation processing on the foreground object binary image and the motion target binary image to obtain a new binary image;
the contour detection module is in communication connection with the operation processing module and is used for performing contour detection processing on the new binary image and determining a target contour which is obtained through detection and has an enclosing area exceeding a preset area threshold value in the contour as an invading foreign body contour, wherein the target contour refers to a closed contour which is enclosed by a plurality of adjacent edge pixel points in the new binary image, and the edge pixel points refer to pixel points which have a gray value of non-zero value and at least one adjacent pixel point in eight adjacent pixel points around and have a gray value of zero value.
For the working process, working details and technical effects of the foregoing apparatus provided in the second aspect of this embodiment, reference may be made to the method described in the first aspect or any one of the possible designs of the first aspect, which is not described herein again.
As shown in fig. 5, a third aspect of the present embodiment provides a computer device for executing the pantograph foreign object intrusion detection method according to any one of the possible designs of the first aspect or the first aspect, and the computer device includes a memory, a processor and a transceiver, which are sequentially and communicatively connected, where the memory is used for storing a computer program, the transceiver is used for transceiving data, and the processor is used for reading the computer program and executing the pantograph foreign object intrusion detection method according to any one of the possible designs of the first aspect or the first aspect. For example, the Memory may include, but is not limited to, a Random-Access Memory (RAM), a Read-Only Memory (ROM), a Flash Memory (Flash Memory), a First-in First-out (FIFO), and/or a First-in Last-out (FILO), and the like; the processor may be, but is not limited to, a microprocessor of the model number STM32F105 family. In addition, the computer device may also include, but is not limited to, a power module, a display screen, and other necessary components.
For the working process, working details, and technical effects of the foregoing computer device provided in the third aspect of this embodiment, reference may be made to the method in the first aspect or any one of the possible designs in the first aspect, which is not described herein again.
As shown in fig. 6, a fourth aspect of this embodiment provides a detection system applying the pantograph foreign object intrusion detection method according to the first aspect or any one of the possible designs of the first aspect. The detection system includes a monitoring camera 1, a processing device 2, an on-board host 3, a display 4, and an alarm prompting device 5, where the monitoring camera 1 is communicatively connected to the processing device 2, the processing device 2 is communicatively connected to the on-board host 3, and the on-board host 3 is communicatively connected to the display 4 and the alarm prompting device 5, respectively; the monitoring camera 1 is used for being installed on the roof of a vehicle, with its lens field of view covering the area where the pantograph is located, and for transmitting a video image acquired in real time to the processing device 2; the processing device 2 is configured to execute the pantograph foreign object intrusion detection method according to the first aspect or any one of the possible designs of the first aspect, obtain a foreign object intrusion detection result, and transmit the video image and the foreign object intrusion detection result to the on-board host 3; the on-board host 3 is used for storing the video image and the foreign object intrusion detection result, distributing and transmitting them to the display 4, and sending an alarm trigger instruction to the alarm prompting device 5 when determining, according to the foreign object intrusion detection result, that a foreign object intrusion condition is established; the display 4 is used for displaying the video image and the foreign object intrusion detection result; and the alarm prompting device 5 is used for starting an alarm prompting action when receiving the alarm trigger instruction.
As shown in fig. 6, in the specific structure of the pantograph foreign object intrusion detection system, both the monitoring camera 1 and the processing device 2 are disposed on the roof. The monitoring camera 1 is preferably a high-definition industrial camera capable of high-quality imaging, so as to transmit high-quality imaging data to the processing device 2. The processing device 2 may be implemented by the computer device provided in the third aspect of this embodiment, and a specific manner of transmitting the video image and the foreign object intrusion detection result to the on-board host 3 may be, for example: superposing the corresponding foreign object intrusion detection result and other acquisition state information on the continuous image data, then obtaining a video stream by adopting the H264 compression coding mode, and finally transmitting the video stream to the on-board host 3. In addition, a roof power supply 61 and a supplementary lighting lamp 7 may be arranged on the roof, wherein the roof power supply 61 is used for providing electric energy to the monitoring camera 1, the processing device 2, the supplementary lighting lamp 7, and the like so that they can work normally, and the supplementary lighting lamp 7 is used for illuminating a pantograph 100 on the roof, so as to ensure the imaging quality of the video image and facilitate the subsequent intruding foreign object detection processing.
The on-board host 3, the display 4, and the alarm prompting device 5 are all arranged inside the vehicle. The on-board host 3 is mainly used to distribute and store the data coming from the processing device 2 and can be implemented with conventional server equipment; it judges whether a foreign object intrusion exists from the foreign object intrusion detection result in a conventional manner, for example, after receiving a video image marked with an intruding foreign object contour, it determines that the foreign object intrusion condition is established. The display 4 faces the staff in the vehicle and visually presents the audio and video data from the on-board host 3 (including but not limited to the alarm prompt information generated when the foreign object intrusion condition is determined to be established), providing a reference for driving safety decisions. The alarm prompting action of the alarm prompting device 5 may be, but is not limited to, an audible and visual alarm, so that the state of foreign objects on the roof is fed back to the corresponding staff in time and the pantograph-catenary hazard is eliminated. In addition, an in-vehicle power source 62 may be arranged in the vehicle to provide power support for the on-board host 3, the display 4, the alarm prompting device 5, and the like.
For a working process, working details, and technical effects of the foregoing detection system provided in the fourth aspect of this embodiment, reference may be made to the method in the first aspect or any one of possible designs in the first aspect, which is not described herein again.
A fifth aspect of this embodiment provides a storage medium storing instructions of the pantograph foreign object intrusion detection method according to the first aspect or any one of the possible designs of the first aspect; that is, when the instructions stored on the storage medium are run on a computer, they perform the pantograph foreign object intrusion detection method according to the first aspect or any one of the possible designs of the first aspect. The storage medium refers to a carrier for storing data, and may include, but is not limited to, a computer-readable storage medium such as a floppy disk, an optical disk, a hard disk, a flash memory, a flash disk and/or a Memory Stick, and the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
For the working process, working details and technical effects of the foregoing readable storage medium provided in the fifth aspect of this embodiment, reference may be made to the method in any one of the first aspect or the first aspect, which is not described herein again.
A sixth aspect of the present embodiment provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the pantograph foreign object intrusion detection method according to the first aspect or any one of the possible designs of the first aspect. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices.
Finally, it should be noted that the present invention is not limited to the above alternative embodiments, and various other forms of products can be obtained by anyone in light of the present invention. The above detailed description should not be taken as limiting the protection scope of the present invention, which is defined by the claims; the description may be used to interpret the claims.

Claims (10)

1. A pantograph foreign matter intrusion detection method is characterized by comprising the following steps:
acquiring a video image acquired by a monitoring camera in real time, wherein the monitoring camera is mounted on the roof of the vehicle and enables the view field of a lens to cover the area where the pantograph is located;
carrying out ROI (region of interest) positioning processing on the video image, and extracting a pantograph gray image;
segmenting the foreground and the background of the pantograph gray image by using a Gaussian mixture model to obtain a foreground object binary image, wherein the gray values of foreground pixel points in the foreground object binary image are uniform non-zero values, and the gray values of background pixel points in the foreground object binary image are zero values;
carrying out moving target detection processing on three frames of pantograph gray images with continuous collection time sequence by using a three-frame difference method to obtain a moving target binary image, wherein the three frames of pantograph gray images comprise the pantograph gray images and two frames of pantograph gray images with the collection time sequences positioned in front of the pantograph gray images, the gray values of moving target pixel points in the moving target binary image are uniform non-zero values, and the gray values of background pixel points in the moving target binary image are zero values;
carrying out image logic and operation processing on the foreground object binary image and the moving target binary image to obtain a new binary image;
and carrying out contour detection processing on the new binary image, and determining a target contour which is obtained by detection and has an enclosing area exceeding a preset area threshold value in the contour as an invading foreign body contour, wherein the target contour refers to a closed contour enclosed by a plurality of adjacent edge pixel points in the new binary image, and the edge pixel points refer to pixel points which have non-zero gray values and at least one adjacent pixel point in eight adjacent pixel points around and have a gray value of zero.
2. The method for detecting intrusion of foreign object into a pantograph according to claim 1, wherein the ROI positioning process is performed on the video image to extract a grayscale image of the pantograph, and the method comprises:
sliding screenshot windows in the transverse direction and the longitudinal direction of the video image respectively by preset step lengths, and intercepting to obtain a plurality of video sub-images with the standard size of the sample;
respectively extracting corresponding HOG (histogram of oriented gradient) features of the HOG for each video sub-image in the plurality of video sub-images;
for each video subimage in the multiple video subimages, introducing the HOG feature of the corresponding direction gradient histogram into a Support Vector Machine (SVM) classification model which is trained based on the HOG feature of a positive sample and a negative sample, judging whether the corresponding video subimage contains a pantograph graph or not, and recording the position of the area of the video subimage in the video image when the video subimage is judged to contain the pantograph graph, wherein the positive sample in the positive sample and the negative sample refers to a sample image which has a sample standard size and contains the pantograph graph, and the negative sample in the positive sample and the negative sample refers to a sample image which has a sample standard size and does not contain the pantograph graph;
according to the position of the area, intercepting a pantograph image with a sample standard size from the video image;
and carrying out graying processing on the pantograph image to obtain the pantograph grayscale image.
3. The method for detecting intrusion of a foreign object into a pantograph according to claim 2, wherein the step of extracting a Histogram of Oriented Gradients (HOG) feature from each of the plurality of video sub-images comprises:
carrying out gamma correction processing on the video sub-image to obtain a new video sub-image;
respectively calculating to obtain corresponding transverse gradient components and longitudinal gradient components aiming at each pixel point in the new video subimage;
respectively calculating corresponding gradient amplitude and gradient direction angle according to corresponding transverse gradient component and longitudinal gradient component aiming at each pixel point in the new video subimage;
dividing the new video subimage into a plurality of cell units;
aiming at each cell unit in the plurality of cell units, performing histogram statistical processing on all corresponding pixel points over q angle subintervals to obtain a corresponding q-dimensional feature vector, wherein q represents a positive integer greater than 5, the q angle subintervals are the angle subintervals obtained by dividing the angle interval [−90°, 90°] into q equal parts, and the histogram statistics are performed by accumulating the gradient amplitude of a pixel point into the angle subinterval to which its gradient direction angle belongs;
splicing p adjacent cell units in the cell units to form a block, and connecting q-dimensional feature vectors of all the cell units in the p cell units in series to form p multiplied by q-dimensional feature vectors of the block;
scanning in the horizontal direction and the vertical direction of the video image by taking the size of the cell unit as a step length respectively to obtain a plurality of blocks and p × q-dimensional feature vectors corresponding to the blocks in the plurality of blocks;
and connecting p × q dimensional feature vectors of the blocks in series to form HOG features corresponding to the video sub-image.
4. The method for detecting intrusion of foreign object into a pantograph according to claim 1, wherein the segmentation of foreground and background is performed on the pantograph gray scale image by using a gaussian mixture model to obtain a binary image of a foreground object, comprising:
the acquisition timing sequence is positioned before the gray level image of the pantograph and is continuous
Figure 236419DEST_PATH_IMAGE001
A frame pantograph gray scale image, wherein,
Figure 143064DEST_PATH_IMAGE001
represents a positive integer greater than 10;
according to the above
Figure 390506DEST_PATH_IMAGE001
Frame pantograph gray image, extracting gray value of pixel point at each pixel position
Figure 992389DEST_PATH_IMAGE002
Wherein, in the step (A),
Figure 967167DEST_PATH_IMAGE003
indicates that is between
Figure 479051DEST_PATH_IMAGE004
A positive integer between (a) and (b),
Figure 490869DEST_PATH_IMAGE005
is shown in
Figure 973190DEST_PATH_IMAGE001
The first arranged in the gray level image of the frame pantograph from the morning to the evening according to the acquisition time sequence
Figure 971233DEST_PATH_IMAGE006
The acquisition time of the frame pantograph gray image,
Figure 727837DEST_PATH_IMAGE007
is shown in the first
Figure 35190DEST_PATH_IMAGE008
In the frame pantograph gray image
Figure 346086DEST_PATH_IMAGE009
The gray value of the pixel point at each pixel location,
Figure 413399DEST_PATH_IMAGE009
indicates that is between
Figure 149143DEST_PATH_IMAGE010
A positive integer between (a) and (b),
Figure 502764DEST_PATH_IMAGE011
is shown in
Figure 441901DEST_PATH_IMAGE001
Total number of pixel positions in the frame pantograph gray image;
to the said
Figure 562172DEST_PATH_IMAGE012
Each pixel position in the frame pantograph gray image updates the corresponding gaussian mixture model according to the following steps S331 to S338:
s331. pair
Figure 762210DEST_PATH_IMAGE013
And is in the range of
Figure 427677DEST_PATH_IMAGE014
The weight, mean and variance of each gaussian distribution in the gaussian mixture model corresponding to each pixel position are initialized, respectively, and then step S332 is performed, wherein,
Figure 106308DEST_PATH_IMAGE013
is initialized to 1, and the weight is initialized to
Figure 781003DEST_PATH_IMAGE015
Figure 835546DEST_PATH_IMAGE016
Is shown in the specification and
Figure 921183DEST_PATH_IMAGE014
the total number of Gaussian distributions in the Gaussian mixture model corresponding to each pixel position;
s332, judging a first matching condition
Figure 834912DEST_PATH_IMAGE017
If yes, go to step S333, otherwise go to step S334, wherein,
Figure 562566DEST_PATH_IMAGE018
is represented by the second
Figure 737195DEST_PATH_IMAGE014
The first pixel position corresponds to
Figure 10045DEST_PATH_IMAGE019
The current mean of the gaussian distribution of the number,
Figure 394759DEST_PATH_IMAGE020
represents the first
Figure 411256DEST_PATH_IMAGE021
The current variance of the gaussian distribution is calculated,
Figure 364693DEST_PATH_IMAGE021
indicates that is between
Figure 808444DEST_PATH_IMAGE022
A positive integer between (a) and (b),
Figure 211612DEST_PATH_IMAGE023
represents a positive number greater than or equal to 2.5;
s333, updating the second step according to the following formula
Figure 766222DEST_PATH_IMAGE021
Weight, mean and variance of the individual gaussian distributions:
Figure 40077DEST_PATH_IMAGE024
in the formula (I), the compound is shown in the specification,
Figure 779363DEST_PATH_IMAGE025
represents the first
Figure 889401DEST_PATH_IMAGE026
The updated weights of the gaussian distributions,
Figure 231390DEST_PATH_IMAGE027
represents the first
Figure 235118DEST_PATH_IMAGE026
The current weight of the gaussian distribution is given,
Figure 20671DEST_PATH_IMAGE028
represents the first
Figure 616343DEST_PATH_IMAGE026
The updated mean of the individual gaussian distributions,
Figure 637388DEST_PATH_IMAGE029
represents the first
Figure 902147DEST_PATH_IMAGE026
The updated variance of the gaussian distribution,
Figure 248815DEST_PATH_IMAGE030
a preset learning update rate of the weight value is represented,
Figure 192500DEST_PATH_IMAGE031
a learning update rate representing the mean value and having
Figure 876292DEST_PATH_IMAGE032
Figure 589033DEST_PATH_IMAGE033
Indicating a matrix transpose symbol, and then performing step S337;
s334, updating the first step according to the following formula
Figure 716389DEST_PATH_IMAGE026
Weight of each gaussian distribution:
Figure 881791DEST_PATH_IMAGE034
in the formula (I), the compound is shown in the specification,
Figure 103693DEST_PATH_IMAGE025
represents the first
Figure 936520DEST_PATH_IMAGE026
The updated weights of the gaussian distributions,
Figure 234778DEST_PATH_IMAGE030
a preset learning update rate representing the weight value, and then step S335 is performed;
s335, judging whether the first matching condition is matched with the second matching condition
Figure 280618DEST_PATH_IMAGE014
In a Gaussian mixture model corresponding to individual pixel positions
Figure 853682DEST_PATH_IMAGE016
If the gaussian distributions are not satisfied, executing step S336, otherwise executing step S337;
s336. rejecting the
Figure 931229DEST_PATH_IMAGE016
The Gaussian distribution with the minimum current weight in the Gaussian distributions is added with a new Gaussian distribution to obtain a new Gaussian distribution
Figure 400387DEST_PATH_IMAGE016
Gaussian distribution, and then performing step S337 in which the mean value of the new gaussian distribution is initialized to
Figure 805961DEST_PATH_IMAGE035
;
S337, in the pair of
Figure 635245DEST_PATH_IMAGE014
Normalizing the weight values of the Gaussian distributions in the Gaussian mixture model corresponding to each pixel position to enable the sum of the new weight values of all the Gaussian distributions to be equal to 1, and then executing the step S338;
s338, judgment
Figure 318031DEST_PATH_IMAGE013
Is equal to
Figure 817145DEST_PATH_IMAGE012
If yes, the updating of the Gaussian mixture model is ended, otherwise, the method is right
Figure 834648DEST_PATH_IMAGE036
Updating by adding 1, and then returning to execute the step S332;
for each pixel position, according to a ratio
Figure 952777DEST_PATH_IMAGE037
All the Gaussian distributions in the corresponding Gaussian mixture models are sequentially arranged from small to large to obtain corresponding Gaussian distribution queues, wherein,
Figure 273425DEST_PATH_IMAGE038
the current weight value representing the gaussian distribution,
Figure 943440DEST_PATH_IMAGE039
representing the current variance of the gaussian distribution;
aiming at each pixel position, determining the corresponding front-ranked B Gaussian distributions in the corresponding Gaussian distribution queue, wherein B takes a value according to the following formula:

B = argmin_b ( sum_{k=1}^{b} ω_k > T_ω )

in the formula, argmin_b(·) denotes the smallest value of the positive integer variable b at which the sum in parentheses, sum_{k=1}^{b} ω_k, first exceeds the preset weight threshold T_ω, k denotes a positive integer between 1 and b, and ω_k denotes the current weight of the Gaussian distribution whose queue number in the corresponding Gaussian distribution queue is k;
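The selection rule above (take the smallest b whose cumulative weight first exceeds the threshold) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the claimed implementation; the function name and the example threshold value are assumptions.

```python
import numpy as np

def num_background_distributions(weights, T=0.7):
    """Smallest b such that the sum of the first b queue-ordered
    weights exceeds the preset weight threshold T (assumed 0.7 here).
    Assumes the weights are normalized so that they sum to 1."""
    cumulative = np.cumsum(weights)
    # index of the first cumulative sum strictly greater than T, 1-based
    return int(np.argmax(cumulative > T)) + 1
```

With weights already ordered as in the Gaussian distribution queue, the first B distributions returned by this rule are treated as the background model.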
determining, for each pixel position, whether a second matching condition [formula image] holds for any one of the corresponding front-ranked B Gaussian distributions; if yes, judging the corresponding pixel point in the pantograph gray level image to be a background pixel point, otherwise judging it to be a foreground pixel point, wherein X represents the gray value of the corresponding pixel point in the pantograph gray level image, μ represents the current mean of a Gaussian distribution among the corresponding front-ranked B Gaussian distributions, and σ² represents the current variance of that Gaussian distribution;
and performing image binarization processing on the pantograph gray level image according to the judgment result of the pixel point type to obtain a foreground object binarization image.
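Taken together, the per-pixel background/foreground decision of this claim can be illustrated as follows. This NumPy sketch assumes the conventional mixture-of-Gaussians matching test of 2.5 standard deviations; the patent's own matching constant, threshold, and data layout are not specified here, so all names and values are illustrative.

```python
import numpy as np

def classify_pixel(x, means, variances, weights, T=0.7, k_sigma=2.5):
    """Return 255 (foreground) or 0 (background) for one pixel of the
    pantograph gray image, given the queue-ordered mixture parameters.
    A pixel is background if its gray value matches any of the first B
    distributions, where B is the smallest prefix whose cumulative
    weight exceeds T."""
    cumulative = np.cumsum(weights)
    b = int(np.argmax(cumulative > T)) + 1   # number of background modes
    stds = np.sqrt(variances[:b])
    # match test: |X - mu| <= k_sigma * sigma (2.5 is the usual choice)
    matched = np.abs(x - means[:b]) <= k_sigma * stds
    return 0 if matched.any() else 255
```

Applying this decision to every pixel position, then writing 255 for foreground and 0 for background, yields the foreground object binarized image described above.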
5. The method for detecting intrusion of foreign matters into a pantograph according to claim 1, wherein the moving object detection processing is performed on three frames of pantograph gray-scale images with continuous acquisition time sequence by using a three-frame difference method to obtain a binary image of the moving object, and the method comprises the following steps:
respectively calculating to obtain a first difference image of a first two-frame pantograph gray image in the three-frame pantograph gray image and a second difference image of a second two-frame pantograph gray image in the three-frame pantograph gray image, wherein the gray value of each pixel point in the first difference image is the absolute value of the gray value difference of the corresponding pixel point in the first two-frame pantograph gray image, and the gray value of each pixel point in the second difference image is the absolute value of the gray value difference of the corresponding pixel point in the second two-frame pantograph gray image;
respectively carrying out image binarization processing on the first differential image and the second differential image according to the comparison result of the gray value of the pixel point and a preset threshold value to obtain corresponding differential binarization images;
and performing image logic and operation processing on the two differential binarization images obtained through the image binarization processing to obtain the moving target binarization image.
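The three steps of claim 5 (two absolute-difference images, per-pixel thresholding, logical AND) can be sketched as below; the threshold value of 25 gray levels and the function name are assumptions for illustration.

```python
import numpy as np

def three_frame_difference(f0, f1, f2, thresh=25):
    """Three-frame difference on grayscale frames f0, f1, f2 (uint8):
    the two absolute difference images are binarized against a preset
    threshold and combined with a logical AND, so only pixels that
    changed across both intervals survive as moving-target pixels."""
    d1 = np.abs(f1.astype(np.int16) - f0.astype(np.int16))  # first pair
    d2 = np.abs(f2.astype(np.int16) - f1.astype(np.int16))  # second pair
    b1 = d1 > thresh
    b2 = d2 > thresh
    return np.where(b1 & b2, 255, 0).astype(np.uint8)
```

The int16 cast avoids uint8 wrap-around when subtracting frames; the result is the moving target binarized image with uniform non-zero foreground.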
6. The pantograph foreign object intrusion detection method according to claim 1, wherein before the image logic and operation processing is performed on the foreground object binarized image and the moving target binarized image, the method further comprises:
performing morphological opening processing on the foreground object binarized image and/or the moving target binarized image to obtain a new foreground object binarized image and/or a new moving target binarized image for the logic and operation processing.
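Morphological opening (erosion followed by dilation) as referenced in claim 6 can be illustrated with plain NumPy; the 3×3 structuring element and the function names are assumptions, not the patented configuration.

```python
import numpy as np

def _shifts(img):
    # the nine 3x3-neighbourhood views of img, border padded with 0
    p = np.pad(img, 1, constant_values=0)
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)])

def erode(img):
    # a pixel stays set only if its whole 3x3 neighbourhood is set
    return np.where(_shifts(img).min(axis=0) == 255, 255, 0).astype(np.uint8)

def dilate(img):
    # a pixel becomes set if any pixel in its 3x3 neighbourhood is set
    return np.where(_shifts(img).max(axis=0) == 255, 255, 0).astype(np.uint8)

def opening(img):
    """Morphological opening = erosion followed by dilation; it removes
    isolated specks smaller than the structuring element while largely
    preserving larger foreground regions."""
    return dilate(erode(img))
```

On a binarized image this suppresses single-pixel noise before the logic and operation, which reduces spurious small contours downstream.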
7. A pantograph foreign matter intrusion detection device is characterized by comprising an image acquisition module, an image positioning module, a foreground segmentation module, a motion detection module, an operation processing module and a contour detection module;
the image acquisition module is used for acquiring a video image acquired by a monitoring camera in real time, wherein the monitoring camera is arranged on the roof of the vehicle and enables the view field of the lens to cover the area where the pantograph is located;
the image positioning module is in communication connection with the image acquisition module and is used for performing ROI (region of interest) positioning processing on the video image and extracting a pantograph gray image;
the foreground segmentation module is in communication connection with the image positioning module and is used for performing foreground and background segmentation processing on the pantograph gray level image by using a Gaussian mixture model to obtain a foreground object binary image, wherein gray values of foreground pixel points in the foreground object binary image are uniform non-zero values, and gray values of background pixel points in the foreground object binary image are zero values;
the motion detection module is in communication connection with the image positioning module and is used for performing motion target detection processing on three frames of pantograph gray level images with continuous collection time sequence by using a three-frame difference method to obtain a motion target binary image, wherein the three frames of pantograph gray level images comprise the pantograph gray level images and two frames of pantograph gray level images with collection time sequences positioned in front of the pantograph gray level images, the gray level values of motion target pixel points in the motion target binary image are uniform non-zero values, and the gray level value of background pixel points in the motion target binary image is zero;
the operation processing module is respectively in communication connection with the foreground segmentation module and the motion detection module and is used for carrying out image logic and operation processing on the foreground object binary image and the motion target binary image to obtain a new binary image;
the contour detection module is in communication connection with the operation processing module and is used for performing contour detection processing on the new binary image and determining, among the detected contours, a target contour whose enclosed area exceeds a preset area threshold as an intruding foreign object contour, wherein the target contour refers to a closed contour enclosed by a plurality of adjacent edge pixel points in the new binary image, and an edge pixel point refers to a pixel point whose gray value is non-zero and at least one of whose eight surrounding adjacent pixel points has a gray value of zero.
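The contour-area test performed by the contour detection module can be approximated by filtering 8-connected foreground regions by pixel count; this sketch uses connected-component area as a stand-in for the area enclosed by a contour, and the function name and threshold are illustrative assumptions.

```python
import numpy as np
from collections import deque

def filter_small_regions(mask, area_thresh):
    """Keep only 8-connected regions of non-zero pixels whose pixel
    count exceeds area_thresh; regions at or below the threshold are
    discarded as noise rather than intruding foreign objects."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros_like(mask)
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] == 0 or seen[sy, sx]:
                continue
            # breadth-first flood fill over the 8-neighbourhood
            q = deque([(sy, sx)])
            seen[sy, sx] = True
            comp = []
            while q:
                y, x = q.popleft()
                comp.append((y, x))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] != 0
                                and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
            if len(comp) > area_thresh:
                for y, x in comp:
                    out[y, x] = 255
    return out
```

Any region surviving this filter corresponds, in the claimed system, to a contour whose enclosed area exceeds the preset area threshold and is therefore reported as an intruding foreign object.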
8. A computer device, comprising a memory, a processor and a transceiver, wherein the memory is used for storing a computer program, the transceiver is used for transmitting and receiving data, and the processor is used for reading the computer program and executing the pantograph foreign object intrusion detection method according to any one of claims 1 to 6.
9. A pantograph foreign matter intrusion detection system is characterized by comprising a monitoring camera (1), a processing device (2), an on-board host (3), a display (4) and an alarm prompting device (5), wherein the monitoring camera (1) is in communication connection with the processing device (2), the processing device (2) is in communication connection with the on-board host (3), and the on-board host (3) is in communication connection with the display (4) and the alarm prompting device (5) respectively;
the monitoring camera (1) is configured to be installed on the vehicle roof with the lens field of view covering the area where the pantograph is located, and to transmit video images acquired in real time to the processing device (2);
the processing device (2) is used for executing the pantograph foreign object intrusion detection method according to any one of claims 1 to 6, acquiring a foreign object intrusion detection result and transmitting the video image and the foreign object intrusion detection result to the vehicle-mounted host (3);
the vehicle-mounted host (3) is used for storing the video image and the foreign object intrusion detection result, distributing and transmitting the video image and the foreign object intrusion detection result to the display (4), and sending an alarm trigger instruction to the alarm prompting device (5) when determining that a foreign object intrusion condition is met according to the foreign object intrusion detection result;
the display (4) is used for displaying the video image and the foreign object intrusion detection result;
and the alarm prompting device (5) is used for starting an alarm prompting action when the alarm triggering instruction is received.
10. A storage medium having stored thereon instructions for performing the pantograph foreign object intrusion detection method according to any one of claims 1 to 6 when the instructions are run on a computer.
CN202110848718.6A | 2021-07-27 | Pantograph foreign matter intrusion detection method, device, computer equipment, system and storage medium | Pending | CN113298059A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110848718.6A | 2021-07-27 | 2021-07-27 | Pantograph foreign matter intrusion detection method, device, computer equipment, system and storage medium


Publications (1)

Publication Number | Publication Date
CN113298059A | 2021-08-24

Family

ID=77331151

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date
CN202110848718.6A | Pending | CN113298059A (en) | 2021-07-27 | 2021-07-27

Country Status (1)

Country | Link
CN (1) | CN113298059A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111738342A (en)* | 2020-06-24 | 2020-10-02 | Southwest Jiaotong University | Pantograph foreign body detection method, storage medium and computer equipment
CN112288717A (en)* | 2020-10-29 | 2021-01-29 | Harbin Kejia General Electromechanical Co., Ltd. | Method for detecting foreign matters on side part of motor train unit train


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIA Yongquan et al., "A Simple and Effective Moving Object Detection Algorithm", Computer Measurement & Control *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113794815A (en)* | 2021-08-25 | 2021-12-14 | Zhongke Yungu Technology Co., Ltd. | Method, device and controller for extracting video key frames
CN113763491A (en)* | 2021-08-26 | 2021-12-07 | China Tobacco Zhejiang Industrial Co., Ltd. | Visual detection method for residues in a cut-tobacco barrel
CN113763491B (en)* | 2021-08-26 | 2024-03-12 | China Tobacco Zhejiang Industrial Co., Ltd. | Visual detection method for tobacco shred barrel residues
CN113935962A (en)* | 2021-09-29 | 2022-01-14 | Changzhou Xinchuang Intelligent Technology Co., Ltd. | Method for detecting lint balls on glass fiber cloth
CN113859312A (en)* | 2021-09-30 | 2021-12-31 | CRRC Qingdao Sifang Co., Ltd. | Pantograph fault alarm method and device based on vehicle-mounted PHM, and rail vehicle
CN114821076A (en)* | 2022-04-26 | 2022-07-29 | Guodian NARI Nanjing Control System Co., Ltd. | Foreign object monitoring method and device for an intelligent power distribution room, and storage medium
CN115170954A (en)* | 2022-06-25 | 2022-10-11 | Suzhou Naiteli Intelligent Technology Co., Ltd. | Real-time detection method for foreign bodies suspended on a pantograph based on image recognition
CN115100580A (en)* | 2022-08-23 | 2022-09-23 | Zhejiang Dahua Technology Co., Ltd. | Foreign matter detection method, device, terminal and computer-readable storage medium
CN115359032A (en)* | 2022-08-31 | 2022-11-18 | Nanjing Hurys Intelligent Technology Co., Ltd. | Airborne foreign matter detection method and device, electronic equipment and storage medium

Similar Documents

Publication | Title
CN113298059A (en) | Pantograph foreign matter intrusion detection method, device, computer equipment, system and storage medium
Liu et al. | A review of applications of visual inspection technology based on image processing in the railway industry
CN112800860B (en) | High-speed object scattering detection method and system with coordination of event camera and visual camera
CN113158850B (en) | Ship driver fatigue detection method and system based on deep learning
CN109871799B (en) | Method for detecting mobile phone playing behavior of driver based on deep learning
CN111260629A (en) | Pantograph structure abnormity detection algorithm based on image processing
CN103442209A (en) | Video monitoring method of electric transmission line
CN111626170B (en) | Image recognition method for railway side slope falling stone intrusion detection
CN116665080A (en) | Unmanned aerial vehicle deteriorated insulator detection method and system based on target recognition
CN107911663A (en) | An intelligent identification and early warning system for dangerous behaviors of elevator passengers based on computer vision detection
CN115600124A (en) | Subway tunnel inspection system and inspection method
CN113111840A (en) | Method for early warning violation and dangerous behaviors of operators on fully mechanized coal mining face
CN111964763B (en) | Method for detecting intermittent driving behavior of automobile in weighing area of dynamic flat-plate scale
CN103839085A (en) | Train carriage abnormal crowd density detection method
CN103077387B (en) | Automatic detection method for freight train carriage in video
CN111561967A (en) | Real-time online detection method and system for pantograph-catenary operation state
CN111160224B (en) | High-speed rail catenary foreign object detection system and method based on FPGA and horizon segmentation
CN109191492B (en) | An intelligent video black smoke vehicle detection method based on contour analysis
CN108563986A (en) | Earthquake region electric pole posture judgment method and system based on wide-long shot image
CN105046285B (en) | An abnormal behaviour discrimination method based on kinematic constraints
CN120014549A (en) | Method and system for detecting construction workers crossing dangerous areas based on neural network
CN112508892B (en) | Method and system for detecting tiny foreign matters on railway track surface based on machine vision
CN112508893B (en) | Method and system for detecting tiny foreign matters between double rails of railway based on machine vision
CN114066895B (en) | Detection method and device for pantograph slide plate
CN113362330B (en) | Pantograph horn real-time detection method, device, computer equipment and storage medium

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication

Application publication date: 2021-08-24

