Disclosure of Invention
The invention aims to provide an intelligent analysis and management system and method for substation equipment based on edge computing, which solve the problems identified in the background art.
To solve these technical problems, the invention provides an intelligent analysis and management method for substation equipment based on edge computing, comprising the following steps:
Step S100: in a substation, taking a certain electrical device as a target device, collecting images of the target device from various angles to form a training set, and training a recognition model of the target device through a target recognition algorithm;
Step S200: deploying the recognition model to a plurality of device posture detectors, connecting the plurality of device posture detectors in sequence, collecting video information of the target device with any of the device posture detectors, and judging the direction of change of the posture of the target device from the change of the target device's image across image frames in the video information;
Step S300: setting a first direction according to the connection direction of the device posture detectors, each device posture detector sending a positioning data packet to the next device posture detector according to the direction of posture change of the target device that it has judged;
Step S400: each device posture detector aggregates the change-direction information in the received positioning data packets, determines the device posture detector toward which the posture change of the target device points, and sends a reminder to the relevant management personnel.
Further, step S100 includes:
Step S101: combining viewing angles and distances to obtain a plurality of image acquisition positions, acquiring images of the target device, and marking device features of the target device in the images to form an image training set, wherein the device features comprise appearance information that distinguishes devices of the same type;
Step S102: training a recognition model of the target device through a target recognition algorithm, wherein the recognition model can recognize the target device, and the device features of the target device, while the device running the model is offline.
Further, step S200 includes:
Step S201: forming a detector sequence L from n device posture detectors;
Step S202: each device posture detector acquires image information of the substation, identifies the target device in the image information through the recognition model, and obtains the image position of the target device within the picture of that detector's viewing angle;
When the same device posture detector recognizes more than one type of target device, it informs the adjacent device posture detectors through data communication which device is the target device that it itself monitors, preventing misjudgment caused by comparing against the wrong device when electrical devices are similar in appearance;
Step S203: sampling the video stream in the device posture detector to obtain image frames containing the target device's image, comparing the differences between images at the same image position in image frames sampled by the same device posture detector, and taking the direction of change of the image differences as the direction of change of the posture of the target device.
Further, step S203 includes:
Step S21: acquiring the ith image frame of the kth device posture detector in the detector sequence L, acquiring the image area Di of the target device in the ith image frame, and performing numerical conversion on the pixels in the image area Di to obtain the numerical information of each pixel, wherein the numerical conversion mode comprises grayscale conversion or RGB conversion, thereby obtaining the image value Vi of the image area Di;
The image value is the numerical information corresponding to each pixel after the image in the image frame undergoes numerical conversion. To lighten the computation and judgment load on the edge detection device, the image in the image frame is first segmented and the image information is converted into numerical information, which further improves the judgment and response speed of the edge detection device;
Step S22: acquiring the boundary ei of the image area Di and the position pti of ei in the ith image frame, acquiring, with a boundary identical to ei, the image area Di+1 at the position pti+1 identical to pti in the (i+1)th image frame of the kth device posture detector, and performing the same pixel numerical conversion on the image area Di+1 as on Di, to obtain the image value Vi+1 of the image area Di+1;
Step S23: subtracting the pixel values at the same positions in the image value Vi from those in the image value Vi+1 to obtain the image difference Vdif, obtaining the pixel Vmax corresponding to the maximum value and the pixel Vmin corresponding to the minimum value in the image difference Vdif, and drawing a line from Vmax to Vmin to obtain a change vector f, wherein the direction of the change vector f, from Vmax to Vmin, is the direction of posture change of the target device under the view of the kth device posture detector;
When the image is converted into numerical values, the value of each pixel microscopically represents what is visible at that pixel; when the target device shifts, the values of the pixels at the same positions also change, from which the direction of movement of the target device under the viewing angle of the device posture detector is obtained;
The direction under the viewing angle of the device posture detector specifically means whether the movement of the target device before and after displacement is closer to the first direction of that device posture detector or closer to the direction opposite to the first direction;
When the posture of the target device changes once, the target device's movement directions differ under the respective viewing angles of the different device posture detectors; the movements seen under these viewing angles verify one another, which solves the problem that a single device may misjudge the movement direction of the target device, or that the movement is difficult to judge under certain viewing angles.
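The frame-difference procedure of steps S21 to S23 can be sketched as follows. This is a minimal illustration assuming the numerical conversion is a grayscale conversion that has already produced two 2-D numpy arrays for the same image area in consecutive frames; the function name and array shapes are illustrative, not taken from the source.

```python
import numpy as np

def change_vector(v_i: np.ndarray, v_i1: np.ndarray) -> tuple:
    """Step S23 sketch: image difference Vdif = Vi+1 - Vi, then a
    change vector f drawn from the pixel Vmax (largest difference)
    to the pixel Vmin (smallest difference)."""
    v_dif = v_i1.astype(int) - v_i.astype(int)                       # Vdif
    r_max, c_max = np.unravel_index(np.argmax(v_dif), v_dif.shape)   # Vmax
    r_min, c_min = np.unravel_index(np.argmin(v_dif), v_dif.shape)   # Vmin
    # f points from Vmax to Vmin (x = column, y = row in image coordinates)
    return (float(c_min - c_max), float(r_min - r_max))

# A bright feature at column 1 in frame i that appears at column 3 in frame i+1:
frame_i = np.zeros((3, 5)); frame_i[1, 1] = 100
frame_i1 = np.zeros((3, 5)); frame_i1[1, 3] = 100
f = change_vector(frame_i, frame_i1)   # -> (-2.0, 0.0)
```

Note that, as defined in step S23, f points from the region the brightness moved into back toward the region it left; the direction is consistent as long as all detectors apply the same convention.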
Further, step S300 includes:
Step S301: acquiring the direction dirk in which the kth device posture detector in the detector sequence L judges the posture of the target device to have changed, together with the first direction dir0, and acquiring the projection dir'k of dirk on dir0; when dir'k points the same way as dir0, the kth device posture detector sends a positioning data packet along the first direction, and when dir'k points opposite to dir0, the kth device posture detector sends the positioning data packet along the direction opposite to the first direction; a positioning coefficient is set in the data packet;
Through the steps of obtaining the vector and its projection, motion under the two-dimensional image view is simplified into a one-dimensional direction of motion, which suits the decentralized judgment mode of edge computing;
Step S302: each device posture detector forwards a received positioning data packet along that packet's sending direction; a unit positioning coefficient a0 is set, the initial value of a packet's positioning coefficient is a0, and each time the positioning data packet is forwarded by a device posture detector, its positioning coefficient increases by one unit positioning coefficient a0;
Step S303: each device posture detector obtains the sending direction of the positioning data packet; when the sending direction is the same as the first direction, the positioning coefficient is stored locally, and when the sending direction is opposite to the first direction, a negative sign is added before the positioning coefficient before it is stored locally;
After a device posture detector judges the direction of the target device's movement, it sends a positioning data packet in the direction it has judged; the positioning coefficient changes as the packet propagates, and the closer a detector's aggregated positioning coefficient is to 0, the closer that detector is to the direction in which the target device has moved.
Further, step S400 includes:
Step S401: after the positioning data packets generated at either endpoint of the detector sequence L have been transmitted through to the other endpoint, the kth device posture detector obtains all locally cached positioning coefficients, sums them, and takes the absolute value of the sum to obtain the direction comparison value Rk of the kth device posture detector;
Step S402: every device posture detector broadcasts its direction comparison value to all device posture detectors in the detector sequence L, and the device posture detector with the smallest direction comparison value is taken as the target detector;
Step S403: the target detector sends a reminder of the target device's change to the relevant management personnel.
To better implement the above method, an intelligent analysis and management system for substation equipment based on edge computing is also provided, the system comprising:
a recognition model management subsystem and an edge recognition subsystem, wherein the recognition model management subsystem is used for training and managing the recognition model of the target device, and the edge recognition subsystem is used for managing the detector sequence and recognizing posture changes of the target device; the edge recognition subsystem is formed by sequentially connecting a plurality of identical device posture detectors in a network topology of bus-type structure, and each device posture detector comprises: an image comparison module for comparing image differences to judge the direction of image change, a communication module for data interaction with the other device posture detectors through an edge network, and a positioning analysis module for judging the direction of posture change of the target device.
Further, the image comparison module comprises: a target recognition unit for recognizing the image of the target device, a numerical conversion unit for converting images into numerical values, a data comparison unit for comparing the numerical values of the images of two adjacent frames, and a direction judgment unit for generating the change vector and judging the direction of posture change of the target device;
Further, the communication module comprises: a positioning data packet generation unit for generating, according to the local judgment, a positioning data packet containing a positioning coefficient; a positioning data packet forwarding unit for forwarding positioning data packets not generated locally and adjusting the positioning coefficients in the forwarded packets; and an information interaction unit for data interaction with each device posture detector in the detector sequence;
Further, the positioning analysis module comprises: a data caching unit for storing the positioning coefficient of each positioning data packet, a comparison unit for comparing direction comparison values and judging whether the local detector is the target detector, and an information prompt unit for sending reminders of target device changes to the relevant management personnel.
Compared with the prior art, the invention has the following beneficial effects. Through a decentralized edge computing system, the invention monitors changes of equipment state in a substation, such as device displacement or device toppling, and can therefore handle the management of electrical equipment at large scale. By placing data processing capability closer to the data source and the terminals, the invention reduces data transmission delay and achieves faster data processing and response. Only necessary data is exchanged with the central server, which reduces network traffic and saves bandwidth cost; the system can continue to operate even when the network is down or the connection to the central server is lost, improving its reliability and stability; and the load on the cloud is reduced, improving the scalability and stability of the whole system.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. The described embodiments are apparently only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to FIG. 1, FIG. 2, FIG. 3 and FIG. 4, the present invention provides the following technical solutions:
Step S100: in a substation, taking a certain electrical device as a target device, collecting images of the target device from various angles to form a training set, and training a recognition model of the target device through a target recognition algorithm;
wherein, step S100 includes:
Step S101: combining viewing angles and distances to obtain a plurality of image acquisition positions, acquiring images of the target device, and marking device features of the target device in the images to form an image training set, wherein the device features comprise appearance information that distinguishes devices of the same type;
Step S102: training a recognition model of the target device through a target recognition algorithm, wherein the recognition model can recognize the target device, and the device features of the target device, while the device running the model is offline.
Step S200: deploying the recognition model to a plurality of device posture detectors, connecting the plurality of device posture detectors in sequence, collecting video information of the target device with any of the device posture detectors, and judging the direction of change of the posture of the target device from the change of the target device's image across image frames in the video information;
wherein, step S200 includes:
Step S201: forming a detector sequence L from n device posture detectors;
Step S202: each device posture detector acquires image information of the substation, identifies the target device in the image information through the recognition model, and obtains the image position of the target device within the picture of that detector's viewing angle;
Step S203: sampling the video stream in the device posture detector to obtain image frames containing the target device's image, comparing the differences between images at the same image position in image frames sampled by the same device posture detector, and taking the direction of change of the image differences as the direction of change of the posture of the target device;
Wherein step S203 includes:
Step S21: acquiring the ith image frame of the kth device posture detector in the detector sequence L, acquiring the image area Di of the target device in the ith image frame, and performing numerical conversion on the pixels in the image area Di to obtain the numerical information of each pixel, wherein the numerical conversion mode comprises grayscale conversion or RGB conversion, thereby obtaining the image value Vi of the image area Di;
Step S22: acquiring the boundary ei of the image area Di and the position pti of ei in the ith image frame, acquiring, with a boundary identical to ei, the image area Di+1 at the position pti+1 identical to pti in the (i+1)th image frame of the kth device posture detector, and performing the same pixel numerical conversion on the image area Di+1 as on Di, to obtain the image value Vi+1 of the image area Di+1;
In the first embodiment, the ith image frame contains two image areas D1i and D2i whose positions contain two different images of the same target device: D1i contains an image C1 and D2i contains an image C2. When the target device rotates, areas are selected in the (i+1)th image frame according to the ith frame: an image area D1i+1 is obtained at the position corresponding to D1i and an image area D2i+1 at the position corresponding to D2i; the pixels corresponding to C1 in D1i+1 decrease, while the pixels corresponding to C2 in D2i+1 increase;
In the second embodiment, the image C3 of the target device lies in the area D1i of the ith image frame, while in the (i+1)th image frame the target device's image lies in the area D2i+1. Comparing D1i with D1i+1, the pixels of the image C3 decrease; comparing D2i with D2i+1, the pixels of the image C3 increase;
Step S23: subtracting the pixel values at the same positions in the image value Vi from those in the image value Vi+1 to obtain the image difference Vdif, obtaining the pixel Vmax corresponding to the maximum value and the pixel Vmin corresponding to the minimum value in the image difference Vdif, and drawing a line from Vmax to Vmin to obtain a change vector f, wherein the direction of the change vector f, from Vmax to Vmin, is the direction of posture change of the target device under the view of the kth device posture detector;
Under the viewing angle of the device posture detector, the midpoint of the detector's view is taken on a straight line parallel to the first direction of the device posture detector; when the first direction points to the right of the device posture detector's view, the opposite of the first direction points to its left;
A plane rectangular coordinate system is established at the monitoring viewing angle of the device posture detector, with the first direction horizontal, the midpoint at (0, 0), and direction detection points at (-1, 0) and (1, 0). When the vector f points from (5, 5) to (3, 4), its end point is closer to (1, 0), i.e., the movement direction is toward the right side of the picture;
Step S300: setting a first direction according to the connection direction of the device posture detectors, each device posture detector sending a positioning data packet to the next device posture detector according to the direction of posture change of the target device that it has judged;
Wherein, step S300 includes:
Step S301: acquiring the direction dirk in which the kth device posture detector in the detector sequence L judges the posture of the target device to have changed, together with the first direction dir0, and acquiring the projection dir'k of dirk on dir0; when dir'k points the same way as dir0, the kth device posture detector sends a positioning data packet along the first direction, and when dir'k points opposite to dir0, the kth device posture detector sends the positioning data packet along the direction opposite to the first direction; a positioning coefficient is set in the data packet;
The point (0, 0) is connected to the end point (3, 4) of f to obtain the vector dir = (3, 4);
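The projection test of step S301 reduces the two-dimensional vector dir to a one-dimensional send direction. A minimal sketch follows; the helper name and the treatment of a zero projection (interpreted here as "no packet sent") are assumptions of this illustration, not details from the source.

```python
import numpy as np

def send_direction(dir_k, dir0=(1.0, 0.0)) -> int:
    """Step S301 sketch: project the judged direction dirk onto the
    first direction dir0 and keep only its sign. Returns +1 (send the
    positioning data packet along the first direction), -1 (send it
    the opposite way), or 0 (no horizontal component)."""
    return int(np.sign(np.dot(dir_k, dir0)))

send_direction((3, 4))    # the dir from the example above -> +1 (first direction)
send_direction((-2, -1))  # -> -1 (opposite to the first direction)
```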
Step S302: each device posture detector forwards a received positioning data packet along that packet's sending direction; a unit positioning coefficient a0 is set, the initial value of a packet's positioning coefficient is a0, and each time the positioning data packet is forwarded by a device posture detector, its positioning coefficient increases by one unit positioning coefficient a0;
Step S303: each device posture detector obtains the sending direction of the positioning data packet; when the sending direction is the same as the first direction, the positioning coefficient is stored locally, and when the sending direction is opposite to the first direction, a negative sign is added before the positioning coefficient before it is stored locally;
Step S400: each device posture detector aggregates the change-direction information in the received positioning data packets, determines the device posture detector toward which the posture change of the target device points, and sends a reminder to the relevant management personnel;
Wherein, step S400 includes:
Step S401: after the positioning data packets generated at either endpoint of the detector sequence L have been transmitted through to the other endpoint, the kth device posture detector obtains all locally cached positioning coefficients, sums them, and takes the absolute value of the sum to obtain the direction comparison value Rk of the kth device posture detector;
Step S402: every device posture detector broadcasts its direction comparison value to all device posture detectors in the detector sequence L, and the device posture detector with the smallest direction comparison value is taken as the target detector;
Step S403: the target detector sends a reminder of the target device's change to the relevant management personnel.
FIG. 4 is a top view of the target device's motion and the detector array; the solid arrow shows the first direction, and the detector sequence L comprises five device posture detectors B1, B2, B3, B4 and B5 arranged along the first direction; the dashed arrow shows the direction of the target device's posture change: at the ith frame of the device posture detectors the target device is at position Qi in the plane, and at the (i+1)th frame it is at position Qi+1;
For B1 and B2, the target device moves to the right, consistent with the first direction; for B3, B4 and B5, the target device moves to the left, opposite to the first direction;
The positioning data packet generated by B1 passes through B2, B3, B4 and B5 in turn; together with the local coefficient at B1, the stored positioning coefficients are, in order: +a0, +2a0, +3a0, +4a0 and +5a0;
The positioning data packet generated by B2 passes through B3, B4 and B5 in turn; together with the local coefficient at B2, the stored positioning coefficients are, in order: +a0, +2a0, +3a0 and +4a0;
The positioning data packet generated by B5 passes through B4, B3, B2 and B1 in turn; together with the local coefficient at B5, the stored positioning coefficients are, in order: -a0, -2a0, -3a0, -4a0 and -5a0;
The positioning data packet generated by B4 passes through B3, B2 and B1 in turn; together with the local coefficient at B4, the stored positioning coefficients are, in order: -a0, -2a0, -3a0 and -4a0;
The positioning data packet generated by B3 passes through B2 and B1 in turn; together with the local coefficient at B3, the stored positioning coefficients are, in order: -a0, -2a0 and -3a0;
After transmission is complete, the direction comparison value of B1 is 11a0, that of B2 is 6a0, that of B3 is a0, that of B4 is 4a0 and that of B5 is 8a0; the posture change of the target device therefore points to B3.
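The packet propagation and aggregation of steps S302 to S402 can be reproduced in a short simulation. The sketch below assumes a0 = 1 and that every detector has judged a direction (+1 for motion along the first direction, -1 for motion against it); function and variable names are illustrative only.

```python
def direction_comparison_values(signs, a0=1):
    """Each detector k generates a packet with initial coefficient a0,
    sent along (+1) or against (-1) the first direction; every hop adds
    a0; each receiver (and the generator itself) stores the coefficient,
    negated when the packet travels against the first direction.
    Returns the direction comparison values Rk = |sum of stored values|."""
    n = len(signs)
    stored = [[] for _ in range(n)]          # local caches of signed coefficients
    for src, s in enumerate(signs):
        coeff, k = a0, src
        while 0 <= k < n:
            stored[k].append(coeff if s > 0 else -coeff)
            coeff += a0                      # incremented at each forwarding
            k += 1 if s > 0 else -1
    return [abs(sum(c)) for c in stored]

# The FIG. 4 example: B1 and B2 see rightward motion, B3-B5 leftward.
rk = direction_comparison_values([+1, +1, -1, -1, -1])
# rk == [11, 6, 1, 4, 8]; the minimum is at B3, the target detector.
```

The simulation matches the worked values above (11a0, 6a0, a0, 4a0, 8a0), confirming that the detector nearest the point the posture change points toward ends up with the coefficient sum closest to zero.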
The system comprises:
a recognition model management subsystem and an edge recognition subsystem, wherein the recognition model management subsystem is used for training and managing the recognition model of the target device, and the edge recognition subsystem is used for managing the detector sequence and recognizing posture changes of the target device;
The edge recognition subsystem is formed by sequentially connecting a plurality of identical device posture detectors in a network topology of bus-type structure, and each device posture detector comprises: an image comparison module, a communication module and a positioning analysis module;
The image comparison module is used for comparing image differences to judge the direction of image change, and comprises: a target recognition unit for recognizing the image of the target device, a numerical conversion unit for converting images into numerical values, a data comparison unit for comparing the numerical values of the images of two adjacent frames, and a direction judgment unit for generating the change vector and judging the direction of posture change of the target device;
The communication module is used for data interaction with the other device posture detectors through an edge network, and comprises: a positioning data packet generation unit for generating, according to the local judgment, a positioning data packet containing a positioning coefficient; a positioning data packet forwarding unit for forwarding positioning data packets not generated locally and adjusting the positioning coefficients in the forwarded packets; and an information interaction unit for data interaction with each device posture detector in the detector sequence;
The positioning analysis module is used for judging the direction of posture change of the target device, and comprises: a data caching unit for storing the positioning coefficient of each positioning data packet, a comparison unit for comparing direction comparison values and judging whether the local detector is the target detector, and an information prompt unit for sending reminders of target device changes to the relevant management personnel.
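The module and unit layout above can be pictured with a small class sketch; the class and attribute names below are hypothetical, and only the positioning analysis module's caching and comparison behaviour is fleshed out.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PositioningAnalysisModule:
    cache: List[int] = field(default_factory=list)   # data caching unit

    def store(self, signed_coefficient: int) -> None:
        self.cache.append(signed_coefficient)

    def direction_comparison_value(self) -> int:
        # sum of cached signed coefficients, absolute value (step S401)
        return abs(sum(self.cache))

@dataclass
class DevicePostureDetector:
    index: int
    positioning: PositioningAnalysisModule = field(
        default_factory=PositioningAnalysisModule)

def target_detector(sequence: List[DevicePostureDetector]) -> DevicePostureDetector:
    # comparison unit: the smallest direction comparison value wins (step S402)
    return min(sequence,
               key=lambda d: d.positioning.direction_comparison_value())
```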
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present invention, and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of their technical features. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.